Hey guys,
I really hope someone might be able to give some suggestion/opinion on this, as to me it makes no logical sense why this fails... could be because of MDPS, not really sure. Anyway, to make a long story short, the customer is replacing their existing four 15000 firewalls with four new 9700 appliances (2 separate clusters). We did a migrate export from the existing mgmt, imported it to the new one, connected both new clusters, and built a basic policy after setting up MDPS, with ONLY 2 interfaces active (mgmt and sync).
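For anyone curious, the management move itself was just the standard migrate_server export/import flow, roughly along these lines (target version and path are placeholders, not necessarily the exact values we used):
# On the existing management (expert mode):
cd $FWDIR/scripts
./migrate_server export -v R82 /var/log/mgmt_export.tgz
# On the new management, after a clean install of the target version:
./migrate_server import -v R82 /var/log/mgmt_export.tgz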
But here is the problem. Though the policy installs fine, fw1 shows as active and fw2 as down (same on both clusters). We just assigned 169.254.x.x IPs for sync, since the customer wanted to give it an IP from the same mgmt subnet, but that cannot work.
Oddly enough, pings to the sync IP work from both members, but fw2 always shows as down... we tried cphastop/cphastart, cprestart, a reboot, and disabling/re-enabling the cluster, no dice.
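For reference, the basic state checks we kept running on each member were along these lines (a rough sketch from memory; with MDPS these go in the dplane context, and eth1 is just a placeholder for the sync interface):
# Cluster state and per-interface CCP status:
cphaprob state
cphaprob -a if
cphaprob syncstat
# Confirm CCP (UDP 8116) is actually arriving on the sync interface:
tcpdump -nni eth1 udp port 8116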
Worked with TAC; they kept telling us it's a layer 2 issue, but I can't really understand how that can be the problem. The client even verified everything on their Fortigates as well, all is allowed, and even he was surprised they were "forcing" the layer 2 argument.
Thoughts?
Thanks as always!
Please confirm the version & JHF?
I recall there being limitations with MDPS whilst running in UPPAK mode...
R81.20 jumbo 120
From memory the limitation wasn't resolved until R82+, so if MDPS is a requirement you will likely need to switch to KPPAK (or upgrade), but let me find the reference to confirm.
I will ask the client to confirm, just to be sure.
My apologies, it is also fixed in an R81.20 JHF.
No need to apologize Chris, all good mate! I emailed the customer, so let's see what he comes back with.
I was mistaken, they are on R82 jumbo 40. Here is what he sent.
The customer emailed me something interesting, which we will check on a remote session soon. He was wondering if the CCP port might not be tied to the dplane, which makes sense to me. Will update once we finish the remote.
I even had the client run the below on both firewalls on the dplane, then reboot; no change:
add mdps task port 8116 protocol udp
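To check where CCP actually ends up, something like this from expert mode should show whether the task was saved and whether UDP 8116 is flowing on the data plane sync interface (eth1 is just a placeholder):
# Verify the added mdps task was saved:
clish -c "show configuration mdps" | grep 8116
# Sniff for CCP on the sync interface from the data plane context:
dplane
tcpdump -nni eth1 udp port 8116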
Quick update...I am trying to replicate this in the lab to see if it can be fixed.
@Chris_Atkinson @Vincent_Bacher @Ilya_Yusupov @Gennady @Bob_Zimmerman
Hey guys,
Just to let you know what I did in the lab. I created a brand new R82 cluster with jumbo 60, set up MDPS the same way the customer did, rebooted both members, and pushed a policy with a few rules. The cluster shows as active/standby and even the web UI is accessible, with sync interface IPs of 169.254.169.111 and .112, no issues.
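For reference, the lab sync IPs were just assigned directly in clish, something like this (eth1 and the mask are per my eve-ng topology, adjust as needed):
set interface eth1 ipv4-address 169.254.169.111 mask-length 24
set interface eth1 state on
save config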
Honestly, at this point I'm not sure why this is failing for the client... In a way, I was hoping I would get the same problem, but that did not happen.
Oh well...
Saga continues :-)
I agree with what Bob has previously said. However, are we sure the customer completed the MDPS setup steps as described, including the reboot of both members afterwards?
Hey Chris,
Yes, that was done; TAC was even on the Zoom remote to verify. Btw, what I did in the lab is exactly the same commands as below, just with Mgmt and Sync replaced by eth0 and eth1, as it's in eve-ng.
set mdps interface Sync sync on
set mdps interface Mgmt management on
set mdps mgmt plane on
set mdps resource cpus 4
set mdps mgmt resource on
Hey Chris,
Happy weekend 🙂
Just for context, below is what I did as a test in the lab; I can still access the web UI fine and there is no cluster issue. I will have a call with the customer Monday, so will 100% compare everything.
backup member:
[Expert@CVH2:dplane]# mplane
Context set to Management Plane
[Expert@CVH2:mplane]# curl_cli -k google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
[Expert@CVH2:mplane]# ping 9.9.9.9
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=49 time=19.2 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=49 time=18.1 ms
^C
--- 9.9.9.9 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 18.164/18.687/19.210/0.523 ms
[Expert@CVH2:mplane]# cphaprob roles
Please run on dplane context roles
[Expert@CVH2:mplane]# dplane
Context set to Data Plane
[Expert@CVH2:dplane]# cphaprob roles
ID Role
1 Master
2 (local) Non-Master
[Expert@CVH2:dplane]#
[Expert@CVH2:dplane]# clish
CVH2:dplane> show configuration mdp
CVH2:dplane> show configuration mdps
add mdps task port 256 protocol tcp
add mdps task port 257 protocol tcp
add mdps task port 263 protocol tcp
add mdps task port 1129 protocol tcp
add mdps task port 2010 protocol tcp
add mdps task port 5432 protocol tcp
add mdps task port 8989 protocol tcp
add mdps task port 18181 protocol tcp
add mdps task port 18183 protocol tcp
add mdps task port 18184 protocol tcp
add mdps task port 18187 protocol tcp
add mdps task port 18191 protocol tcp
add mdps task port 18192 protocol tcp
add mdps task port 18195 protocol tcp
add mdps task port 18210 protocol tcp
add mdps task port 18211 protocol tcp
add mdps task port 18264 protocol tcp
add mdps task process AutoUpdater
add mdps task process DAService
add mdps task process celery_cluster
add mdps task process clishd
add mdps task process cloningd
add mdps task process confd
add mdps task process confp_docs
add mdps task process cprid
add mdps task process exl_detectiond
add mdps task process gaia_notifier_server
add mdps task process gcopyd
add mdps task process gexecd
add mdps task process httpd2
add mdps task process lb_configd
add mdps task process license_syncd
add mdps task process lldpd
add mdps task process netscoutd
add mdps task process ntpd
add mdps task process rconfd
add mdps task process redis_brokerd
add mdps task process rest_api_docs
add mdps task process rest_api_run
add mdps task process run_confp
add mdps task process sgm_lsp
add mdps task process sgm_pmd
add mdps task process snmpd
add mdps task process snmpmonitor
add mdps task process start_alerts
add mdps task process start_celery
add mdps task process start_redis
add mdps task process stats_streamerd
add mdps task service cpri_d
add mdps task service ntpd
add mdps task service sshd
add mdps task service syslog
add mdps task service xinetd
add mdps task address avupdates.checkpoint.com
add mdps task address cws.checkpoint.com
add mdps task address dl3.checkpoint.com
add mdps task address productcoverage.checkpoint.com
add mdps task address te.checkpoint.com
add mdps task address teadv.checkpoint.com
add mdps task address updates.checkpoint.com
add mdps task address usercenter.checkpoint.com
set mdps interface eth1 sync on
set mdps interface eth0 management on
set mdps mgmt plane on
set mdps resource cpus 1
set mdps mgmt resource on
Special thanks to @Ilya_Yusupov for reaching out offline about this. I will have a remote with the customer on Monday and show him my lab setup; let's hope we can get this sorted out. Will update after the remote.
Hey boys,
I attached one of the cluster members' config from my lab; the MDPS config is exactly the SAME on the other member. Maybe one of you great minds can see something I might be missing. Mind you, in my lab everything works, just not for the customer. I will get the MDPS part from their end tomorrow, but from what I checked so far, it looks 100% right.
Hey guys,
Latest update. Based on my email correspondence with @Ilya_Yusupov, I matched my lab cluster to have 32 CPUs and 4 assigned to the mplane (same as the customer) and will verify on the remote Monday whether they have dynamic_split enabled or not. I believe it should be on by default, but will confirm for sure.
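For the check itself, I am assuming the standard Dynamic Balancing tooling here, so something along these lines from expert mode (exact output wording can differ per version):
# Print Dynamic Balancing (dynamic split) status:
dynamic_balancing -p
# Current CoreXL firewall instance / SND split:
fw ctl multik stat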
Latest update:
Had a remote with the customer and compared the config; all looks 100% right, and dynamic_split is indeed enabled by default. The unfortunate thing is that these fws don't have Internet access yet, and we wanted to upload and run the files TAC asked us for; even that was not possible, since even with the policy unloaded we can't WinSCP or FileZilla into them, though the /bin/bash shell is on. I don't have any of these issues in my lab at all.
I asked TAC about it, let's see what they say.
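In the meantime, this is the kind of thing I still want to capture on the gateway while reproducing a WinSCP attempt (10.1.1.100 below is just a placeholder for the admin PC's IP):
# Confirm the policy really is unloaded on this member:
fw stat
# Watch for kernel drops from the admin PC while retrying the transfer:
fw ctl zdebug + drop | grep 10.1.1.100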
Even scp does not work? That’s strange.
Did you try to create a dedicated SCP user?
We did try that as well, exact same issue.
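For reference, by dedicated SCP user I mean the usual Gaia approach, something along these lines (username is just an example, not necessarily the exact commands we used):
add user scpadmin uid 2600 homedir /home/scpadmin
set user scpadmin realname "SCP only"
set user scpadmin shell /usr/bin/scponly
add rba user scpadmin roles adminRole
set user scpadmin password
save config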
Then I would increase the session log level and check; maybe there's something useful inside:
https://winscp.net/eng/docs/ui_pref_logging
Not a bad idea, agree.
And do the same on the device side as well, but be careful.
# Temporarily set SSH LogLevel to DEBUG3
vi /etc/ssh/sshd_config
# → set LogLevel DEBUG3
# Reload SSH daemon configuration without disconnecting sessions
service sshd reload
# View live SSH logs
tail -f /var/log/secure
# After debugging, reset LogLevel to INFO
vi /etc/ssh/sshd_config
# → set LogLevel INFO
service sshd reload
My gut feeling tells me all these issues have to do with something network-related, I just can't pinpoint exactly where...
But the SSH connection is stable?
That's right, never an issue with SSH; it works whether the policy is applied or not.
Then I would actually start debugging with SCP on the client side first, and then proceed on the server side.
Will have a call with my colleague who used to teach CP courses; I want to get his opinion first. Let me see what he says.
Since they want the brand new version installed, I will have the client upgrade to R82.10 and see what happens, and we will do a remote with TAC if that fails. Will update once done, hopefully tomorrow.
Hey guys,
We got all this working by upgrading the clusters to R82.10. Not sure why that worked, as the R82.10 release notes don't mention anything about MDPS, but either way, I'm so happy it's fine, and the customer was very relieved. The web UI is fine, as well as the cluster state.
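For completeness, the post-upgrade sanity checks came back clean, along the lines of (from the dplane context):
cphaprob state
cphaprob -a if
clish -c "show configuration mdps"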
Tx for everyone's help!!