I am upgrading the hardware of a cluster made up of two open server gateways. The manager has a license for 10 gateways, and it currently manages 10 gateways.
Is it possible to have a cluster made up of two gateways with different hardware?
So what process would you recommend for the migration?
I was thinking of three options:
1) Shut down one of the old gateways, connect one of the new gateways with the configuration of the old gateway, establish SIC, push policies and fail over. Finally, repeat the process for the second old gateway.
Is this possible, given that we will have a cluster made up of gateways with different hardware?
2) Add the two new gateways with new IP addresses (the cluster will be made up of four gateways at this stage), fail over to them and shut down the old gateways.
Is this possible, given that we will have 12 gateways and a license for only 10?
3) Shut down the old gateways, connect the new gateways, establish SIC and push the policies.
This is the least preferred procedure as it will require an outage.
You can ask CP for temporary licenses that will allow you to manage more gateways. I am sure they will accommodate.
I would go with option 1. Depending on the release you are running, you can actually achieve a seamless upgrade. Check "ClusterXL upgrade methods and paths" in sk107042.
HW-wise (assuming you are on a fairly recent SW release like R77.30 or R80.10), it all depends on whether you use CoreXL. If you do, and you are changing the number of FWK instances, then connection sync is not possible. You also have to take care of interface naming (if you use open servers, you can keep the same interface names, which makes life easier).
The easiest approach is to keep the new members on the same SW level, but with the latest SW releases it is also possible to upgrade to a newer version fairly seamlessly during the process.
We have gone from appliances to chassis on a VSX gateway (downgrading from R77.30 to R76) with a single ping packet lost.
There are too many open questions to give you an exact answer, but it's not complicated.
Good luck!
One comment that I forgot: you can allow "out of state" connections for the cutover window to minimise the outage. Once you have built the second member, and just before pushing policy, set it to allow OOS connections. The failover to the new member will then be less noticeable. Once you are running on the new firewall, re-instate the setting by pushing policy again.
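For reference, the "Drop out of state TCP packets" toggle lives under Global Properties > Stateful Inspection in the GUI. If you prefer doing it from the management server CLI, a minimal dbedit sketch along these lines should flip the same property; the property name is my assumption, inferred from the fw_drop_out_of_state_udp property mentioned later in this thread:

# on the management server; push policy afterwards for it to take effect
dbedit -local
modify properties firewall_properties fw_drop_out_of_state_tcp false
update properties firewall_properties
quit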
Here are the steps; not in absolute detail, but enough to tweak them to your requirements. I have tested these on four different cluster HW+SW upgrades with a single ping lost.
The assumption is that interface names do not change and that you reuse the same physical cables to the switches! You will need extra steps to amend things if they do. I have excluded obvious steps like backups.
Preparation:
Start of the upgrade:
FW2 upgrade:
FW1 upgrade:
Finalise:
Check sim affinity for SecureXL
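If it helps, a couple of standard commands for checking affinity on the R77.x releases discussed here (output format varies by version):

# list SecureXL interface affinities
sim affinity -l
# full affinity map: interfaces, daemons and CoreXL instances
fw ctl affinity -l -a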
And that's it - go and enjoy your beer!
ROLLBACK
Connect all cables back to the old firewalls.
Connect with SSH and run cpstart on both (see the command sketch after this list).
Re-enable stateful inspection in Global Properties.
Reset the cluster object's ClusterXL active member to the original setting.
Set policy installation to target both cluster members.
Check and update the appliance version in the GUI.
Push policy
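A minimal sketch of the SSH part of the rollback, using standard commands:

# on each old member, bring Check Point services back up
cpstart
# confirm cluster state: one Active, one Standby
cphaprob stat
# confirm the installed policy name and timestamp look right
fw stat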
Thanks Kaspars, I hadn't thought of the OOS. Good idea.
I was thinking about your VSX implementation, and wondering how you would design the network interfaces. In a Check Point cluster you typically have three separate types of interfaces: cluster interfaces, non-monitored private interfaces and sync interfaces.
Did you keep these interfaces separated in a VSX environment where your VSX gateway runs on two separate physical boxes? I guess that in a VSX environment it is still a good idea to have separate physical interfaces for sync, cluster and non-monitored (mgmt) traffic where possible.
Hey Luis, I'm not entirely sure I understood your question about VSX. I typically take a slightly different approach with VSX HW+SW upgrades:
LAB part
PROD part
ROLLBACK
That should give you an idea of the approach I have been using for years now. You will need a lot of small tweaks to handle your own environment.
Again, you can always PM me.
Merry Xmas!
It is about the network interfaces. With a physical appliance you would typically have dedicated interfaces for mgmt and sync, and then cluster interfaces carrying multiple VLANs for data. I was wondering whether you keep that design with dedicated interfaces, or end up with sync, mgmt and data on the same trunk in a VSX environment.
Hi Kaspars, to send you a PM I need you to follow me.
Hi Kaspars,
We don't have a LAB environment and we are planning to upgrade our VSX cluster from R77.20 to R77.30. However, we also need to replace the CP 4200 GW appliances with 4800s. We have bonding (bond0) configured with interfaces that have different names between the 4200 and the 4800. We are running MDS R80.10 and I was thinking:
My main concern is the bond that needs to be replaced with another bond that has different interface names between the two appliances.
George
Actually it's easier than that if the only interfaces in use are Mgmt, Sync and bond0 (the only production interface).
You don't need to rename it as part of the upgrade, as the management server does not care which interface names form the bond. For example:
OLDGW: bond0 = eth1+eth2
NEWGW: bond0 = eth1-01 + eth1-02
You don't need to worry about the eth1 > eth1-01 and eth2 > eth1-02 change, as that's invisible to the VSX object; it only "sees" bond0.
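For illustration, a minimal Gaia clish sketch of building bond0 on the new appliance from its own NIC names (the interface names are just the example above; adjust to your box):

add bonding group 0
add bonding group 0 interface eth1-01
add bonding group 0 interface eth1-02
set interface bond0 state on
save config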
Otherwise it should work! Good luck
Aaaah,
So just remove the interfaces that do not exist on the new GW, add those that are missing, then do a vsx_util reconfigure.
Can I do cphacu start to move traffic over, or just do a cpstop on the old GW so that traffic fails over to the new GW?
George Dellota
Not too sure I understood you correctly, so it might be easier if you send a screenshot of your VSX object's physical interfaces.
For example, here:
If the interface names inside bond1 and bond2 changed on the new appliance, it would not matter; no special steps (vsx_util change_interfaces) are needed during the upgrade.
But it would matter if the eth2-0x interface names changed; then you would run vsx_util change_interfaces just as you described.
BTW, I haven't had time to dig into it, but we had to run the vsx_util change_interfaces command on the same interface twice, despite it reporting successful execution the first time round. I discovered this accidentally by searching for the old interface name after running the command the first time, and found some references to it still present in the DB. Running the command a second time actually "fixes" it; I wish I had an explanation. The second run even states that the previous run did not fully complete and asks whether you want to complete it: answer yes.
Just remember to back up your management server before you start, for an easy rollback!
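For anyone following along: vsx_util runs from expert mode on the management/MDS and is interactive, so there are no flags to script. Roughly, the flow looks like this (prompts paraphrased from memory; wording differs per release):

vsx_util change_interfaces
# prompts for the management / Main Domain Management Server IP
# and administrator credentials
# then asks whether to apply changes to the management database only,
# or to the database and the VSX gateways
# then asks for the old interface name and the new interface name
# run it a second time per the note above, then reconfigure as discussed
vsx_util reconfigure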
Okay, I understand what you mean about the interfaces within bond0. This is how our current 4200 appliance interfaces look:
When I replace the cluster with a 4800, I’ll configure each GW with bond0 and add eth1 and eth2.
I would have to run vsx_util change_interfaces (twice) to delete eth1-02, eth1-03 and eth1-04 and add the new interfaces eth4-eth7, then do a vsx_util reconfigure to "sync" the MDS and GW settings.
I'll uncheck "Drop out of state TCP packets" in the Global Properties before the swap of OLDGW2.
Then do a hard cut on the running OLDGW2, traffic (hopefully) fails over to NEWGW1, and redo the previous steps on NEWGW2.
Sounds about right?
Sounds about right!
Hi Opal,
If you replace a 12600 with a Check Point Open Server, the interface names of the two appliances might be different; for example, the Check Point appliance might use module interfaces like eth1-01 / eth3-02. What will happen midway through, when you install the new standby, move the cables from the existing cluster's eth1-01 to eth3 (open server), and establish SIC? When you then go into the topology table and "get interfaces", will it fetch the new eth3? That would now be mismatched against the active member.
Clustering is only supported with identical hardware.
You should be able to get a temporary license from either UserCenter or your Check Point SE to support the management of additional gateways.
Thanks Dameon,
It absolutely makes sense to support clustering only with identical hardware. But what about when an open server requires a hardware upgrade? I guess Check Point supports, or at least should support, one such procedure, right? Is there any other procedure that Check Point would recommend?
Option 1 may not be ideal, but I haven't been able to come up with anything better. I think option 1 may be better in terms of minimizing the service outage, and it also provides an easy/quick rollback if required.
Sync won't work (or could potentially have unexpected behavior) unless the CPUs in the different systems are identical.
Assuming they are different, the only way to swap things out with minimal disruption is to temporarily disable the "Drop Out of State" options before the gateways are physically swapped.
You would disable this before swapping and leave it set for maybe 24 hours afterwards to allow long-standing connections to re-establish.
Similar to the following thread on CPUG: Zero downtime upgrade?
Note: This setting is not recommended long-term as this reduces the overall security of your gateways.
For TCP/ICMP, they are set in the Global Properties as shown below.
For UDP, refer to the following SK (note it's an "Expert" level SK, so you may not have access): How to configure the Security Gateway to drop Out of State UDP packets
Hi Dameon,
R80.10 is not listed in the Version field of the SK.
Is this solution available in R80.10 too?
Takumi,
It should work the same in R80.10
Thank you.
I would be glad if you could add it to this SK.
You're welcome to leave feedback on the SK to this effect.
I did spot-check that this particular Global Property is available via guidbedit in R80.10 (fw_drop_out_of_state_udp).
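For completeness, the same property should also be reachable with dbedit instead of guidbedit, along the lines of:

dbedit -local
modify properties firewall_properties fw_drop_out_of_state_udp false
update properties firewall_properties
quit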
Thank you.
I will try it.
Regards,
Hello,
Resurrecting this thread to ask a question:
Can you say whether a zero-downtime upgrade would be possible for an R80.30 13500-to-15400 appliance-only cluster migration? Both seem to show 16 cores when using cpview.
Also, referencing the solution provided in the thread below, could you clarify the procedure for minimal downtime?
Thanks.
RS
Hello Phoneboy,
Is this still valid?
I mean, will a hardware replacement between devices with the same CPUs handle the failover from old to new hardware with no disruption, like a normal cluster failover? Another question: by "same CPU" do you mean the number of cores, or the number of CoreXL/SecureXL instances?
thanks
ClusterXL is only supported with identical hardware for all cluster members.
It may work in situations where the hardware is "close enough" (same core count and snd/worker config).
This is not guaranteed and hasn't changed.
The alternate method I described, changing the "out of state" configuration, is not foolproof...and it comes with a security risk.
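A quick way to compare the SND/worker configuration of two members before attempting this (standard commands; run on each box and compare):

# CoreXL: number of FW worker instances and their distribution
fw ctl multik stat
# per-CPU view of which cores serve as SNDs vs workers
fw ctl affinity -l -r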
I guess the same applies to different generations of the same platform? For example, a ProLiant DL360 Gen9 to Gen10, or an IBM x650 M4 to M5?
The same core count with different processors can lead to unexpected behavior, right?
So is it better to go for minimal downtime rather than a MultiVersion Cluster?
Yes, it applies there as well.