One of our customers has a hardware failure in a ClusterXL gateway. He wants to use a new HP DL380 G10 server alongside an old HP DL380 G8 server. The idea is to run the new 3.10 kernel on the G10 server.
Now the question:
Is it possible to run a 2.6 kernel on the HP DL380 G8 and the new 3.10 kernel on the HP DL380 G10 in a ClusterXL?
Check Point's ClusterXL Admin Guide says:
Important! The hardware for all cluster members must be exactly the same, including:
So you can't configure a cluster consisting of two different servers, at least not one that is supported.
Important! The hardware for all cluster members must be exactly the same, including:
- CPU
- Motherboard
- Memory
- Number and type of interfaces
Thank you, I know. But I had already received other statements in tickets from Check Point.
My feeling was more like no!
Hi @Danny,
Did you test that?
Does it work without any problems?
The customer has also ordered a second, new G10 server. We would only do this for a few days.😀
I think the ClusterXL version should be identical on the 2.6 and 3.10 kernels; thus, the requirements for a cluster should be met.
The interesting question is whether this combination is supported.
If need be, I'd rather have a statement here from Check Point.
The admin guide clearly states 'the hardware must be exactly the same', and yours is a G8 and a G10 server, so I think it's clear that this wouldn't be supported.
Like I said, I know that.
We just need to bridge a few days until the new server arrives :-)
Ok, then test it out and let us know.
I'm going to try this in a maintenance window :-)
It just has to run until the new hardware arrives.
Important! The hardware for all cluster members must be exactly the same, including:
- CPU
- Motherboard
- Memory
- Number and type of interfaces
Regardless of this request, we could discuss these parameters in another post.
From my point of view, only a few parameters are technically interesting in a ClusterXL environment:
- CPU -> equal number of cores -> CoreXL syncs a session from the session table on the active gateway to the session table on the standby gateway => ok!
- Memory -> should be equal => ok!
- Network interfaces -> same number of used interfaces in the cluster => ok!
- Motherboard -> ??? Why should it be the same?
But as I said, that should be discussed in another post.
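The CPU point above can be illustrated with a toy sketch. This is not Check Point code; it just assumes, purely for illustration, that each member assigns a connection to a CoreXL instance by hashing the connection modulo the instance count:

```python
# Toy illustration only (assumed dispatch scheme, NOT actual Check Point
# internals): if each member mapped a connection to a CoreXL instance via
# "hash % number_of_instances", members with different core counts would
# place the same connection in different per-instance session tables.
def instance_for(conn_hash: int, num_instances: int) -> int:
    """Map a connection hash to a CoreXL instance index."""
    return conn_hash % num_instances

conn_hash = 7  # stand-in for a hashed 5-tuple
print(instance_for(conn_hash, 4))  # -> 3 on a 4-instance member (e.g. G8)
print(instance_for(conn_hash, 6))  # -> 1 on a 6-instance member (e.g. G10)
```

With matching instance counts the mapping agrees on both members, which is one intuition for why the core counts have to line up for state sync.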
Please pay attention to the word: including.
Check Point posted just a short listing of hardware devices, not a complete list.
Heiko, as long as you have all those configuration elements matching (indicated with ok!), my gut is that it will work fine, as I don't see how ClusterXL would be able to freak out over any other differences. Definitely not supported, though...
@HeikoAnkenbrand and all, the motherboard is important because of the chipset. Most importantly, the proposed configuration will not be supported.
To the topic starter, as mentioned above, you cannot run such a cluster.
Hi Timothy,
I have a cluster of two identical Dell PowerEdge servers running R80.30 on the 2.6 kernel. I want to upgrade them to R80.30 with the 3.10 kernel. During the upgrade window, which will likely last a few days, the cluster members will be running different OS kernel versions. I don't think this will be an issue, but given the current context with COVID-19, if things go south, most likely I'll follow too 🙂
I don't think the cluster members will be able to sync at all with the different kernels but I'm not sure. You may want to temporarily uncheck "Drop out of state TCP packets" on the Stateful Inspection screen of Global Properties until everything is fully upgraded. Doing this will help blunt the impact of any non-stateful failovers that occur.
I'm desperately in need of multi-queue, so I'll give it a try and report back. Thanks for the tip about out-of-state TCP packets; that's a very good point.
I forgot to mention, to make matters worse, at the moment my cluster is running on VMware, and during the upgrade one member will run on VMware while the other is re-imaged on bare metal. ClusterXL may go mental.
Anyway, we only live once 🙂
No, not supported, will not work.
There is a difference between 'not supported' and 'will not work'. I can live with an unsupported cluster for a few days. If the cluster doesn't work, that's a different story.
1. Not supported.
2. Will not work.
How does that help, exactly? The G8 and G10 have different numbers of cores. R80.40 or otherwise, we cannot have a cluster if the number of cores differs between the two cluster members.
Well, technically you can, but one of the members will always be in the Ready state. No HA, no sync.
After some internal discussion with R&D:
1. The main issue at hand is the different number of CPU cores in the G8 and G10 models.
2. State sync doesn't work with different configurations, specifically a different number of cores. That means it is impossible to have a healthy cluster in your case, regardless of kernel version.
3. We have multi-version cluster support in R80.40. If the number of cores is the same on both servers, you can form a healthy cluster with different kernel versions in R80.40.
4. The whole issue is state sync. As mentioned in another comment, you can still run the cluster, but the state will be Active/Down on one member and Ready/Active on the second. Sync will not work, and in case of a failover there will be a traffic cut: all connections will have to be reinitialized. If this risk is acceptable to you, you can go this way as a temporary emergency measure only, until you can fully recover/replace the faulty model. This is by no means recommended as a long-term solution.
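For anyone running this as a short-term bridge, a small monitoring sketch may help. `cphaprob stat` is the standard command for showing cluster member states, but the output format used below is a simplified assumption; check it against your own gateway before relying on the parsing:

```python
import re

# Sketch: flag cluster members whose state is not ACTIVE/STANDBY (e.g.
# READY or DOWN, as described above). The sample text is a simplified
# assumption of `cphaprob stat` output; verify on a real gateway.
HEALTHY = {"ACTIVE", "STANDBY"}

def unhealthy_members(stat_output: str) -> list:
    """Return names of members not in a healthy HA state."""
    bad = []
    for line in stat_output.splitlines():
        m = re.match(r"\s*\d+(?:\s+\(local\))?\s+\S+\s+\S+\s+(\S+)\s+(\S+)", line)
        if m and m.group(1).upper() not in HEALTHY:
            bad.append(m.group(2))
    return bad

# On a real member you would feed in:
#   subprocess.check_output(["cphaprob", "stat"], text=True)
sample = (
    "ID         Unique Address  Assigned Load   State     Name\n"
    "1 (local)  192.168.1.1     100%            ACTIVE    fw1\n"
    "2          192.168.1.2     0%              READY     fw2\n"
)
print(unhealthy_members(sample))  # -> ['fw2']
```

Run on a schedule, this gives early warning that the temporary cluster has dropped into the Ready/Down situation described above.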
Many thanks to @Gera_Dorfman and @Dorit_Dor for looking into this.
Or you could just disable the extra cores on the G10 temporarily?
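As a CLI sketch of that idea, assuming a standard Gaia install: rather than disabling physical cores, you could equalize the CoreXL firewall-instance count on both members via `cpconfig` (a change that requires a reboot). Whether that alone satisfies the sync requirement is not confirmed in this thread, so verify with TAC first.

```shell
# Sketch, assuming Gaia on both members; verify with TAC before relying on it.

# Show the current number of CoreXL firewall instances and per-core stats:
fw ctl multik stat

# Change the instance count interactively (menu entry
# "Configure Check Point CoreXL"), then reboot the member:
cpconfig
```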