Recently I set up a Check Point 5400 series firewall running Gaia R80.10 in a cluster environment, where I had to configure ISP Redundancy in Load Sharing mode. But after it went live, we faced high CPU utilization, some traffic being dropped without hitting any policy, "First packet isn't SYN" drops, and similar issues.
I configured ISP Redundancy by following the R77.30 material, but I can't find any guide or documentation for ISP Redundancy in R80.10.
My question is:
- Has anybody implemented ISP Redundancy in R80.10?
- If Check Point doesn't provide any documentation for it, is it supported in R80.10 or not?
Thanks,
Manoj
The steps for configuring ISP Redundancy haven't changed in many versions, and yes it's supported in R80.10.
If you want to verify you've configured it right for R80.10, refer to the ClusterXL Admin Guide: ClusterXL R80.10 (Part of Check Point Infinity) Administration Guide
That said, if you're experiencing high CPU, you might want to engage the TAC.
The high CPU being experienced is almost certainly related to the fact that all traffic on the firewall will go F2F (i.e. no acceleration) when ISP Redundancy is configured in Load Sharing mode.
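You can quantify how much traffic is going F2F on the gateway with `fwaccel stats -s`. Here is a minimal sketch that pulls the F2F percentage out of that summary; the sample line below is illustrative only, since the exact field names and layout vary between versions:

```shell
#!/bin/sh
# Extract the F2F packet percentage from "fwaccel stats -s"-style output.
# The sample text is an assumption for illustration; on a real gateway you
# would pipe the actual command output instead.
sample='F2F pkts/Total pkts           : 981220/981220 (100%)'

f2f_share() {
  # Print the percentage shown in parentheses on the F2F line
  printf '%s\n' "$1" | sed -n 's/.*F2F pkts\/Total pkts[^(]*(\([0-9.]*\)%).*/\1/p'
}

f2f_share "$sample"   # prints 100
```

On a live box, `fwaccel stats -s | grep F2F` gives the same line directly; a value near 100% matches the behavior described above for Load Sharing mode.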
--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com
Hi Tim,
Will we see the same behavior in R80.20 as well?
With ISP Redundancy configured in Load Sharing, all traffic takes the F2F path and packets are not accelerated. We have the Firewall and VPN blades, and most of the traffic is VPN.
Thanks.
There were some pretty significant changes to the architecture of SecureXL in R80.20.
I don't know if they are enough to overcome this specific limitation.
In my opinion, features which disable acceleration should be eliminated by Check Point. It is not possible to work today without SecureXL. Sending traffic to the F2F path because of a limitation in a specific feature is like going 10 years back.
I tried to use ISP Redundancy in load sharing and had to turn it off after a while due to:
1. It does not work with PBR or even simple NAT rules, which means there is no way to control which traffic goes through which link (please don't point me to SKs or the TAC, I tried both)
2. The performance impact was too high
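For anyone wanting to experiment with steering traffic per link anyway, Gaia's clish PBR configuration looks roughly like the following. The table name, priority, and addresses are placeholders, and the exact syntax should be verified against the Gaia Advanced Routing documentation for your version:

```shell
# Sketch only: send traffic from one internal subnet out a specific ISP
# next hop via PBR (placeholder names/addresses; verify clish syntax in
# the Gaia Advanced Routing guide before use)
set pbr table ISP_A static-route default nexthop gateway address 203.0.113.1 on
set pbr rule priority 10 match from 10.1.1.0/24
set pbr rule priority 10 action table ISP_A
save config
```

As noted above, this approach and ISP Redundancy in Load Sharing mode do not play well together, so treat it as a diagnostic experiment rather than a fix.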
We are definitely working to reduce the situations where packets go F2F.
Curious, did you try it in R80.20?
Are there any changes to ISP redundancy behavior in R80.20?
There are some significant changes to SecureXL in R80.20.
This could impact ISP Redundancy positively.
Thanks for the tip Dameon,
As I pointed out, ISP redundancy in load sharing disables SecureXL.
Is there a way to know what changed in SecureXL in R80.20 (there is nothing about it in sk122485)?
SecureXL has been radically improved in R80.20 to support the Falcon cards, easily the biggest change to SecureXL since CoreXL was introduced in R70. I've been a bit quiet on this topic even after my trip to Israel, mainly because I'm still making sure I have everything straight in my head about how it works now before spouting off. As you noted, the documentation for SecureXL in R80.20 is still a bit sparse; the Acceleration team is probably focused on getting the Falcon accelerator card into GA rather than writing documentation, at least for the moment. 🙂
Thanks for the information Tim,
I would like to hear and learn more about the newly expected acceleration cards and understand how they improve performance. I doubt that they can solve the ISP Redundancy and acceleration issue, but let's see.
Please don't underestimate documenting a feature: good documentation, how-to's, training, and sometimes marketing can make the difference between a good feature/product and a great one.
I wonder why this limitation exists. How is Load Sharing different from HA from SecureXL's point of view (other than the default route being known in advance)? As far as I know, outgoing connections are sticky to an ISP link, so in a way it is the same as HA.
Good question, Hristo.
Dameon Welch-Abernathy can you point this question to R&D?
Will ask if R80.20 makes a difference here (which is the real question)
When you work with ISP Redundancy in Load Sharing mode, connections are not accelerated; this is a limitation in all versions.
This is probably the reason you get the high CPU.
If you use ISP Redundancy in HA mode, connections will be accelerated.
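To see whether acceleration is actually active and to exercise the ISP links, a few commands from expert mode help. `fwaccel stat` and `fw isp_link` are standard Check Point tools, though output formats vary by version and the link name below is just a placeholder:

```shell
# Is SecureXL accelerating, or is everything going F2F?
fwaccel stat
fwaccel stats -s

# ISP Redundancy: manually force a link state for testing.
# "ISP-A" must match the link name defined on the gateway object.
fw isp_link "ISP-A" down
fw isp_link "ISP-A" up
```

Comparing the `fwaccel stats -s` counters in HA mode versus Load Sharing mode makes the difference described above visible directly.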
Checked with my sources in R&D, this is still a limitation in R80.20 as well.
Thanks for checking this Dameon. I think that we all agree that features which disable acceleration (F2F) are almost impossible to use in production environments.
About R80.20, I am very curious about the acceleration improvements and the new Falcon cards. I would like to get more information about them in the community. Are the cards targeted for release at CPX?
Much of SecureXL was moved into userspace, which will allow better scalability on larger machines.
Can't say for sure when the cards will release, but we have an EA program for them, which you are welcome to join.
Couldn't agree more. Using more than one ISP in most cases makes it mandatory to use SecureXL. I really hope Check Point overcomes the technical difficulties and implements this.