Good Afternoon,
I plan to add additional 10GB interfaces to an existing bond group in a VSX VSLS cluster. Is there any trick to doing this that may not be obvious?
I planned on gracefully migrating all the VSs to a single cluster member using vsx_util vsls from the management server. Once failed over, I was going to issue a cpstop on the vacated Gateway to shut everything down.
Then, in CLISH run:
add bonding group 0 interface eth1-03
add bonding group 0 interface eth1-04
add bonding group 0 interface eth2-03
add bonding group 0 interface eth2-04
After that, I was planning on rebooting the Gateway given its long uptime. Once it came back up, I was going to verify cluster integrity on the new interfaces with cphaprob -a if.
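For my own sanity, the post-reboot checks would look roughly like this from expert mode (a sketch; bond0 is the group the commands above extend, and exact output varies by version):
cphaprob stat                  # overall cluster member states
cphaprob -a if                 # per-interface cluster health, per VS context
cat /proc/net/bonding/bond0    # bond slave and MII status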
Then, rinse and repeat with the other cluster member.
Is there anything else I need to do to make sure this goes as smoothly as possible?
Thanks!
Dan
On the point of adding the interfaces to the bond, you don't even need to do a cpstop. I do think, however, that you will need to set vsx off before changing any of your interface settings, make the changes, and then set vsx on.
From the VSX perspective, make sure the added interfaces are not in your VSX cluster interface list; if they are, remove them from the list before you start.
So your steps again:
check interfaces in VSX Cluster object
SSH to management
vsx_util vsls - move all to member A
SSH to member B
set vsx off
add bonding group 0 interface eth1-03
add bonding group 0 interface eth1-04
add bonding group 0 interface eth2-03
add bonding group 0 interface eth2-04
set vsx on
save config
go to management
vsx_util vsls - move all to member B
SSH to member A
set vsx off
add bonding group 0 interface eth1-03
add bonding group 0 interface eth1-04
add bonding group 0 interface eth2-03
add bonding group 0 interface eth2-04
set vsx on
save config
go to management
vsx_util vsls - distribute load
all done.
That's a fairly safe approach, Dan.
Depending on where your bond is attached in VSX, you may want to check each bond member's state with either
cphaconf show_bond bond0
or
cat /proc/net/bonding/bond0
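If you want to watch the slaves come up in real time while you add them, something like this from expert mode does the job (assuming watch is available on your build):
watch -d 'grep -E "Slave Interface|MII Status" /proc/net/bonding/bond0'   # highlights state changes as they happen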
Thanks, Kaspars Zibarts. I'll check that once we bring everything up!
One question for Tim Hall: We are carrying the majority of the traffic flowing in and out of this VSX cluster (a few dozen VLANs) over this bonded interface. When we first configured these Gateways, we enabled multi-queueing to help distribute the processing for these interfaces. Does multi-queueing have to be manually enabled on the interfaces we add to the bond group? Or will that happen automatically once they join the bond group? My recollection of how all that works is a bit fuzzy!
Thanks in advance!
-Dan
Multi-Queue (MQ) is enabled per physical interface, so adding a physical interface to a bonded (ae) interface will not influence the MQ settings for that physical interface. MQ is not even aware of bonds and only concerns itself with physical interfaces. The maximum number of physical interfaces MQ can be enabled for is 5.
That said, MQ should not be enabled indiscriminately on an interface, as it causes extra overhead on the SND/IRQ cores: they must now "stick" all packets associated with a particular connection/stream to the same queue every time to avoid out-of-order delivery of frames. Generally, if more SND/IRQ cores can be assigned to avoid RX-DRPs without overloading the remaining Firewall Worker cores, doing so is more desirable than enabling MQ. However, if there are not enough total available core resources to assign more SND/IRQ cores, or sim affinity -l shows that a single SND/IRQ core dedicated to handling a physical interface's ring buffer is still experiencing >0.1% RX-DRPs, enabling MQ is the right call.
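If you want to see where you stand before touching anything, the R77.30-era tooling looks roughly like this from expert mode (a sketch from memory of the cpmq utility; verify the exact flags against the Performance Tuning guide):
cpmq get -a       # Multi-Queue status for all supported interfaces
sim affinity -l   # which SND/IRQ core services each interface's ring buffer
netstat -ni       # per-interface RX-DRP counters
Note that changes made with cpmq set only take effect after a reboot.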
--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com
Tim,
As always, thanks for your insight on this. After seeing your response, I went back and calculated the per-interface RX-DRP% for each Gateway. Interfaces eth1-01, eth1-02, eth2-01, and eth2-02 are the ones currently in bond1 with MQ enabled (all eth1 and eth2 line card interfaces are 10Gb on a CP 23800 appliance running R77.30).
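Here is roughly how I pulled the per-interface numbers, for anyone curious (RX-DRP divided by RX-OK; treat the awk as a sketch, since the column positions shift between net-tools versions):
netstat -ni | awk 'NR>2 && $1 ~ /^eth/ && $4 > 0 {printf "%-8s %.4f%%\n", $1, ($6/$4)*100}'   # $4 = RX-OK, $6 = RX-DRP on our build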
It became pretty apparent to me that these Gateways are not suffering from RX-DRP issues whatsoever! Given that the maximum interfaces MQ can be enabled on is 5, and that we would end up with a total of 8 interfaces in the bond group after this change is made, do you consider it more advisable to remove MQ entirely from this configuration? We never experienced any performance / CPU issues that had us enable MQ in the first place. Someone merely suggested we enable MQ when these Gateways were built new.
If it doesn't appear to be solving any problems, and reduces our overall configuration complexity, it seems to me like it may make more sense to disable it. What do you think?
Regards,
Dan
In my book I mention a goal of having RX-DRP be < 0.1%. You are well beneath that, so I'd say disable MQ. Assuming RX-DRPs remain below 0.1%, leave MQ off. No point in increasing configuration complexity if you can avoid it.
--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com
All this discussion prompted me to re-read Chapter 7 yesterday, and that was pretty much my takeaway! I'll plan on shutting down MQ when I add the extra interfaces. Thanks for taking the time to look over my data and weigh in! It's always reassuring to have a second opinion!
Thanks to everyone who replied in this thread. I think I've got a solid game plan now.
Thank You,
Dan
You're right... I think you do need to do "set vsx off" and then "set vsx on" when making these types of changes. Thanks!
Interesting. It seemed to work just fine for us without turning vsx off. We did it on the fly: add on the standby member, fail over, and do the same on the other member once it went standby. But we were being a bit cowboy about it.
You only need to set vsx off when you need to change other interface settings (the MQ settings??).
What I did not try, though, is whether it still works when you add a space character at the beginning of the line, as it does in a cloning group configuration.
I know that you can turn VSX mode off to regain access to the GAIA WebUI. So, if you were looking to make these kinds of changes through the WebUI and not clish, you'd have to turn VSX mode off. (I'm also not suggesting that method is officially supported by CP. It is just something I've observed while working with VSX!)
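In transcript form, the dance looks something like this (same caveat: observed behavior, not something I'd present as officially supported):
HostName:0> set vsx off
HostName> (the GAIA WebUI is now reachable; make the interface changes there)
HostName> set vsx on
HostName:0> save config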
Hi All,
I am running into a similar situation where we want to reconfigure a bonding group that now contains 8x 1Gbps links. We are planning to add 2x 10Gbps SFPs to that bonding group and, of course, remove the 8x 1Gbps interfaces from that group. This environment is based on VSX (R80.10).
Could someone please explain the following to me?
1. Is the answer provided by Maarten Sjouw supported by Check Point?
1.1 Is there an SK describing this?
2. Why is the "set vsx off/on" command issued?
3. What would happen to the affinity settings if we add the 2x 10Gbps modules?
The only SK I discovered, and the one recommended to me by support, is sk69180, but it is not VSX-specific and was last updated more than two years ago, before R80.10 was released...
Thanks in advance.
Kind regards,
-J
1. Yes, it is.
1.1 Here are two examples I could find in a couple of minutes:
From SK101165:
Remove the problematic route:
HostName:0> set vsx off
HostName> set virtual-system 0
HostName> set static-route default nexthop gateway address 192.168.1.254 off
HostName> set vsx on
HostName:0> save config
From SK92425:
HostName:0> set vsx off
HostName> show vsx
VSX Disabled
HostName> set static-route default off
HostName> delete interface eth0 ipv4-address
HostName> set interface eth0 ipv4-address 192.168.0.1 mask-length 24
HostName> set static-route default nexthop gateway address 192.168.0.4 on
HostName> set vsx on
HostName:0> show vsx
VSX Enabled
HostName:0> save config
2. The set vsx on command locks the interface and routing configuration in clish and disables the WebUI; set vsx off temporarily unlocks them so you can make your changes.
3. I don't know; maybe Tim can tell us?
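What you can at least do is snapshot the affinity before and after adding the modules and compare (a sketch, run from expert mode):
sim affinity -l          # SecureXL interface-to-core mapping
fw ctl affinity -l -a    # full affinity map, including per-VS kernel instances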
First of all, thanks for the provided answers.
After reviewing the SKs you provided, I am not really convinced that this is the way to go... Are we not talking about different kinds of situations here?
Is the "set vsx off" command really needed for editing the bonding configuration? Looking at SK92425 (the important note), I would say yes... but why is this not described in SK69180?
Also, SK101165 seems to be outdated and not relevant to R80.10 environments.
SK69180 seems to be the most accurate for this situation at the moment (but is also outdated), and I don't see "set vsx off" listed anywhere...
Edit:
In the meantime, I simulated this in my lab and it also worked as SK69180 describes. This was also mentioned by Kaspars Zibarts in a previous reply.
I think it is most likely an omission error in the SK articles mentioned. I also think the "set vsx off" command is a little misleading, because to the average observer it seems like you are literally disabling VSX once that command is issued, which obviously isn't the case.
Like Maarten said, it is really just a way to "unlock" the GAIA configuration to allow you to make changes to it. We run a lot of VSX, so I would love to see the day when you could access the WebUI in VSX the same way, without having to issue separate commands to "disable" VSX mode. I can also play devil's advocate here and understand why that one extra step may be an ounce of prevention, keeping people from tinkering with settings whose ramifications they may not fully understand in a production VSX environment.
Eventually, I had to use the "set vsx off" command to edit my interface configuration and enable the new interfaces. I also had to use it to add these interfaces to an existing bond.
So it looks like Maarten was indeed right!
Regards,
Jelle