Nick_Doropoulos
Advisor

CoreXL questions

Hello,

I was wondering if I could have the following questions answered please:

1) When a CPU core is unbound from a firewall worker so it can be bound to the dynamic dispatcher, the active connections will be lost. Is there a way for this to happen seamlessly, or would the change need to be done during a maintenance window?

2) What is the maximum number of CPU cores that can be assigned to the dynamic dispatcher?

3) What happens when bonded interfaces are added into the mix?

Thanks in advance.

3 Replies
Wolfgang
Authority

Nick,

First of all, please take a look at sk105261; it is a very good description of what the Dynamic Dispatcher does.

To your questions:

1. If you unbind a core from a firewall worker (fw_worker), it can't be used by the Dynamic Dispatcher. The Dynamic Dispatcher works across all cores with enabled fw_workers. If you change your core distribution, you need a maintenance window.

2. The Dynamic Dispatcher uses all defined worker cores (a quick status check is sketched after this list).

3. I don't really understand your question about adding bond interfaces. Maybe you mean something other than the Dynamic Dispatcher; maybe you are talking about the SND? Can you please explain your question in more detail?
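
As a quick way to verify what the dispatcher is doing, here is a sketch for R80.10 and above based on sk105261 (the R77.30 syntax was different, so double-check the SK for your version before relying on these exact commands):

# Check whether the Dynamic Dispatcher is currently enabled
fw ctl multik dynamic_dispatching get_mode

# Enable it if needed (takes effect after a reboot)
fw ctl multik dynamic_dispatching on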

wolfgang

Timothy_Hall
Legend

Q #1 - On a non-VSX system changing the split of SND/IRQ cores to Firewall Worker (kernel instances) requires a reboot.  On a VSX system or a R80.10+ firewall with the new USFW feature enabled, the split can be changed dynamically.  I'm not sure what happens to the active connections on a VSX/USFW firewall when the split is dynamically changed, but my guess is that those connections can't be migrated to surviving fwk kernel instances.
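
For reference, a sketch of how the current split can be inspected and, on VSX, adjusted on the fly; the VSID and CPU numbers below are made-up examples, so adapt them to your own box:

# Show which CPUs handle SND/IRQ work and which run fw_worker instances
fw ctl affinity -l -r

# Non-VSX: the number of kernel instances is changed in cpconfig
# ("Configure Check Point CoreXL" menu) and needs a reboot to apply
cpconfig

# VSX: affinity can be changed dynamically, e.g. pin VS 1's fwk to CPUs 2-5
fw ctl affinity -s -d -vsid 1 -cpu 2-5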

Q #2 - The Dynamic Dispatcher mechanism itself is part of an SND/IRQ core's function (along with SecureXL and SoftIRQ), and it tries to keep the Firewall Workers evenly balanced based on their monitored CPU load.  I don't think there is really a limit to the number of SND/IRQ cores you can assign for your split as long as there is at least one IPv4 Worker Core, however in R80.10 and earlier having more than six SND/IRQ cores doesn't give nearly the linear increase in performance as it did for the first six cores added due to the increased locking and coordination overhead required among so many SND/IRQ cores.  This limitation was resolved in R80.20+.
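
To see how evenly the Dynamic Dispatcher is actually spreading load across the workers, the per-instance counters are the easiest place to look (the output columns vary slightly between versions):

# Per fw_worker instance: assigned CPU, current connections and peak
fw ctl multik stat

# cpview also shows per-core utilization under its CPU menu
cpview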

Q #3 - One of the big functions of a SND/IRQ core is to empty the ring buffers of the individual physical interfaces via the SoftIRQ mechanism.  Bonding interfaces does not change this behavior at all, however unless Multi-Queue is enabled a single physical interface's ring buffer can only be emptied by a single SND/IRQ core no matter how many of them you have.  If Multi-Queue is enabled on an interface, multiple SND/IRQ cores are allowed to empty the ring buffer(s) of a single interface and help avoid RX-DRPs (buffering "misses") when a ring buffer is full because it didn't get emptied fast enough.
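
To check whether the ring buffers are actually overflowing and whether Multi-Queue is enabled, something along these lines; the Multi-Queue tool differs by kernel version, so treat the exact command names as assumptions to verify for your release:

# Per-interface RX-DRP counter; steadily rising values mean the ring
# buffer is not being emptied fast enough
netstat -ni

# Multi-Queue status on R80.10 and earlier (2.6.18 kernel)
cpmq get -a

# Multi-Queue status on Gaia 3.10 kernels (R80.30 and later), if I recall
# the tool name correctly
mq_mng --show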

You may want to check out my Performance Optimization TechTalk at the link below which covers a lot of this:

https://community.checkpoint.com/t5/General-Topics/TechTalk-Security-Gateway-Performance-Optimizatio...

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Nick_Doropoulos
Advisor

Thanks gents.
