Václav_Brožík
Collaborator

Relation between SecureXL connection state and ClusterXL synchronization (R80.40)

I am not able to find the information in the documentation. Please point me to the right place if it is there.

How is the state information of the SecureXL layers (SecureXL, SND) synchronized between cluster members?

I suppose the following is happening:

  • State information of a fully accelerated connection (fast path) is transferred to the fw workers layer after a delay.
  • Only the state information from the fw workers layer is synchronized between cluster members, i.e. the state information accessible via fwaccel tab is not synchronized directly.
  • I guess that some information, such as which SND CPU a state entry came from, is no longer relevant at the fw workers layer.

Specifically, I would like to know whether changing the number of multi-queue CPUs (the CPUs left for SND: fw ctl affinity -s -d -fwkall n) will disrupt ClusterXL synchronization. The environment is R80.40, VSX VSLS with multi-queue.
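
For reference, this is roughly how I inspect the current CPU split and the SecureXL state before making any change (just a sketch; exact output differs between versions and JHF levels):

    fw ctl affinity -l -r         # per-CPU view: which CPUs serve SND and which serve fwk instances
    fwaccel stats -s              # share of accelerated (fast path) vs. F2F traffic
    fwaccel tab -t connections    # the SecureXL connections table mentioned above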

Thank you.


11 Replies
_Val_
Admin

Tables used for acceleration are not synced, only FW kernel tables. However, the CoreXL settings (the number of FWKs and SNDs) should be identical on both cluster members.

VSX uses its own mechanism to assign FWK CPUs to VSs, which is in fact controlled from the VS object. So my question is: what are you trying to achieve?
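
To compare the CoreXL layout of the members, something like this works (a sketch; on VSX run it in the context of each VS, and the VS ID below is just an example):

    vsenv 2               # switch to the VS context on VSX (VS ID 2 is hypothetical)
    fw ctl multik stat    # instance-to-CPU mapping and per-instance connection counts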

Václav_Brožík
Collaborator

The forum fails to accept my reply, so I will try to post it in parts:

@_Val_ wrote:

Tables used for acceleration are not synced, only FW kernel tables.


Is there really no indirect synchronization of fast-path connections at all? I never thought that a cluster failover would terminate all fully accelerated connections. Unfortunately, I have never explicitly tested this.

_Val_
Admin

SecureXL reports those connections to the FW, and the FW kernel tables are synced. This is done to cover cases when one of the packets in an accelerated connection is not okay, or when the data stream is needed.

So the situation is: although SXL operations and decisions are not synced, opened connections are synced through the FW tables.
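
You can observe this relationship directly on a member (a sketch; output formats vary between versions):

    fwaccel conns               # connections currently handled by SecureXL on this member
    fw tab -t connections -s    # summary of the FW kernel connections table, the one ClusterXL syncs
    cphaprob syncstat           # sync delivery statistics (available since R80.20)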

Timothy_Hall
Champion
(Accepted Solution)

Val is correct, nothing specific about SecureXL or other operations on the SND cores is synchronized between the members, beyond what tables are stored on the Worker/INSPECT instances that are relevant to SecureXL's operation.  This can be easily seen by joining a 13500 and 13800 to the same ClusterXL cluster.  Even though they have a different number of total cores, as long as you configure the same number of INSPECT instances on both they will sync up and work even though the 13800 will have more SND cores.  Note that while this does work it is not officially supported.
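
A quick sanity check for such a mixed cluster (a sketch): run the following on both members and confirm that the instance counts match and the cluster is healthy:

    fw ctl multik stat    # the number of INSPECT instances must be identical on both members
    cphaprob state        # both members should report the expected Active/Standby states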

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Václav_Brožík
Collaborator

> VSX is using its own mechanism to assign FWKs CPUs to VSs...

We are not talking about changing the number of fwk instances; I know that such a change disrupts synchronization.

I am going to change the number of multi-queue/SND CPUs. With the recommended default configuration of multi-queue you do this by changing the number of CPUs reserved for fwks: fw ctl affinity -s -d -fwkall n

Decreasing n increases the number of multi-queue CPUs and vice versa. The change is applied to multi-queue after a reboot. As you wrote, on VSX this change does not affect the number of fwk instances.
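
For completeness, the planned sequence per member looks like this (a sketch; n=10 is a hypothetical value, adjust to your core count):

    fw ctl affinity -s -d -fwkall 10    # leave 10 CPUs for fwk instances, the rest become SND/multi-queue CPUs
    reboot                              # multi-queue is re-distributed across the SND CPUs on boot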

_Val_
Admin

That said, it should not cause a failover in VSX mode. If you want to be on the safe side, you can force a member down (clusterXL_admin down), run the command, then bring it back up, and repeat on the second member.
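
Something like this on each member in turn (a sketch using the standard ClusterXL scripts):

    clusterXL_admin down                 # gracefully fail traffic over to the peer
    fw ctl affinity -s -d -fwkall <n>    # apply the new split
    reboot                               # the admin-down state is not persistent by default, so the member rejoins after boot
    cphaprob state                       # verify the member is healthy before repeating on the peer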

Václav_Brožík
Collaborator

Thank you for your replies. I am not afraid of unexpected failovers. I would like to know whether members with different numbers of multi-queue CPUs will synchronize correctly.

In other words: should we expect established connections to be interrupted after a failover between members with different numbers of multi-queue CPUs? (The number of fwk instances will not change.)

_Val_
Admin

No. The number of CPUs queuing to a single interrupt (actually a process) will change, but not the number of interrupts/processes.
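
You can verify the queue distribution before and after the change with the multi-queue tool in R80.40 (check the exact syntax for your JHF level; this is from memory):

    mq_mng --show    # current multi-queue configuration: queues per interface and the CPUs serving them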

You did not answer my question: why do you need this in the first place? Also, changing FWK affinity has nothing to do with SXL.

Václav_Brožík
Collaborator

OK, thank you. Your answers confirmed and completed what I clumsily tried to write in my first post.

To answer your question: we are simply performing a standard procedure of re-adjusting our original CPU split between FWKs and SNDs according to current and expected future traffic patterns.
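
For the record, these are the indicators we use to judge the split (a sketch; the interpretation is our own rule of thumb, not an official threshold):

    netstat -ni    # growing RX-DRP counters on busy interfaces hint at too few SND CPUs
    cpview         # per-core utilization history: persistently saturated SND vs. fwk CPUs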

_Val_
Admin

And I am saying that it is unnecessary with the R80.40 VSX you are using.

Václav_Brožík
Collaborator

Could you please be more specific? Why do you think that the threat of overloaded SecureXL/SND CPUs does not matter?
