MarcP
Participant

Reset CoreXL to default

I've recently begun supporting a cluster on which the previous admin changed the default CoreXL configuration; all 4 cores are now operating as both SNDs and FW workers. I'd like to reset this to the default settings. Any idea how this can be accomplished?

I assume a clean install would accomplish this, but it seems a bit drastic.

Wondering if disabling and re-enabling CoreXL might do it?
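
For reference, here's how I'm checking the current core assignment (all standard expert-mode commands):

# Which CPUs the interfaces (SND side) and fw instances are pinned to
fw ctl affinity -l -r

# Number of CoreXL firewall instances and their stats
fw ctl multik stat

# SecureXL status
fwaccel stat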

Thoughts/suggestions?

Thanks in advance!

PhoneBoy
Admin

You should be able to change the split via cpconfig.
The split for a four-core box should be 1/3 (one SND, three workers).
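
cpconfig is menu-driven; a rough sketch of the flow (menu wording and numbering vary by version):

cpconfig
# Select "Check Point CoreXL" from the menu,
# then "Change the number of firewall instances" and enter 3.
# Exit cpconfig and reboot for the new split to take effect.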

MarcP
Participant

cpconfig allows setting the number of workers, but how do I reduce the number of SNDs currently configured?

When I examine the current configuration in cpconfig, it tells me there are already 3 workers configured.

 

 

PhoneBoy
Admin

The instances you don't configure for workers will become SNDs.
Which means: if you say three workers, you'll get one SND.
A reboot will be required.
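
Once it's back up you can confirm the split took; with the default 1/3 layout on a four-core box you'd typically expect interfaces/SND on CPU 0 and instances 0-2 on CPUs 1-3 (assuming no manual entries in fwaffinity.conf):

fw ctl affinity -l -r
fw ctl multik stat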

MarcP
Participant

Thanks Dameon, I've tried this and still have the same result.

Let me back up a bit and explain my thinking. While reviewing the health check output for this cluster, I see the following:

SND/FW Core Overlap - WARNING
Cores detected operating as both fw workers and SNDs. Please review sk98737 and sk98348 for more information.
CoreXL Settings:

Interface Mgmt: CPU 0
Interface eth1-01: has multi queue enabled
Interface Sync: has multi queue enabled
Interface eth1-02: has multi queue enabled
Interface eth1-03: has multi queue enabled
Interface eth1-04: has multi queue enabled

 

Initially I ignored the messages indicating that Multi-Queue was enabled and focused on determining why the cores were set up as both SNDs and FW workers, which led me to a lot of reading and the initial question.

Now, after more reading on Multi-Queue, it seems that this may be why I am seeing the cores as both SNDs and FW workers.
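
For anyone following along, this is how I've been looking at the Multi-Queue side (the mq_mng flag may vary by version; the other two are generic Linux):

# Current Multi-Queue configuration
mq_mng -o

# Queues actually allocated per interface, straight from the kernel
grep eth1-01 /proc/interrupts

# RX/TX channel counts for a given interface
ethtool -l eth1-01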

Attaching an image of what I see in cpview and fwaccel stat.

I am struggling to understand whether the health check warning is something to be concerned with or not; I have seen it very clearly stated that running cores as both SNDs and FW workers is generally not recommended.

The other thing that is odd to me is the name "KPPAK" shown in fwaccel stat; it's new to me, as I typically see SND.

Thanks!

PhoneBoy
Admin

I think we need Super Seven Commands output here to see what's going on.
https://community.checkpoint.com/t5/Scripts/S7PAC-Super-Seven-Performance-Assessment-Commands/m-p/40... 
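
For reference, the seven are roughly the following (from memory; item three combines two commands, see the linked post for the authoritative list):

fwaccel stat
fwaccel stats -s
grep -c ^processor /proc/cpuinfo
/sbin/cpuinfo
fw ctl affinity -l -r
netstat -ni
fw ctl multik stat
cpstat os -f multi_cpu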

Timothy_Hall
Champion

What version and Jumbo HFA are you running?  What is the hardware model of your gateway?

What you have posted so far makes no sense to me; please provide the Super Seven output.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
MarcP
Participant

R81.10 with Jumbo Take 55.

The appliance model is 6200B.

Attaching the output of s7pac. This cluster is not in production yet, so there is no real traffic passing through this GW.

Thanks again!

Timothy_Hall
Champion

KPPAK would seem to indicate that PPAK/SecureXL is running in the kernel as opposed to process space. Running PPAK in process space is listed as a new feature in R81.20, but it is common for features like this to be present unofficially in earlier releases; this new output must have been added in R81.10 or one of its Jumbo HFAs.

My interpretation of the fw ctl affinity output is that USFW is enabled along with Dynamic Split/Balancing (sk164155: Dynamic Balancing for CoreXL), which are both enabled by default. What does the output of dynamic_balancing -p show? If both are enabled, I think this output is expected and the healthcheck script needs to be updated, as SNDs and firewall workers/instances will indeed be sharing cores (though hopefully only during split transitions) with Dynamic Balancing enabled.
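
A quick way to check both from expert mode (assuming cpprod_util FwIsUsermode is available on your take; I believe it is on recent releases):

# Dynamic Balancing state (see sk164155)
dynamic_balancing -p

# Prints 1 if the firewall is running in user space (USFW), 0 if in kernel
cpprod_util FwIsUsermode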

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
MarcP
Participant

[Expert@GW-1:0]# dynamic_balancing -p
Dynamic Balancing is currently Off

Looking at sk164155, which you mentioned above, it appears that Dynamic Balancing is not supported on the 6200B platform.

AmitShmuel
Employee

Since this is a 6200B, which has only 2 physical cores (and 4 logical cores), Dynamic Balancing is not supported, as it requires at least 4 physical cores.

I'm almost certain that it is expected for FW workers and SNDs to share cores on 2-core machines, but to be on the safe side:

  1. Run '$FWDIR/boot/fwboot corexl def_instance4_count' followed by 'echo $?' to verify that 3 workers is indeed the default amount; if not, change the amount via 'cpconfig'.
     (BTW, '$FWDIR/boot/fwboot corexl enable' should set the worker amount back to the default.)
  2. Run 'mq_mng -s auto' to set the Multi-Queue settings back to default.
  3. Reboot.

Another thing to check is that '$FWDIR/conf/fwaffinity.conf' hasn't been changed; the only non-comment line should be "i default auto". A consolidated sketch of these steps follows below.
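
Putting it together, the whole reset sequence would look roughly like this (assuming a 4-core box where 3 workers is the default):

# 1. Default worker count is returned in the exit code; expect 3
$FWDIR/boot/fwboot corexl def_instance4_count
echo $?

# 2. Reset Multi-Queue to its automatic default
mq_mng -s auto

# 3. Confirm fwaffinity.conf is untouched; expect only: i default auto
grep -v '^#' $FWDIR/conf/fwaffinity.conf

# 4. Reboot for everything to take effect
reboot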

MarcP
Participant

Thanks Amit,

Ran the commands and they show 3 workers; did the 'mq_mng -s auto' and a reboot, and everything is the same as before, still showing the 4 cores operating as both SNDs and FW workers.

[Expert@GW-1:0]# $FWDIR/boot/fwboot corexl def_instance4_count
[Expert@GW-1:0]# echo $?
3
[Expert@GW-1:0]# mq_mng -s auto
[Expert@GW-1:0]#

Also checked fwaffinity.conf and it is in default state.

So, does this mean that the systems are in an expected "default" state?

If so, then I agree with Tim that the Health Check Script should be updated so it's not throwing a warning.

PhoneBoy
Admin

@AmitShmuel is right that on systems with only two physical cores, seeing SND and workers on the same core is expected behavior.
Which suggests the healthcheck script should probably be updated to account for this.

@ShaiF can you take a look?

ShaiF
Employee

@AmitShmuel I'm familiar with 3/1 (fwk/ppak) for 4-core machines. ppak/fw working on the same core is always bad practice (locks, soft lockups, latency...). If your official position is that we run 4/4, then please send mail to me and @AndyY and we'll adjust the test.

AmitShmuel
Employee

3/1 is used on machines with 4 physical cores, such as the 3200.

Another example: ppak/fw working on the same cores can be seen on machines with 2 physical cores, such as the 5200.
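
A quick way to tell physical from logical cores on the box itself (generic Linux, from expert mode):

# Logical CPUs the OS sees (4 on a 6200B with SMT enabled)
grep -c ^processor /proc/cpuinfo

# Physical topology: threads per core, cores per socket, sockets
lscpu | grep -E '^(Socket|Core|Thread)'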
