Greatsamps
Contributor

Gateway becoming swamped under high NAT load

We are a long-standing Check Point customer who recently had a stint with pfSense. We are in the process of migrating back to Check Point but have hit quite a few problems.

We sourced two Dell R330 4-core servers with Intel X520 cards, which are fully supported on the HCL. After several days of fighting we gave up trying to install as an Open Server: every time, as soon as the installer tried to partition the disks, it crashed with an Anaconda error that I can find no reference to anywhere.

We then decided to load them with ESXi 7 and run the gateway in a VM; not ideal, but it got us going.

ESXi has been configured with one virtual switch per NIC, and one port group per VLAN split between the switches:

  1. External traffic has a dedicated port.
  2. Sync has a dedicated port.
  3. Very busy VLAN has a dedicated port.
  4. Everything else shares the remaining port. The management VLAN is on here as well, and it also carries some heavy traffic.
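For anyone wanting to sanity-check a similar layout, the vSwitch and port-group assignments can be listed from the ESXi shell. A minimal sketch, assuming standard vSwitches rather than a distributed switch:

# List the standard vSwitches and their uplink NICs
esxcli network vswitch standard list
# List the port groups and the VLAN ID assigned to each
esxcli network vswitch standard portgroup list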

After a bit of fighting we got the ClusterXL cluster set up and have moved some light services over. The problem occurs when we move the heavy traffic onto it.

Our gateways have a lot of NAT to do: on the pfSense boxes (8-core) there are currently 120,000 states, or translations, active. The CPU on those boxes sits around 15%.
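For reference, the state figures on the pfSense side come from pf's own counters, readable from the shell. A sketch, assuming console or SSH access to the pfSense box:

# Show pf summary counters, including current state-table entries
pfctl -si
# Show configured hard limits, including the maximum number of states
pfctl -sm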

When we try to put this onto the Check Point, it basically grinds to a halt. Connectivity on the heavy VLANs is very intermittent, and even when trying to install a policy, 50% of the time it fails due to lack of IP connectivity to the gateways. What is strange, however, is that the CPU is not getting pegged; 25% was the highest I saw it.

As soon as we take this heavy traffic off of it, everything is happy again.

For all the heavy traffic, NAT has been configured to hide behind a range of 64 addresses.
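As a rough capacity sanity check, assuming something like 50,000 usable high ports per hide address: 64 addresses x 50,000 ports gives roughly 3.2 million concurrent translations, so 120,000 active translations uses under 4% of the hide range. The NAT pool itself is unlikely to be the bottleneck.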

It feels like the network interfaces are getting saturated and packets are being dropped as a result, but that can't be the case: we have 4 x 10Gb interfaces, and current traffic through the pfSense boxes is around 50 Mb/s.
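Doing the maths on that: 50 Mb/s against even a single 10 Gb/s port is 0.05/10 = 0.5% utilisation, so raw link bandwidth can be ruled out; any drops would have to come from per-core packet processing or ring-buffer exhaustion, not wire speed.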

Before trying the current configuration, we tried this on a Hyper-V VM with a single NIC and had exactly the same problem. At the time I put that down to being too optimistic about what a single-NIC VM could handle, which is why we tried again on a properly resourced setup.

I can't believe that a pfSense box can cope with this level of NAT better than Check Point, but at this stage that is how it seems.

Any ideas where I can go with this?

Greatsamps
Contributor

By way of an update: we have moved all the heavy traffic off the management network onto the existing VLAN with the remainder of the heavy traffic. Hopefully this will ensure no gateway communication issues when we divert the traffic back through Check Point at the weekend.

Can someone please advise which diagnostic commands we should be running to try to narrow down what is causing the performance issues?

G_W_Albrecht
Legend

Why not contact Check Point TAC for help?

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Greatsamps
Contributor

Hi.

This is currently running on an evaluation license. Before renewing licensing and support, we need to be sure it will do what we need.

G_W_Albrecht
Legend

Then talk to your local Check Point SE!

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
Chris_Atkinson
Employee

Out of interest, are you using routing or proxy ARP for the NAT IPs?

CCSM R77/R80/ELITE
Greatsamps
Contributor

AFAIK we are using routing. That block of 64 addresses is routed to the cluster WAN interface.

Additionally, I checked the logs, and there were no errors around the time the heavy traffic was going through the gateway.
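For what it's worth, a quick way to confirm that no proxy ARP entries are in play on the gateway (fw ctl arp lists the proxy ARP entries known to the kernel; empty output means the range is reached purely by routing):

# List proxy ARP entries; no output = routing only
fw ctl arp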

Timothy_Hall
Legend

I doubt it is the NAT load grinding you to a halt. Please provide the output of the Super Seven commands, ideally executed while the box is under the heavy load:

S7PAC - Super Seven Performance Assessment Commands
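For reference, the seven commands the script wraps (as numbered in the output later in this thread) are:

fwaccel stat                                        # 1: SecureXL status and templates
fwaccel stats -s                                    # 2: accelerated vs F2F packet ratios
grep -c ^processor /proc/cpuinfo && /sbin/cpuinfo   # 3: core count / hyperthreading check
fw ctl affinity -l -r                               # 4: CoreXL core assignments
netstat -ni                                         # 5: interface RX/TX errors and drops
fw ctl multik stat                                  # 6: per-worker connection balance
cpstat os -f multi_cpu -o 1 -c 5                    # 7: per-core CPU utilisation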

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Greatsamps
Contributor

Sorry for the delay, but we can only move this traffic over to the Check Point POC at weekends. Here is the output of the commands.

Additionally, even though we removed the heavy traffic from the management network, we still had timeouts when pushing policies to the gateways. The management network was on its own NIC with negligible traffic on it.

 

+-----------------------------------------------------------------------------+
| Super Seven Performance Assessment Commands v0.5 (Thanks to Timothy Hall) |
+-----------------------------------------------------------------------------+
| Inspecting your environment: OK |
| This is a firewall....(continuing) |
| |
| Referred pagenumbers are to be found in the following book: |
| Max Power: Check Point Firewall Performance Optimization - Second Edition |
| |
| Available at http://www.maxpowerfirewalls.com/ |
| |
+-----------------------------------------------------------------------------+
| Command #1: fwaccel stat |
| |
| Check for : Accelerator Status must be enabled (R77.xx/R80.10 versions) |
| Status must be enabled (R80.20 and higher) |
| Accept Templates must be enabled |
| Message "disabled" from (low rule number) = bad |
| |
| Chapter 9: SecureXL throughput acceleration |
| Page 278 |
+-----------------------------------------------------------------------------+
| Output: |
+---------------------------------------------------------------------------------+
|Id|Name |Status |Interfaces |Features |
+---------------------------------------------------------------------------------+
|0 |SND |enabled |eth0,eth1,eth2,eth3,eth4,|Acceleration,Cryptography |
| | | |eth5,eth6,eth7,eth8,eth9 | |
| | | | |Crypto: Tunnel,UDPEncap,MD5, |
| | | | |SHA1,3DES,DES,AES-128,AES-256,|
| | | | |ESP,LinkSelection,DynamicVPN, |
| | | | |NatTraversal,AES-XCBC,SHA256, |
| | | | |SHA384,SHA512 |
+---------------------------------------------------------------------------------+

Accept Templates : enabled
Drop Templates : disabled
NAT Templates : enabled


+-----------------------------------------------------------------------------+
| Command #2: fwaccel stats -s |
| |
| Check for : Accelerated conns/Totals conns: >25% good, >50% great |
| Accelerated pkts/Total pkts : >50% great |
| PXL pkts/Total pkts : >50% OK |
| F2Fed pkts/Total pkts : <30% good, <10% great |
| |
| Chapter 9: SecureXL throughput acceleration |
| Page 287, Packet/Throughput Acceleration: The Three Kernel Paths |
+-----------------------------------------------------------------------------+
| Output: |
Accelerated conns/Total conns : 24244/24248 (99%)
Accelerated pkts/Total pkts : 406167491/420691241 (96%)
F2Fed pkts/Total pkts : 14523750/420691241 (3%)
F2V pkts/Total pkts : 1896781/420691241 (0%)
CPASXL pkts/Total pkts : 0/420691241 (0%)
PSLXL pkts/Total pkts : 3894/420691241 (0%)
CPAS pipeline pkts/Total pkts : 0/420691241 (0%)
PSL pipeline pkts/Total pkts : 0/420691241 (0%)
CPAS inline pkts/Total pkts : 0/420691241 (0%)
PSL inline pkts/Total pkts : 0/420691241 (0%)
QOS inbound pkts/Total pkts : 0/420691241 (0%)
QOS outbound pkts/Total pkts : 0/420691241 (0%)
Corrected pkts/Total pkts : 0/420691241 (0%)


+-----------------------------------------------------------------------------+
| Command #3: grep -c ^processor /proc/cpuinfo && /sbin/cpuinfo |
| |
| Check for : If number of cores is roughly double what you are excpecting, |
| hyperthreading may be enabled |
| |
| Chapter 7: CoreXL Tuning |
| Page 239 |
+-----------------------------------------------------------------------------+
| Output: |
4

 

+-----------------------------------------------------------------------------+
| Command #4: fw ctl affinity -l -r |
| |
| Check for : SND/IRQ/Dispatcher Cores, # of CPU's allocated to interface(s) |
| Firewall Workers/INSPECT Cores, # of CPU's allocated to fw_x |
| R77.30: Support processes executed on ALL CPU's |
| R80.xx: Support processes only executed on Firewall Worker Cores|
| |
| Chapter 7: CoreXL Tuning |
| Page 221 |
+-----------------------------------------------------------------------------+
| Output: |
CPU 0:
CPU 1: fw_1
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
CPU 2: fw_2
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
CPU 3: fw_0
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
All: eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8 eth9


+-----------------------------------------------------------------------------+
| Command #5: netstat -ni |
| |
| Check for : RX/TX errors |
| RX-DRP % should be <0.1% calculated by (RX-DRP/RX-OK)*100 |
| TX-ERR might indicate Fast Ethernet/100Mbps Duplex Mismatch |
| |
| Chapter 2: Layers 1&2 Performance Optimization |
| Page 28-35 |
| |
| Chapter 7: CoreXL Tuning |
| Page 204 |
| Page 206 (Network Buffering Misses) |
+-----------------------------------------------------------------------------+
| Output: |
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 3541645 0 0 0 2379741 0 0 0 BMRU
eth1 1500 0 212796665 0 0 0 199639598 0 0 0 BMRU
eth2 1500 0 15599188 0 0 0 16592379 0 0 0 BMRU
eth3 1500 0 3451769 0 0 0 4768745 0 0 0 BMRU
eth4 1500 0 6550841 0 0 0 10364709 0 0 0 BMRU
eth5 1500 0 9500965 0 0 0 9651053 0 0 0 BMRU
eth6 1500 0 15439074 0 0 0 9479612 0 0 0 BMRU
eth7 1500 0 111170 0 0 0 119436 0 0 0 BMRU
eth8 1500 0 1296925 0 0 0 1784413 0 0 0 BMRU
eth9 1500 0 152823124 0 0 0 154795415 0 0 0 BMRU
lo 65536 0 235433 0 0 0 235433 0 0 0 ALMPNRU

interface eth0: There were no RX drops in the past 0.5 seconds
interface eth0 rx_missed_errors : 0
interface eth0 rx_fifo_errors :
interface eth0 rx_no_buffer_count: 0

interface eth1: There were no RX drops in the past 0.5 seconds
interface eth1 rx_missed_errors : 0
interface eth1 rx_fifo_errors :
interface eth1 rx_no_buffer_count: 0

interface eth2: There were no RX drops in the past 0.5 seconds
interface eth2 rx_missed_errors : 0
interface eth2 rx_fifo_errors :
interface eth2 rx_no_buffer_count: 0

interface eth3: There were no RX drops in the past 0.5 seconds
interface eth3 rx_missed_errors : 0
interface eth3 rx_fifo_errors :
interface eth3 rx_no_buffer_count: 0

interface eth4: There were no RX drops in the past 0.5 seconds
interface eth4 rx_missed_errors : 0
interface eth4 rx_fifo_errors :
interface eth4 rx_no_buffer_count: 0

interface eth5: There were no RX drops in the past 0.5 seconds
interface eth5 rx_missed_errors : 0
interface eth5 rx_fifo_errors :
interface eth5 rx_no_buffer_count: 0

interface eth6: There were no RX drops in the past 0.5 seconds
interface eth6 rx_missed_errors : 0
interface eth6 rx_fifo_errors :
interface eth6 rx_no_buffer_count: 0

interface eth7: There were no RX drops in the past 0.5 seconds
interface eth7 rx_missed_errors : 0
interface eth7 rx_fifo_errors :
interface eth7 rx_no_buffer_count: 0

interface eth8: There were no RX drops in the past 0.5 seconds
interface eth8 rx_missed_errors : 0
interface eth8 rx_fifo_errors :
interface eth8 rx_no_buffer_count: 0

interface eth9: There were no RX drops in the past 0.5 seconds
interface eth9 rx_missed_errors : 0
interface eth9 rx_fifo_errors :
interface eth9 rx_no_buffer_count: 0

 

+-----------------------------------------------------------------------------+
| Command #6: fw ctl multik stat |
| |
| Check for : Large # of conns on Worker 0 - IPSec VPN/VoIP? |
| Large imbalance of connections on a single or multiple Workers |
| |
| Chapter 7: CoreXL Tuning |
| Page 241 |
| |
| Chapter 8: CoreXL VPN Optimization |
| Page 256 |
+-----------------------------------------------------------------------------+
| Output: |
ID | Active | CPU | Connections | Peak
----------------------------------------------
0 | Yes | 3 | 8140 | 8391
1 | Yes | 1 | 8380 | 8382
2 | Yes | 2 | 7878 | 8196

+-----------------------------------------------------------------------------+
| Command #7: cpstat os -f multi_cpu -o 1 -c 5 |
| |
| Check for : High SND/IRQ Core Utilization |
| High Firewall Worker Core Utilization |
| |
| Chapter 6: CoreXL & Multi-Queue |
| Page 173 |
+-----------------------------------------------------------------------------+
| Output: |

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 59| 41| 59| ?| 33444|
| 2| 6| 2| 92| 8| ?| 33444|
| 3| 5| 2| 92| 8| ?| 33444|
| 4| 6| 3| 91| 9| ?| 33444|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 59| 41| 59| ?| 33444|
| 2| 6| 2| 92| 8| ?| 33444|
| 3| 5| 2| 92| 8| ?| 33444|
| 4| 6| 3| 91| 9| ?| 33444|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 56| 44| 56| ?| 62684|
| 2| 6| 2| 93| 7| ?| 62682|
| 3| 6| 2| 93| 7| ?| 31341|
| 4| 6| 2| 93| 7| ?| 31341|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 56| 44| 56| ?| 62684|
| 2| 6| 2| 93| 7| ?| 62682|
| 3| 6| 2| 93| 7| ?| 31341|
| 4| 6| 2| 93| 7| ?| 31341|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 58| 42| 58| ?| 63937|
| 2| 5| 2| 94| 6| ?| 31970|
| 3| 4| 1| 95| 5| ?| 63941|
| 4| 6| 2| 93| 7| ?| 63941|
---------------------------------------------------------------------------------


+-----------------------------------------------------------------------------+
| Thanks for using s7pac |
+-----------------------------------------------------------------------------+

Greatsamps
Contributor

So I managed to get to the bottom of it. After some more time watching it under stress, I noticed we were hitting a connection limit. For some reason this was set to 25,000 connections rather than automatic. Changing it to automatic has sorted everything out.
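For anyone hitting the same thing: in recent SmartConsole versions the setting lives under the gateway object (Gateway Properties > Optimizations > Capacity Optimization), where the maximum concurrent connections can be a fixed figure or calculated automatically. The current limit and usage can also be checked from the gateway CLI; a sketch, with the exact header wording varying by version:

# The connections table header reports the configured limit
fw tab -t connections | head -n 1
# -s summarises current (#VALS) and peak (#PEAK) entry counts
fw tab -t connections -s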

Thanks for the pointers!

Timothy_Hall
Legend

Yep, a classic bottleneck; thanks for the follow-up.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Greatsamps
Contributor

So, spoke too soon on this.

 

The above change did solve most of the problems, but today we started getting reports of 30% packet loss. Running the Super Seven commands, we can see that one of the CPU cores is pegged at 100%.

 

Any ideas?

 

+-----------------------------------------------------------------------------+
| Super Seven Performance Assessment Commands v0.5 (Thanks to Timothy Hall) |
+-----------------------------------------------------------------------------+
| Inspecting your environment: OK |
| This is a firewall....(continuing) |
| |
| Referred pagenumbers are to be found in the following book: |
| Max Power: Check Point Firewall Performance Optimization - Second Edition |
| |
| Available at http://www.maxpowerfirewalls.com/ |
| |
+-----------------------------------------------------------------------------+
| Command #1: fwaccel stat |
| |
| Check for : Accelerator Status must be enabled (R77.xx/R80.10 versions) |
| Status must be enabled (R80.20 and higher) |
| Accept Templates must be enabled |
| Message "disabled" from (low rule number) = bad |
| |
| Chapter 9: SecureXL throughput acceleration |
| Page 278 |
+-----------------------------------------------------------------------------+
| Output: |
+---------------------------------------------------------------------------------+
|Id|Name |Status |Interfaces |Features |
+---------------------------------------------------------------------------------+
|0 |SND |enabled |eth0,eth1,eth2,eth3,eth4,|Acceleration,Cryptography |
| | | |eth5,eth6,eth7,eth8,eth9 | |
| | | | |Crypto: Tunnel,UDPEncap,MD5, |
| | | | |SHA1,3DES,DES,AES-128,AES-256,|
| | | | |ESP,LinkSelection,DynamicVPN, |
| | | | |NatTraversal,AES-XCBC,SHA256, |
| | | | |SHA384,SHA512 |
+---------------------------------------------------------------------------------+

Accept Templates : enabled
Drop Templates : disabled
NAT Templates : enabled


+-----------------------------------------------------------------------------+
| Command #2: fwaccel stats -s |
| |
| Check for : Accelerated conns/Totals conns: >25% good, >50% great |
| Accelerated pkts/Total pkts : >50% great |
| PXL pkts/Total pkts : >50% OK |
| F2Fed pkts/Total pkts : <30% good, <10% great |
| |
| Chapter 9: SecureXL throughput acceleration |
| Page 287, Packet/Throughput Acceleration: The Three Kernel Paths |
+-----------------------------------------------------------------------------+
| Output: |
Accelerated conns/Total conns : 99746/99752 (99%)
Accelerated pkts/Total pkts : 21611940842/21802032162 (99%)
F2Fed pkts/Total pkts : 190091320/21802032162 (0%)
F2V pkts/Total pkts : 52395332/21802032162 (0%)
CPASXL pkts/Total pkts : 0/21802032162 (0%)
PSLXL pkts/Total pkts : 699361/21802032162 (0%)
CPAS pipeline pkts/Total pkts : 0/21802032162 (0%)
PSL pipeline pkts/Total pkts : 0/21802032162 (0%)
CPAS inline pkts/Total pkts : 0/21802032162 (0%)
PSL inline pkts/Total pkts : 0/21802032162 (0%)
QOS inbound pkts/Total pkts : 0/21802032162 (0%)
QOS outbound pkts/Total pkts : 0/21802032162 (0%)
Corrected pkts/Total pkts : 0/21802032162 (0%)


+-----------------------------------------------------------------------------+
| Command #3: grep -c ^processor /proc/cpuinfo && /sbin/cpuinfo |
| |
| Check for : If number of cores is roughly double what you are excpecting, |
| hyperthreading may be enabled |
| |
| Chapter 7: CoreXL Tuning |
| Page 239 |
+-----------------------------------------------------------------------------+
| Output: |
4

 

+-----------------------------------------------------------------------------+
| Command #4: fw ctl affinity -l -r |
| |
| Check for : SND/IRQ/Dispatcher Cores, # of CPU's allocated to interface(s) |
| Firewall Workers/INSPECT Cores, # of CPU's allocated to fw_x |
| R77.30: Support processes executed on ALL CPU's |
| R80.xx: Support processes only executed on Firewall Worker Cores|
| |
| Chapter 7: CoreXL Tuning |
| Page 221 |
+-----------------------------------------------------------------------------+
| Output: |
CPU 0:
CPU 1: fw_1
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
CPU 2: fw_2
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
CPU 3: fw_0
mpdaemon fwd in.asessiond cprid lpd vpnd pdpd core_uploader pepd cprid cpd
All: eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8 eth9


+-----------------------------------------------------------------------------+
| Command #5: netstat -ni |
| |
| Check for : RX/TX errors |
| RX-DRP % should be <0.1% calculated by (RX-DRP/RX-OK)*100 |
| TX-ERR might indicate Fast Ethernet/100Mbps Duplex Mismatch |
| |
| Chapter 2: Layers 1&2 Performance Optimization |
| Page 28-35 |
| |
| Chapter 7: CoreXL Tuning |
| Page 204 |
| Page 206 (Network Buffering Misses) |
+-----------------------------------------------------------------------------+
| Output: |
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 333585475 0 0 0 186503215 0 0 0 BMRU
eth1 1500 0 10803106059 0 0 0 10702700306 0 0 0 BMRU
eth2 1500 0 180716752 0 0 0 196434517 0 0 0 BMRU
eth3 1500 0 36528913 0 0 0 49276381 0 0 0 BMRU
eth4 1500 0 49520389 0 0 0 71138532 0 0 0 BMRU
eth5 1500 0 53774700 0 0 0 53435859 0 0 0 BMRU
eth6 1500 0 110031698 0 0 0 57612016 0 0 0 BMRU
eth7 1500 0 625591 0 0 0 758441 0 0 0 BMRU
eth8 1500 0 8062989 0 0 0 46883491 0 0 0 BMRU
eth9 1500 0 10143744248 0 0 0 10348489291 0 0 0 BMRU
lo 65536 0 1307529 0 0 0 1307529 0 0 0 ALMPNRU

interface eth0: There were no RX drops in the past 0.5 seconds
interface eth0 rx_missed_errors : 0
interface eth0 rx_fifo_errors :
interface eth0 rx_no_buffer_count: 0

interface eth1: There were no RX drops in the past 0.5 seconds
interface eth1 rx_missed_errors : 0
interface eth1 rx_fifo_errors :
interface eth1 rx_no_buffer_count: 0

interface eth2: There were no RX drops in the past 0.5 seconds
interface eth2 rx_missed_errors : 0
interface eth2 rx_fifo_errors :
interface eth2 rx_no_buffer_count: 0

interface eth3: There were no RX drops in the past 0.5 seconds
interface eth3 rx_missed_errors : 0
interface eth3 rx_fifo_errors :
interface eth3 rx_no_buffer_count: 0

interface eth4: There were no RX drops in the past 0.5 seconds
interface eth4 rx_missed_errors : 0
interface eth4 rx_fifo_errors :
interface eth4 rx_no_buffer_count: 0

interface eth5: There were no RX drops in the past 0.5 seconds
interface eth5 rx_missed_errors : 0
interface eth5 rx_fifo_errors :
interface eth5 rx_no_buffer_count: 0

interface eth6: There were no RX drops in the past 0.5 seconds
interface eth6 rx_missed_errors : 0
interface eth6 rx_fifo_errors :
interface eth6 rx_no_buffer_count: 0

interface eth7: There were no RX drops in the past 0.5 seconds
interface eth7 rx_missed_errors : 0
interface eth7 rx_fifo_errors :
interface eth7 rx_no_buffer_count: 0

interface eth8: There were no RX drops in the past 0.5 seconds
interface eth8 rx_missed_errors : 0
interface eth8 rx_fifo_errors :
interface eth8 rx_no_buffer_count: 0

interface eth9: There were no RX drops in the past 0.5 seconds
interface eth9 rx_missed_errors : 0
interface eth9 rx_fifo_errors :
interface eth9 rx_no_buffer_count: 0

 

+-----------------------------------------------------------------------------+
| Command #6: fw ctl multik stat |
| |
| Check for : Large # of conns on Worker 0 - IPSec VPN/VoIP? |
| Large imbalance of connections on a single or multiple Workers |
| |
| Chapter 7: CoreXL Tuning |
| Page 241 |
| |
| Chapter 8: CoreXL VPN Optimization |
| Page 256 |
+-----------------------------------------------------------------------------+
| Output: |
ID | Active | CPU | Connections | Peak
----------------------------------------------
0 | Yes | 3 | 35694 | 78610
1 | Yes | 1 | 34786 | 77164
2 | Yes | 2 | 35798 | 79274

+-----------------------------------------------------------------------------+
| Command #7: cpstat os -f multi_cpu -o 1 -c 5 |
| |
| Check for : High SND/IRQ Core Utilization |
| High Firewall Worker Core Utilization |
| |
| Chapter 6: CoreXL & Multi-Queue |
| Page 173 |
+-----------------------------------------------------------------------------+
| Output: |

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 100| 0| 100| ?| 25455|
| 2| 7| 3| 90| 10| ?| 25455|
| 3| 7| 3| 90| 10| ?| 25455|
| 4| 7| 3| 90| 10| ?| 25455|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 100| 0| 100| ?| 25455|
| 2| 7| 3| 90| 10| ?| 25455|
| 3| 7| 3| 90| 10| ?| 25455|
| 4| 7| 3| 90| 10| ?| 25455|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 100| 0| 100| ?| 24431|
| 2| 6| 2| 92| 8| ?| 24434|
| 3| 5| 2| 93| 7| ?| 48868|
| 4| 6| 2| 91| 9| ?| 24434|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 100| 0| 100| ?| 24431|
| 2| 6| 2| 92| 8| ?| 24434|
| 3| 5| 2| 93| 7| ?| 48868|
| 4| 6| 2| 91| 9| ?| 24434|
---------------------------------------------------------------------------------

 

 

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 100| 0| 100| ?| 24823|
| 2| 5| 2| 93| 7| ?| 49640|
| 3| 5| 2| 93| 7| ?| 24820|
| 4| 6| 3| 91| 9| ?| 49640|
---------------------------------------------------------------------------------


+-----------------------------------------------------------------------------+
| Thanks for using s7pac |
+-----------------------------------------------------------------------------+
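One observation on the output above, offered tentatively: the pegged core is burning 100% System time with 0% User, and with every interface affined to "All" that pattern points at SoftIRQ/SND packet processing rather than a firewall worker. A generic way to confirm, using nothing Check Point specific:

# In top, press 1 to show per-core stats; a core saturated in si (softirq)
# suggests packet handling, not inspection, is the bottleneck
top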

Greatsamps
Contributor

Given it's a 4-core box with a lot of NAT to do, should we change to a 2/2 split on CoreXL?
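For reference, if that route is taken, the worker count is changed from cpconfig; a sketch of the flow (a reboot is needed for it to take effect):

cpconfig
# choose "Check Point CoreXL", then "Change the number of firewall instances"
# 2 instances on a 4-core box leaves 2 cores free for SND/IRQ work
# reboot for the change to apply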

