Muazzam
Contributor

High CPU on Multi Queue Cores

Hardware: 13800 with 20 cores, 8/12 Split, no SMT.
OS: R80.20 Take 47
Blades enabled: None (just FW/VPN).

MQ is enabled on two 10G interfaces. The four CPU cores tied to these interfaces are running at 75-85%, with spikes up to 95%. One of these cores is also tied to fwd. The other three SNDs are running at 1-2%. The workers are running at around 50%.

From Cpview:
Bandwidth 4-5 Gbps
800K-900K packets/sec, 10K conns/sec.
netstat -ni is NOT showing any drops.

[Expert@13800:0]# fwaccel stats -s
Accelerated conns/Total conns : (-3%)
Accelerated pkts/Total pkts : (51%)
F2Fed pkts/Total pkts : (4%)
F2V pkts/Total pkts : (1%)
CPASXL pkts/Total pkts : (0%)
PSLXL pkts/Total pkts : (44%)

Question: what could be a reason for 44% PSLXL pkts/Total pkts?
What can be done to reduce the load on the first 4 cores?

5 Replies
HeikoAnkenbrand
Champion

>>> Question: what could be a reason for 44% PSLXL pkts/Total pkts?

PSLXL is the SecureXL medium path.

Medium path (PXL) - The CoreXL layer passes the packet to one of the CoreXL FW instances to perform the processing (even when CoreXL is disabled, the CoreXL infrastructure is used by the SecureXL device to send the packet to the single FW instance that still functions). When the medium path is available, the TCP handshake is fully accelerated with SecureXL: rulebase match is achieved for the first packet through an existing connection acceleration template, and the SYN-ACK and ACK packets are also fully accelerated.

However, once data starts flowing, the packets are handled by an FWK instance so that the data can be streamed for Content Inspection: any packet containing data is sent to FWK for data extraction to build the data stream. RST, FIN and FIN-ACK packets are once again handled only by SecureXL, as they do not contain any data that needs to be streamed. This path is available only when CoreXL is enabled.

In the medium path the packet flow is handled by the SecureXL device, except for the following, which are processed by a FW instance:

  • IPS (some protections)
  • VPN (in some configurations)
  • Application Control
  • Content Awareness
  • Anti-Virus
  • Anti-Bot
  • HTTPS Inspection
  • Proxy mode
  • Mobile Access
  • VoIP
  • Web Portals

PXL vs. PSLXL - PXL is the technology name for the combination of SecureXL and PSL (Passive Streaming Library). PXL was renamed to PSLXL in R80.20.
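
As a quick sanity check, you can confirm that SecureXL and its accept templates are active, since the template-based rulebase match described above depends on them (standard status command; the exact output layout varies by version):

[Expert@13800:0]# fwaccel stat

If the output shows Accept Templates disabled from a certain rule, connections matching the rules below it cannot be offloaded via templates.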

 

>>> What can be done to reduce the load on the first 4 cores?

This is normal for MQ cores under a high packet rate.
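
If you want to verify how the packet load is spread across the queues, the per-queue interrupt counters are a quick check. Each RX/TX queue of an MQ interface has its own IRQ line, and each should be serviced by a different SND core (a sketch; the TxRx queue naming depends on your NIC driver):

[Expert@13800:0]# cat /proc/interrupts | grep TxRx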

 

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

Read more in my articles:

R80.x - Security Gateway Architecture (Logical Packet Flow)

PSL F2F and PSLXL medium path:

R80.x - Security Gateway Architecture (Content Inspection)

MQ:

R80.x - Performance Tuning Tip - Multi Queue

 

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
Timothy_Hall
Legend

There should be more than 4 queues assigned to those 10Gbps interfaces by Multi-Queue with an 8/12 split, unless the interfaces are using the igb driver (run ethtool -i to check), which is limited to a maximum of 4 queues due to a driver limitation. If that is the case there is nothing you can do about it, other than using a different NIC that supports the ixgbe driver, which can have up to 16 queues. If you are in fact using ixgbe, see sk154392: An available CPU Core is not handling any queue, when using Multi-Q.
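
A minimal check, assuming the two 10G interfaces are named eth1-01 and eth1-02 (substitute your own interface names):

[Expert@13800:0]# ethtool -i eth1-01     (driver in use: igb is capped at 4 queues, ixgbe goes up to 16)
[Expert@13800:0]# cpmq get -v            (Multi-Queue status and active queue count per interface)

cpmq is the Multi-Queue management tool on R80.20; later versions use mq_mng instead.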

It is also possible that your manual process affinity for the fwd daemon is interfering with the assignment of more SND/IRQ cores for traffic processing with Multi-Queue. 
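
You can list the current affinity map without changing anything to see whether fwd is sitting on a core that Multi-Queue would otherwise use:

[Expert@13800:0]# fw ctl affinity -l -r     (shows interfaces, daemons and FW instances per core)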

As far as why PSLXL is 44% with so few blades enabled, this is probably due to the presence of microsoft-ds traffic (port 445), which by default will be sent to PSLXL. You can confirm by running fwaccel conns | grep 445. If you see an s/S flag for those connections, that indicates the connection is going Medium Path. Also look for other connections that have the s/S flag in the output of fwaccel conns for clues.
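
For example (rough filters only; the exact column layout of fwaccel conns differs between versions):

[Expert@13800:0]# fwaccel conns | grep 445            (check the flags column for s/S = medium path)
[Expert@13800:0]# fwaccel conns | grep 445 | wc -l    (how many port 445 connections exist right now)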

As far as what you can do about this, if you upgrade to R80.20 Jumbo HFA 103 or later you can force this traffic into the Accelerated Path with fast_accel as discussed here: sk156672: SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above
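
A minimal sketch of what that looks like (the rule values below are examples only; verify the exact syntax in sk156672 for your take):

[Expert@13800:0]# fw ctl fast_accel enable
[Expert@13800:0]# fw ctl fast_accel add 10.1.1.0/24 10.2.2.0/24 445 6     (src, dst, dport, protocol 6 = TCP)
[Expert@13800:0]# fw ctl fast_accel show_table                            (verify the installed rules)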

Be warned however that doing this will "fastpath" the traffic with a minimum of inspection, and bad things security-wise can happen if a threat or other security problem occurs in the fastpathed connections. Also keep in mind that the load will increase on your SND/IRQ cores as this traffic is forced off the Firewall Workers into SecureXL; you may want to figure out why Multi-Queue is not using all available SND/IRQ cores first.


Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Muazzam
Contributor

Just want to share the results:

 

After upgrading the gateway to R80.20 Take 103, I no longer see a high PSLXL value. This was definitely an issue with Take 47 of R80.20.

 

[Expert@13800]# fwaccel stats -s
Accelerated conns/Total conns : 680725/680725 (100%)
Accelerated pkts/Total pkts : 226544028526/238077462224 (95%)
F2Fed pkts/Total pkts : 11533433698/238077462224 (4%)
F2V pkts/Total pkts : 5507491567/238077462224 (2%)
CPASXL pkts/Total pkts : 0/238077462224 (0%)
PSLXL pkts/Total pkts : 48307/238077462224 (0%)

 

Timothy_Hall
Legend

Depending on which blades are enabled, the drop in PSLXL path traffic after Jumbo HFA installation may be related to the fix here:

https://community.checkpoint.com/t5/General-Topics/First-impressions-R80-30-on-gateway-one-step-forw...

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com