AaronCP
Advisor

SecureXL optimisation

Good evening CheckMates,

 

I am looking for some advice on how I can improve the number of accelerated connections on our perimeter gateway. Here is the output from the gateway:

 

[Expert@:0]# fwaccel stats -s
Accelerated conns/Total conns : 2846/16335 (17%)
Accelerated pkts/Total pkts : 513956107/533355811 (96%)
F2Fed pkts/Total pkts : 19399704/533355811 (3%)
F2V pkts/Total pkts : 4110751/533355811 (0%)
CPASXL pkts/Total pkts : 7421495/533355811 (1%)
PSLXL pkts/Total pkts : 332024567/533355811 (62%)
CPAS pipeline pkts/Total pkts : 0/533355811 (0%)
PSL pipeline pkts/Total pkts : 0/533355811 (0%)
CPAS inline pkts/Total pkts : 0/533355811 (0%)
PSL inline pkts/Total pkts : 0/533355811 (0%)
QOS inbound pkts/Total pkts : 0/533355811 (0%)
QOS outbound pkts/Total pkts : 0/533355811 (0%)
Corrected pkts/Total pkts : 0/533355811 (0%)
[Expert@:0]# fwaccel stat
+---------------------------------------------------------------------------------+
|Id|Name |Status |Interfaces |Features |
+---------------------------------------------------------------------------------+
|0 |SND |enabled |Mgmt,eth1-03,eth2-01, |
| | | |eth1-06,eth1-07,eth1-08, |
| | | |eth2-02,eth2-03,eth2-04 |Acceleration,Cryptography |
| | | | |Crypto: Tunnel,UDPEncap,MD5, |
| | | | |SHA1,NULL,3DES,DES,AES-128, |
| | | | |AES-256,ESP,LinkSelection, |
| | | | |DynamicVPN,NatTraversal, |
| | | | |AES-XCBC,SHA256,SHA384 |
+---------------------------------------------------------------------------------+

Accept Templates : disabled by Firewall
Layer Trust to Walled Garden disables template offloads from rule #9
Throughput acceleration still enabled.
Drop Templates : enabled
NAT Templates : disabled by Firewall
Layer Trust to Walled Garden disables template offloads from rule #9
Throughput acceleration still enabled.

The accelerated packets are at 96%, with F2F packets at 3% - but I'm wondering if focusing on increasing the number of accelerated connections would improve performance at all? The second output shows that template offloads are disabled from rule #9, however that isn't entirely accurate. I am using inline layers in the ruleset, and the "Trust to Walled Garden" inline layer is right above the clean-up rule. Rule XXX.9 is a rule for our Linux NFS servers, and I believe NFS services are known to impact SecureXL templating; however, I thought that with this rule being so close to the bottom of the ruleset it wouldn't have this impact on connection templating.
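
(For completeness, the accept template list itself can be inspected directly; I've left the output out for brevity, and the wc -l pipe is just a rough count:)

[Expert@:0]# fwaccel templates | head -20
[Expert@:0]# fwaccel templates | wc -l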

 

We are running R80.40 T158 on a 15000 appliance. It has 32 cores, 4 of which are assigned to SND. Given the gateway is accelerating 96% of the packets, would it be a good idea to increase the number of SND cores?
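
(For reference, fw ctl affinity -l -r shows the current core assignment, i.e. which cores are handling interfaces/SND versus firewall workers:)

[Expert@:0]# fw ctl affinity -l -r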

 

Any advice would be appreciated, as always! 😊

 

Thanks,

 

Aaron.

PhoneBoy
Admin

Output of enabled_blades?
You have stuff in PSLXL, which suggests some level of advanced inspection is being done for access control or threat prevention.
I assume that would impact templating also.

That said, 96% of your packets are getting accelerated.
Unless there's an actual performance issue, I'd say you're already in decent shape.

the_rock
Legend

I will let @Timothy_Hall respond to this, as he is, in my opinion, the guru of SecureXL.

caw001
Employee

Instead of a manual CoreXL/SND assignment, it may be worth looking into enabling CoreXL Dynamic Balancing, since your appliances support it and you're on R80.40:

https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...
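
On R80.40 with a recent Jumbo it's toggled from expert mode; the exact command name and flags are per the SK above, so treat these as illustrative:

[Expert@:0]# dynamic_split -o enable   # enable Dynamic Balancing (a reboot may be required)
[Expert@:0]# dynamic_split -p          # print the current state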

AaronCP
Advisor

Hey @caw001,

 

Thanks for the SK! I will definitely be looking into implementing this.

Timothy_Hall
Legend

Need to see output of enabled_blades and netstat -ni.

The 96% of traffic that is accelerated is only being handled by your 4 SND cores; netstat -ni will tell whether they are able to keep up. You will probably need more SND cores.

Increasing the templating/conns rate will probably not make a huge difference unless you have a very high new-connections rate (you can check this in cpview), due to the use of column-based matching starting in R80.10. Rulebase lookups are done on the worker/instance cores anyway, which I'd imagine are not very busy.
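
If you want to confirm the workers aren't busy, fw ctl multik stat shows per-instance (worker) connection counts and peaks, and cpview shows per-core utilization under CPU:

[Expert@:0]# fw ctl multik stat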

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
AaronCP
Advisor

Hi @Timothy_Hall,

 

Here is the output of enabled_blades and netstat -ni:

 

[Expert@:0]# enabled_blades
fw vpn urlf av appi ips identityServer SSL_INSPECT anti_bot mon

 

[Expert@:0]# netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
Mgmt 1500 0 384765362 0 20882 15248 398446895 0 0 0 BMRU
eth1-03 1500 0 902378366 0 5634 12839 697623630 0 0 0 BMRU
eth1-03.581 1500 0 900376330 0 226076 0 695448107 0 1712314 0 BMRU
eth1-03.686 1500 0 361402 0 92 0 722772 0 2 0 BMRU
eth1-03.687 1500 0 637361 0 22234 0 1168873 0 7 0 BMRU
eth1-03.743 1500 0 378622 0 0 0 284656 0 10 0 BMRU
eth1-06 1500 0 107088539 0 3392 0 123824500 0 0 0 BMRU
eth1-06.245 1500 0 730980 0 24 0 535540 0 6 0 BMRU
eth1-06.653 1500 0 1258448 0 0 0 5162 0 0 0 BMRU
eth1-06.714 1500 0 0 0 0 0 1700 0 0 0 BMRU
eth1-06.1237 1500 0 105047172 0 505 0 123282872 0 382714 0 BMRU
eth1-07 1500 0 87130863 0 0 0 213715171 0 0 0 BMRU
eth1-07.883 1500 0 20684965 0 648 0 30742018 0 790 0 BMRU
eth1-07.1500 1500 0 62166146 0 13 0 180804616 0 556382 0 BMRU
eth1-07.1919 1500 0 2375698 0 0 0 1790027 0 19 0 BMRU
eth1-07.1920 1500 0 1907809 0 1898 0 379292 0 4 0 BMRU
eth1-08 1500 0 16764049 0 10337 0 11586790 0 0 0 BMRU
eth2-01 1500 0 846477148 0 10176 0 1477009658 0 0 0 BMRU
eth2-01.10 1500 0 846467807 0 0 0 1477010086 0 4 0 BMRU
eth2-02 1500 0 2711422160 3 10176 0 2894806629 0 0 0 BMRU
eth2-02.91 1500 0 30318227 0 0 0 1965481 0 4 0 BMRU
eth2-02.155 1500 0 39643313 0 0 0 21983075 0 0 0 BMRU
eth2-02.156 1500 0 40983565 0 0 0 22244626 0 0 0 BMRU
eth2-02.176 1500 0 17 0 0 0 1699 0 0 0 BMRU
eth2-02.177 1500 0 31 0 0 0 1706 0 0 0 BMRU
eth2-02.178 1500 0 24 0 0 0 1706 0 0 0 BMRU
eth2-02.179 1500 0 1377013 0 0 0 6134425 0 0 0 BMRU
eth2-02.286 1500 0 17 0 0 0 1697 0 0 0 BMRU
eth2-02.302 1500 0 0 0 0 0 1697 0 0 0 BMRU
eth2-02.315 1500 0 0 0 0 0 1697 0 0 0 BMRU
eth2-02.397 1500 0 1457829243 0 0 0 1431690269 0 0 0 BMRU
eth2-02.544 1500 0 2876789 0 0 0 2681247 0 0 0 BMRU
eth2-02.582 1500 0 2082825 0 0 0 3620630 0 0 0 BMRU
eth2-02.652 1500 0 533620047 0 0 0 781447417 0 0 0 BMRU
eth2-02.1950 1500 0 602706466 0 830 0 623030146 0 4 0 BMRU
eth2-03 1500 0 32875317 0 0 0 68723287 0 0 0 BMRU
eth2-04 1500 0 2984694048 0 0 0 2296906261 0 0 0 BMRU
lo 65536 0 4407197 0 0 0 4407197 0 0 0 ALPNORU
vpnt11 1400 0 0 0 0 0 0 0 0 0 MOPRU

Timothy_Hall
Legend

If you look at just the stats for the leading non-tagged physical interfaces, it appears that your SNDs are keeping up with the handling of packets.

However the statistics on the tagged subinterfaces are a little strange:

eth1-03 1500 0 902378366 0 5634 12839 697623630 0 0 0 BMRU
eth1-03.581 1500 0 900376330 0 226076 0 695448107 0 1712314 0 BMRU
eth1-03.686 1500 0 361402 0 92 0 722772 0 2 0 BMRU
eth1-03.687 1500 0 637361 0 22234 0 1168873 0 7 0 BMRU
eth1-03.743 1500 0 378622 0 0 0 284656 0 10 0 BMRU

Note the RX-DRP and TX-DRP counters. Normally the RX-DRP and TX-DRP counters accumulated across all the subinterfaces should add up to what is displayed on the leading interface, but that isn't happening here. As an example, if you add up RX-DRP for the four subinterfaces the sum is 248,402 RX-DRPs, but the leading interface is only showing 5,634 of them. Also, the large number of TX-DRPs on eth1-03.581 is rather concerning, as it is pretty rare to see TX-DRPs at all. This may be some kind of change in how the counters are reported in the latest network drivers.
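
If you want to reproduce the arithmetic, a quick awk over the netstat -ni output sums the subinterface drop counters (RX-DRP and TX-DRP are columns 6 and 10 here):

[Expert@:0]# netstat -ni | awk '$1 ~ /^eth1-03\./ {rx+=$6; tx+=$10} END {print "RX-DRP:", rx, "TX-DRP:", tx}'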

Please post the following outputs:

ethtool -i eth1-03

ethtool -S eth1-03

mq_mng -o -v -a

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
AaronCP
Advisor

Hey @Timothy_Hall, as requested:

 

[Expert@:0]# ethtool -i eth1-03
driver: igb
version: 5.3.5.20
firmware-version: 1.63, 0x800009fb
expansion-rom-version:
bus-info: 0000:86:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
[Expert@:0]# ethtool -S eth1-03
NIC statistics:
rx_packets: 1723237140
tx_packets: 1315227525
rx_bytes: 1156760390227
tx_bytes: 555185499993
rx_broadcast: 1884390
tx_broadcast: 61989
rx_multicast: 1085635
tx_multicast: 4
multicast: 1085635
collisions: 0
rx_crc_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
tx_timeout_count: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 1156760390227
tx_dma_out_of_sync: 0
lro_aggregated: 0
lro_flushed: 0
tx_smbus: 0
rx_smbus: 0
dropped_smbus: 0
os2bmc_rx_by_bmc: 0
os2bmc_tx_by_bmc: 0
os2bmc_tx_by_host: 0
os2bmc_rx_by_host: 0
tx_hwtstamp_timeouts: 0
rx_hwtstamp_cleared: 0
rx_errors: 0
tx_errors: 0
tx_dropped: 0
rx_length_errors: 0
rx_over_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 29856
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_queue_0_packets: 326309642
tx_queue_0_bytes: 129405981266
tx_queue_0_restart: 24602
tx_queue_1_packets: 312273797
tx_queue_1_bytes: 128452561252
tx_queue_1_restart: 29415
tx_queue_2_packets: 346201163
tx_queue_2_bytes: 142140083212
tx_queue_2_restart: 37438
tx_queue_3_packets: 330442145
tx_queue_3_bytes: 144269755320
tx_queue_3_restart: 37282
rx_queue_0_packets: 425435806
rx_queue_0_bytes: 289861819616
rx_queue_0_drops: 6777
rx_queue_0_csum_err: 0
rx_queue_0_alloc_failed: 0
rx_queue_1_packets: 464596030
rx_queue_1_bytes: 320536125686
rx_queue_1_drops: 7777
rx_queue_1_csum_err: 0
rx_queue_1_alloc_failed: 0
rx_queue_2_packets: 392166712
rx_queue_2_bytes: 238191335648
rx_queue_2_drops: 7731
rx_queue_2_csum_err: 0
rx_queue_2_alloc_failed: 0
rx_queue_3_packets: 441005609
rx_queue_3_bytes: 294379528280
rx_queue_3_drops: 7571
rx_queue_3_csum_err: 0
rx_queue_3_alloc_failed: 0
[Expert@:0]# mq_mng -o -v -a
Total 32 cores. Multiqueue 4 cores: 0,16,1,17
i/f type state mode cores
------------------------------------------------------------------------------------------------
Mgmt igb Up Off 0(58)
Sync igb Down Auto
eth1-01 igb Down Auto
eth1-02 igb Down Auto
eth1-03 igb Up Auto (4/4) 0(67),16(68),1(69),17(70)
eth1-04 igb Down Auto
eth1-05 igb Down Auto
eth1-06 igb Up Auto (4/4) 0(74),16(75),1(76),17(77)
eth1-07 igb Up Auto (4/4) 0(79),16(80),1(81),17(85)
eth1-08 igb Up Auto (4/4) 0(91),16(92),1(93),17(94)
eth2-01 ixgbe Up Auto (4/4) 0(95),16(96),1(97),17(98)
eth2-02 ixgbe Up Auto (4/4) 0(100),16(101),1(102),17(103)
eth2-03 ixgbe Up Auto (4/4) 0(105),16(108),1(109),17(110)
eth2-04 ixgbe Up Auto (4/4) 0(112),16(113),1(114),17(115)
eth3-01 ixgbe Down Auto
eth3-02 ixgbe Down Auto

core interfaces queue irq rx packets tx packets
------------------------------------------------------------------------------------------------
0 eth2-01 eth2-01-TxRx-0 95 391755486 526257210
eth2-03 eth2-03-TxRx-0 105 149460 223506
eth2-02 eth2-02-TxRx-0 100 1281476432 1352246306
eth1-08 eth1-08-TxRx-0 91 2030545 10504972
eth2-04 eth2-04-TxRx-0 112 1046597577 1532525455
Mgmt Mgmt-TxRx-0 58 630571859 701777770
eth1-06 eth1-06-TxRx-0 74 34290651 43936538
eth1-07 eth1-07-TxRx-0 79 43110117 108583315
eth1-03 eth1-03-TxRx-0 67 425444673 326316946
1 eth2-01 eth2-01-TxRx-2 97 299676763 1152643147
eth2-03 eth2-03-TxRx-2 109 5959154 77952
eth2-02 eth2-02-TxRx-2 102 1195150233 1274034340
eth1-08 eth1-08-TxRx-2 93 21901068 5635461
eth2-04 eth2-04-TxRx-2 114 2296817063 835793263
eth1-06 eth1-06-TxRx-2 76 31729108 51386574
eth1-07 eth1-07-TxRx-2 81 39356106 93232418
eth1-03 eth1-03-TxRx-2 69 392179696 346210147
16 eth2-01 eth2-01-TxRx-1 96 402867786 473040336
eth2-03 eth2-03-TxRx-1 108 50962712 124307865
eth2-02 eth2-02-TxRx-1 101 1354595663 1473063739
eth1-08 eth1-08-TxRx-1 92 2369115 1812140
eth2-04 eth2-04-TxRx-1 113 1086886130 1014894149
eth1-06 eth1-06-TxRx-1 75 13455340 49546067
eth1-07 eth1-07-TxRx-1 80 41770919 92533996
eth1-03 eth1-03-TxRx-1 68 464611903 312277386
17 eth2-01 eth2-01-TxRx-3 98 371811514 377583107
eth2-03 eth2-03-TxRx-3 110 95083 39849
eth2-02 eth2-02-TxRx-3 103 1287785027 1380370819
eth1-08 eth1-08-TxRx-3 94 3988027 1514479
eth2-04 eth2-04-TxRx-3 115 1133430146 960761347
eth1-06 eth1-06-TxRx-3 77 53846334 61167156
eth1-07 eth1-07-TxRx-3 85 43126713 103445698
eth1-03 eth1-03-TxRx-3 70 441012962 330455521

Timothy_Hall
Legend

tx_queue_0_restart: 24602
tx_queue_1_restart: 29415
tx_queue_2_restart: 37438
tx_queue_3_restart: 37282

Well, that explains the TX-DRPs; it is just odd to bottleneck like that on the TX side of the interface instead of the RX. It's almost as if a crapload of traffic is flooding into eth1-03.581 from multiple other interfaces and the TX ring buffer is filling up. Recommendations:

1) Change the static split from 4/28 to 8/24, or enable Dynamic Balancing/Split, which should increase the interface queues from 4 to 8 assuming the NIC hardware supports that many (the igb driver supports up to 16). See the command sketch after this list.

2) If TX-DRPs/restarts persist after the split change, consider bonding two physical interfaces with 802.3ad to replace physical interface eth1-03; it looks like you have some unused interfaces available. It is also possible that flow control is enabled on the switchport attached to eth1-03, and that it is not keeping up and is sending pause frames, thus causing the TX queue restarts on the firewall.

3) Also, in general, make sure that the destination is the object "Internet" and not "Any" in any rules/layers that are implementing APCL/URLF, to keep non-Internet traffic from getting pulled into the Medium Path inappropriately.
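
A rough sketch of the relevant commands for 1) and 2); the cpconfig menu is interactive, and the bond member interfaces below are just placeholders for your unused ports:

# 1) Static split change (menu-driven, requires a reboot):
[Expert@:0]# cpconfig
# -> Configure Check Point CoreXL -> set firewall instances to 24, leaving 8 SND cores

# 2) Check whether flow control / pause frames are enabled on eth1-03:
[Expert@:0]# ethtool -a eth1-03

# Example 802.3ad bond in clish using two spare ports:
> add bonding group 1
> set bonding group 1 mode 8023AD
> add bonding group 1 interface eth1-04
> add bonding group 1 interface eth1-05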

Other than those items things look pretty good.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
AaronCP
Advisor

Thanks, @Timothy_Hall!
