Khalid_Aftas
Contributor

VSX performance issue on R80.30 Take 219 (3.10)

Hi CheckMates,

 

I've been encountering an issue for 4 months, after a migration from old 14000 boxes running R77.30 to 15000-series appliances on R80.30.

 

CPU core saturation could be seen immediately after the migration. After 4 months of investigation with TAC, the first issue identified was a bug where most TCP 443/445 traffic was taking the medium path; we used the fast_accel feature to statically accelerate it. That bug was supposed to be fixed in JHF Take 219, and it seems it was, as we no longer use fast_accel today.

 

But performance is still an issue: our fwk instances are maxed out, and with them all the related cores, during the business day.

 

Technical info:

VSX cluster running R80.30 Take 219 (new 3.10 kernel).

Blades: FW + IPS

Stats:



[Expert@EU93A0391-OSS:1]# fwaccel stats -s
Accelerated conns/Total conns : 27355/185090 (14%)
Accelerated pkts/Total pkts : 981191859249/1466990930587 (66%)
F2Fed pkts/Total pkts : 8887485442/1466990930587 (0%)
F2V pkts/Total pkts : 2802683200/1466990930587 (0%)
CPASXL pkts/Total pkts : 0/1466990930587 (0%)
PSLXL pkts/Total pkts : 476911585896/1466990930587 (32%)
QOS inbound pkts/Total pkts : 0/1466990930587 (0%)
QOS outbound pkts/Total pkts : 0/1466990930587 (0%)
Corrected pkts/Total pkts : 0/1466990930587 (0%)
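As a sanity check, fwaccel prints truncated integer percentages; recomputing them from the raw packet counts above (a quick awk sketch, figures copied from that output) shows roughly a third of all packets in the medium path:

```shell
# Recompute path shares from the raw "fwaccel stats -s" packet counts above;
# fwaccel itself truncates to whole percentages.
total=1466990930587
acc=$(awk -v t="$total" 'BEGIN { printf "%.1f", 100 * 981191859249 / t }')
pslxl=$(awk -v t="$total" 'BEGIN { printf "%.1f", 100 * 476911585896 / t }')
f2f=$(awk -v t="$total" 'BEGIN { printf "%.1f", 100 * 8887485442 / t }')
echo "accelerated ${acc}%  pslxl ${pslxl}%  f2f ${f2f}%"
# prints: accelerated 66.9%  pslxl 32.5%  f2f 0.6%
```

So about 32.5% of packets are handled in PSLXL by the fwk instances, even though only 14% of connections are templated.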

 

[Expert@xxxxxxx:0]# fw ctl affinity -l
eth3-01: CPU 0
eth3-02: CPU 0
Mgmt: CPU 0
VS_0: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_0 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_1: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_1 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_1 pepd: CPU 16
VS_1 pdpd: CPU 17
VS_2: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_2 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_3: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
VS_3 fwk: CPU 4 5 6 7 8 9 10 11 12 13 14 15 20 21 22 23 24 25 26 27 28 29 30 31
Interface eth1-01: has multi queue enabled
Interface eth1-02: has multi queue enabled

 


[Expert@EU93A0391-OSS:1]# fw ctl multik stat
ID | Active | CPU | Connections | Peak
----------------------------------------------
0 | Yes | 4-15+ | 11859 | 16320
1 | Yes | 4-15+ | 10365 | 14918
2 | Yes | 4-15+ | 11278 | 14150
3 | Yes | 4-15+ | 10394 | 14274
4 | Yes | 4-15+ | 9037 | 12512
5 | Yes | 4-15+ | 9715 | 14493
6 | Yes | 4-15+ | 10179 | 13757
7 | Yes | 4-15+ | 10707 | 15184
8 | Yes | 4-15+ | 9731 | 13823
9 | Yes | 4-15+ | 10668 | 12959
10 | Yes | 4-15+ | 10530 | 15033
11 | Yes | 4-15+ | 9930 | 14691
12 | Yes | 4-15+ | 1673 | 13807
13 | Yes | 4-15+ | 9023 | 12942
14 | Yes | 4-15+ | 8952 | 14260
15 | Yes | 4-15+ | 9620 | 13936
16 | Yes | 4-15+ | 10442 | 13535
17 | Yes | 4-15+ | 10177 | 13863
18 | Yes | 4-15+ | 2597 | 13612
19 | Yes | 4-15+ | 9941 | 13543
20 | Yes | 4-15+ | 7191 | 14152
21 | Yes | 4-15+ | 9821 | 12975
22 | Yes | 4-15+ | 9721 | 12999
23 | Yes | 4-15+ | 6742 | 13279

 

 
 

Main VS running 18 fwk instances.

 

All CPU cores handling VS traffic are at 80% usage on average, and most of them max out during business hours.

 

We have no lead at the moment with TAC (pretty slow responses, and limited knowledge, to be honest).

 

 

 

11 Replies
Timothy_Hall
Champion

In your case the high CPU may be caused by two things:

1) IPS - to identify if this is where you need to focus your efforts, try temporarily disabling IPS in all VS's and see how that impacts CPU usage in the fwk's.  If it reduces it substantially you need to tune your IPS config.

2) Rulebase lookup overhead - your connections templating rate is somewhat low (14%), please post output of fwaccel stat and try to make sure templating is enabled as far as possible in your rulebase.

Also provide output of netstat -ni to ensure network is running cleanly, and free -m to ensure box is not low on memory and swapping.
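With this many VLAN sub-interfaces, eyeballing netstat -ni is error-prone; a small filter like this (illustrative sketch, field positions taken from the Gaia netstat -ni layout shown later in this thread) lists only the interfaces with RX drops:

```shell
# List interfaces with a non-zero RX-DRP count from saved "netstat -ni"
# output (field 6 is RX-DRP in this layout). Two sample rows stand in
# for the real capture.
cat <<'EOF' > /tmp/netstat.sample
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
bond5.401 1500 0 439555314550 0 0 0 257437198770 0 0 0 BMRU
bond5.2103 1500 0 2186828356 0 1765878 0 2383220866 0 0 0 BMRU
EOF
drops=$(awk 'NR > 1 && $6 > 0 { print $1 }' /tmp/netstat.sample)
echo "interfaces with RX drops: $drops"
```

On a real capture, skip the "Kernel Interface table" banner line as well (for example with `netstat -ni | tail -n +2`).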

Tagging @Kaspars_Zibarts as well.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Khalid_Aftas
Contributor

Thanks a lot Timothy

 

1) IPS: I disabled IPS via the CLI ("ips off -n") on the main VS, waited 30 minutes to observe, and saw no improvement.

 

2) Rulebase: we are running the same rulebase as on R77.30. It is quite a big one (4000+ rules); there is a separate project to segment and clean it up, but I would assume R80.30 should handle it?

Requested output:

[Expert@EU93A0391-OSS:1]# fwaccel stat
+-----------------------------------------------------------------------------+
|Id|Name |Status |Interfaces |Features |
+-----------------------------------------------------------------------------+
|0 |SND |enabled |eth3-01,eth3-02,eth1-01, |
| | | |eth1-02 |Acceleration,Cryptography |
| | | | |Crypto: Tunnel,UDPEncap,MD5, |
| | | | |SHA1,NULL,3DES,DES,CAST, |
| | | | |CAST-40,AES-128,AES-256,ESP, |
| | | | |LinkSelection,DynamicVPN, |
| | | | |NatTraversal,AES-XCBC,SHA256 |
+-----------------------------------------------------------------------------+

Accept Templates : disabled by Firewall
Layer Policy-DC-IDMZ Security disables template offloads from rule #175
Throughput acceleration still enabled.
Drop Templates : enabled
NAT Templates : disabled by Firewall
Layer Policy-DC-IDMZ Security disables template offloads from rule #175
Throughput acceleration still enabled.

 


[Expert@EU93A0391-OSS:1]# netstat -ni
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
bond5.401 1500 0 439555314550 0 0 0 257437198770 0 0 0 BMRU
bond5.948 1500 0 18722898 0 0 0 6879533 0 0 0 BMRU
bond5.1500 1500 0 0 0 0 0 28447 0 0 0 BMRU
bond5.1600 1500 0 19704717405 0 0 0 4756127947 0 0 0 BMRU
bond5.1610 1500 0 5019853613 0 0 0 18546394916 0 0 0 BMRU
bond5.1620 1500 0 1520265560 0 0 0 495719503 0 0 0 BMRU
bond5.1630 1500 0 552798593 0 0 0 1794900083 0 0 0 BMRU
bond5.1640 1500 0 91729713 0 0 0 92326654 0 0 0 BMRU
bond5.1650 1500 0 223597930 0 0 0 1329620910 0 0 0 BMRU
bond5.1660 1500 0 81875355 0 0 0 77538102 0 0 0 BMRU
bond5.1670 1500 0 141627455 0 0 0 94948303 0 0 0 BMRU
bond5.1680 1500 0 2824010896 0 0 0 8154361713 0 0 0 BMRU
bond5.1690 1500 0 849356879 0 0 0 340910789 0 0 0 BMRU
bond5.1691 1500 0 187572956 0 0 0 27545569 0 0 0 BMRU
bond5.1692 1500 0 353739673 0 0 0 395304253 0 0 0 BMRU
bond5.1760 1500 0 293278479 0 0 0 90423211 0 0 0 BMRU
bond5.1770 1500 0 62209259588 0 0 0 34204939878 0 0 0 BMRU
bond5.1780 1500 0 35720 0 0 0 26746 0 0 0 BMRU
bond5.2100 1500 0 780575632 0 0 0 441135663 0 0 0 BMRU
bond5.2102 1500 0 14 0 0 0 27556 0 0 0 BMRU
bond5.2103 1500 0 2186828356 0 1765878 0 2383220866 0 0 0 BMRU
bond5.2107 1500 0 429641284 0 1394 0 404485052 0 0 0 BMRU
bond5.2111 1500 0 76241432 0 4192 0 131623315 0 0 0 BMRU
bond5.2113 1500 0 260697918 0 0 0 151394245 0 0 0 BMRU
bond5.2115 1500 0 176 0 0 0 514948 0 0 0 BMRU
bond5.2116 1500 0 47740157462 0 145183 0 43532801495 0 0 0 BMRU
bond5.2117 1500 0 88470001929 0 30100 0 349959453544 0 0 0 BMRU
bond5.2122 1500 0 1400579562 0 7632 0 1538412610 0 0 0 BMRU
bond5.2123 1500 0 579411 0 684 0 2732185 0 0 0 BMRU
bond5.2124 1500 0 114179992 0 110992 0 126049379 0 0 0 BMRU
bond5.2127 1500 0 2345030293 0 1561 0 29915630 0 0 0 BMRU
bond5.2128 1500 0 1668734424 0 101522 0 1319899210 0 0 0 BMRU
bond5.2129 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2131 1500 0 29 0 0 0 30292 0 0 0 BMRU
bond5.2132 1500 0 1149982998 0 3124 0 1147661871 0 0 0 BMRU
bond5.2133 1500 0 361757781 0 36219 0 365482523 0 0 0 BMRU
bond5.2134 1500 0 3010563555 0 103765 0 3513107603 0 0 0 BMRU
bond5.2135 1500 0 34461707 0 5177 0 38349343 0 0 0 BMRU
bond5.2136 1500 0 2761422913 0 45039 0 4404252449 0 0 0 BMRU
bond5.2137 1500 0 797890 0 0 0 26746 0 0 0 BMRU
bond5.2138 1500 0 3593125 0 1864 0 5883503 0 0 0 BMRU
bond5.2139 1500 0 23456805357 0 3536643 0 26989917876 0 0 0 BMRU
bond5.2140 1500 0 201587474 0 754 0 364655031 0 0 0 BMRU
bond5.2144 1500 0 154414820 0 29038 0 159334409 0 0 0 BMRU
bond5.2145 1500 0 37613516 0 9563 0 37515140 0 0 0 BMRU
bond5.2146 1500 0 249394262 0 1963 0 220754407 0 0 0 BMRU
bond5.2147 1500 0 6596859 0 633 0 8095319 0 0 0 BMRU
bond5.2148 1500 0 29853400 0 699 0 12331517 0 0 0 BMRU
bond5.2150 1500 0 749401 0 0 0 26797 0 0 0 BMRU
bond5.2151 1500 0 1222499524 0 0 0 1092391920 0 0 0 BMRU
bond5.2152 1500 0 584696119 0 0 0 609351194 0 0 0 BMRU
bond5.2153 1500 0 298373 0 0 0 26746 0 0 0 BMRU
bond5.2154 1500 0 26856251645 0 0 0 24230177669 0 0 0 BMRU
bond5.2156 1500 0 794499 0 0 0 26746 0 0 0 BMRU
bond5.2159 1500 0 816268 0 0 0 1408643 0 0 0 BMRU
bond5.2160 1500 0 192393432 0 1258 0 161404518 0 0 0 BMRU
bond5.2161 1500 0 1178119490 0 8 0 382957444 0 0 0 BMRU
bond5.2162 1500 0 675023067 0 1256 0 589829812 0 0 0 BMRU
bond5.2163 1500 0 2339131127 0 0 0 918364517 0 0 0 BMRU
bond5.2166 1500 0 299501412 0 20076 0 287746578 0 0 0 BMRU
bond5.2167 1500 0 5986133952 0 425775 0 13625094296 0 0 0 BMRU
bond5.2168 1500 0 596744 0 645 0 1423609 0 0 0 BMRU
bond5.2170 1500 0 1451549 0 656 0 2563649 0 0 0 BMRU
bond5.2171 1500 0 5262270 0 0 0 4164222 0 0 0 BMRU
bond5.2172 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2173 1500 0 41572185 0 1406 0 55701109 0 0 0 BMRU
bond5.2174 1500 0 61123605 0 0 0 41553070 0 0 0 BMRU
bond5.2175 1500 0 31794091 0 3213 0 69368598 0 0 0 BMRU
bond5.2176 1500 0 104855617 0 5319 0 110506388 0 0 0 BMRU
bond5.2177 1500 0 2226770473 0 2994 0 4925383331 0 0 0 BMRU
bond5.2178 1500 0 20677968466 0 590155 0 37310856614 0 0 0 BMRU
bond5.2179 1500 0 637946479 0 772 0 3056028089 0 0 0 BMRU
bond5.2181 1500 0 65646804 0 0 0 5012538 0 0 0 BMRU
bond5.2182 1500 0 17445607 0 0 0 17839615 0 0 0 BMRU
bond5.2183 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2184 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2185 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2186 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2187 1500 0 19021155 0 807 0 23059082 0 0 0 BMRU
bond5.2188 1500 0 16740545 0 825 0 13776664 0 0 0 BMRU
bond5.2189 1500 0 139258849 0 1083 0 47397173 0 0 0 BMRU
bond5.2190 1500 0 1715228 0 617 0 1440492 0 0 0 BMRU
bond5.2193 1500 0 5832067655 0 0 0 2659849948 0 0 0 BMRU
bond5.2194 1500 0 792529249 0 6 0 123736423 0 0 0 BMRU
bond5.2195 1500 0 35423574 0 798 0 31496471 0 0 0 BMRU
bond5.2196 1500 0 25058549 0 833 0 17697807 0 0 0 BMRU
bond5.2198 1500 0 199847301 0 1231 0 104357281 0 0 0 BMRU
bond5.2199 1500 0 9470521 0 740 0 9619069 0 0 0 BMRU
bond5.2213 1500 0 1052693731 0 97378 0 868371511 0 0 0 BMRU
bond5.2214 1500 0 2156084706 0 9979 0 1907390609 0 0 0 BMRU
bond5.2215 1500 0 4453200104 0 10031 0 4134079969 0 0 0 BMRU
bond5.2216 1500 0 5084653877 0 37634 0 4072045428 0 0 0 BMRU
bond5.2217 1500 0 9750016673 0 5650 0 8115083087 0 0 0 BMRU
bond5.2219 1500 0 1593491 0 1338 0 3735884 0 0 0 BMRU
bond5.2222 1500 0 73782280360 0 0 0 56292910658 0 0 0 BMRU
bond5.2224 1500 0 17768206 0 754 0 17696308 0 0 0 BMRU
bond5.2225 1500 0 292379821 0 1936 0 375455246 0 0 0 BMRU
bond5.2226 1500 0 6181747 0 5064 0 13853282 0 0 0 BMRU
bond5.2227 1500 0 5942399460 0 0 0 2670597905 0 0 0 BMRU
bond5.2228 1500 0 17930046 0 6331 0 21380554 0 0 0 BMRU
bond5.2229 1500 0 18739721 0 855 0 32815209 0 0 0 BMRU
bond5.2230 1500 0 1651833 0 29058 0 3605339 0 0 0 BMRU
bond5.2231 1500 0 1529707 0 55977 0 3821086 0 0 0 BMRU
bond5.2232 1500 0 2404849 0 84181 0 5909107 0 0 0 BMRU
bond5.2233 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2235 1500 0 160873543 0 770 0 168940118 0 0 0 BMRU
bond5.2236 1500 0 1478376616 0 0 0 1413058691 0 0 0 BMRU
bond5.2237 1500 0 21208641021 0 435 0 1588532538 0 0 0 BMRU
bond5.2239 1500 0 14157374631 0 393 0 1099860052 0 0 0 BMRU
bond5.2240 1500 0 4679360634 0 2882 0 4553414777 0 0 0 BMRU
bond5.2241 1500 0 11575305 0 1394 0 5956715 0 0 0 BMRU
bond5.2242 1500 0 3537749319 0 208 0 3362137462 0 0 0 BMRU
bond5.2244 1500 0 895454 0 571 0 1655734 0 0 0 BMRU
bond5.2245 1500 0 1415644 0 823 0 2192337 0 0 0 BMRU
bond5.2246 1500 0 30474841207 0 164321 0 16301098016 0 0 0 BMRU
bond5.2247 1500 0 10382442501 0 53764 0 5363982512 0 0 0 BMRU
bond5.2248 1500 0 2097582 0 2162 0 5736814 0 0 0 BMRU
bond5.2249 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2252 1500 0 629377 0 740 0 1652008 0 0 0 BMRU
bond5.2253 1500 0 579918380 0 849 0 239511579 0 0 0 BMRU
bond5.2254 1500 0 266302766 0 1988 0 600819980 0 0 0 BMRU
bond5.2260 1500 0 882595 0 588 0 2039218 0 0 0 BMRU
bond5.2261 1500 0 973295 0 684 0 1994880 0 0 0 BMRU
bond5.2262 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2263 1500 0 697306449 0 7806 0 516001193 0 0 0 BMRU
bond5.2265 1500 0 1286938 0 8050 0 1210043 0 0 0 BMRU
bond5.2266 1500 0 1031939 0 28303 0 2162165 0 0 0 BMRU
bond5.2267 1500 0 19418775 0 0 0 742892 0 0 0 BMRU
bond5.2268 1500 0 308672260 0 0 0 114853234 0 0 0 BMRU
bond5.2269 1500 0 187147005 0 0 0 139799886 0 0 0 BMRU
bond5.2270 1500 0 123163467 0 2153 0 134900781 0 0 0 BMRU
bond5.2271 1500 0 1934791 0 1055 0 2861251 0 0 0 BMRU
bond5.2272 1500 0 1610975970 0 0 0 825129292 0 0 0 BMRU
bond5.2273 1500 0 929357930 0 0 0 1145801265 0 0 0 BMRU
bond5.2274 1500 0 1307668 0 698 0 1644492 0 0 0 BMRU
bond5.2275 1500 0 1595650 0 712 0 2091858 0 0 0 BMRU
bond5.2276 1500 0 867119 0 762 0 1225813 0 0 0 BMRU
bond5.2277 1500 0 3746523 0 1250 0 4994907 0 0 0 BMRU
bond5.2278 1500 0 4013456 0 1821 0 4138306 0 0 0 BMRU
bond5.2279 1500 0 20204308 0 3104 0 7746010 0 0 0 BMRU
bond5.2280 1500 0 263184702 0 96856 0 61195444 0 0 0 BMRU
bond5.2283 1500 0 10909944 0 3281 0 11572570 0 0 0 BMRU
bond5.2284 1500 0 74248379 0 4610 0 36568516 0 0 0 BMRU
bond5.2285 1500 0 27528449 0 4559 0 26128790 0 0 0 BMRU
bond5.2286 1500 0 1559113 0 1377 0 778844 0 0 0 BMRU
bond5.2288 1500 0 224198938 0 6186 0 429506348 0 0 0 BMRU
bond5.2289 1500 0 103129184 0 34 0 22523139 0 0 0 BMRU
bond5.2290 1500 0 5827304 0 43 0 6834018 0 0 0 BMRU
bond5.2291 1500 0 789452 0 665 0 1746901 0 0 0 BMRU
bond5.2292 1500 0 3618501 0 1339 0 4996305 0 0 0 BMRU
bond5.2294 1500 0 7932890519 0 0 0 6628345176 0 0 0 BMRU
bond5.2296 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2297 1500 0 11489393 0 643 0 191288 0 0 0 BMRU
bond5.2298 1500 0 0 0 0 0 0 0 0 0 BMRU
bond5.2299 1500 0 0 0 0 0 0 0 0 0 BMRU
bond5.2300 1500 0 22123629562 0 37 0 22922508494 0 0 0 BMRU
bond5.2310 1500 0 195299128 0 0 0 188572828 0 0 0 BMRU
bond5.2311 1500 0 66983363 0 0 0 74283710 0 0 0 BMRU
bond5.2312 1500 0 225225167 0 0 0 140419517 0 0 0 BMRU
bond5.2313 1500 0 4868684 0 0 0 667863 0 0 0 BMRU
bond5.2314 1500 0 2380808 0 0 0 1762732 0 0 0 BMRU
bond5.2315 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2316 1500 0 1078240346 0 5941 0 1547178553 0 0 0 BMRU
bond5.2501 1500 0 69153285 0 0 0 31333069 0 0 0 BMRU
bond5.2511 1500 0 14 0 0 0 26746 0 0 0 BMRU
bond5.2512 1500 0 575959 0 830 0 1034694 0 0 0 BMRU
bond5.2991 1500 0 4284698 0 594 0 4307162 0 0 0 BMRU
lo 65536 0 6406582 0 0 0 6406582 0 0 0 ALMdORU

 

[Expert@EU93A0391-OSS:1]# free -m
total used free shared buff/cache available
Mem: 63952 12921 42358 14 8672 48981
Swap: 32767 0 32767

 

 

Kaspars_Zibarts
Employee

Re-run your netstat command on VS0 instead.

 

Kaspars_Zibarts
Employee

How is the performance of the SND cores, by the way? They are not overloaded?

Reading between the lines, your MQ is configured for cores 0, 1, 16, 17? Could you send the output of mq_mgt --show?

BTW, the config below is wrong, as I suspect cores 16 and 17 are used for MQ:

VS_1 pepd: CPU 16
VS_1 pdpd: CPU 17

Can you also share the output of top? Not the first iteration though, so we get realistic figures.

How many FWK instances do you have configured for each VS? VS1 is obviously 24.

Can you share cpview output for the busy VS (throughput, concurrent connections, CPS and PPS figures from the network section)?

Khalid_Aftas
Contributor

MQ does indeed use cores 16 and 17; those cores barely do anything, so we also assigned pepd to them (barely used too).

VS1 is the main VS (datacenter core); the other VSs are for testing/lab, with very low traffic.

For the top stats, I will summarize instead of pasting, since a workaround is now in place (more info below):

Most CPU cores assigned to VS1 were running high, and 4 of them were constantly maxed out.

The fwk instances for VS1 were also maxing out.

I re-enabled the fast_accel feature to force acceleration for some traffic (mainly HTTPS/SMB), as we had done as a workaround before JHF Take 219 (a bug where HTTPS/SMB, among others, was not accelerated), and there is a visible improvement: 25% less usage right away.

I again suspect the way this kind of traffic is handled in this version.
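For reference, the fast_accel workaround described above is driven from the CLI; the sketch below follows my recollection of the syntax in sk156672 ("SecureXL Fast Accelerator"), with placeholder addresses and ports, so verify the exact options against your JHF take before using it:

```shell
# Illustrative only: enable the Fast Accelerator feature and statically
# accelerate a known heavy flow. Addresses/port are placeholders and the
# syntax should be checked against sk156672 for your exact take.
fw ctl fast_accel enable
fw ctl fast_accel add 10.0.0.0/24 10.1.0.0/24 445 6   # SMB; protocol 6 = TCP
fw ctl fast_accel show_table                          # review the active rules
```

Note that fast_accel'ed flows bypass deep inspection, which is why this is only acceptable as a temporary measure.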

 

Is it possible to dump (via Linux) the content of the traffic being handled by a specific fwk process on VSX? In the past a Check Point engineer was able to do so, but I can't find the commands.

Kaspars_Zibarts
Employee

It would still help to see the top output; we don't need all of it, just the top 5-10 processes.

Khalid_Aftas
Contributor

Before the fast_accel workaround, 6 of the fwk instances were maxing out at 99.99%, and CPU cores were maxing out as well (flipping from one to another).

 

top - 15:59:45 up 18 days, 16:17, 1 user, load average: 17.60, 17.78, 18.18
Tasks: 546 total, 2 running, 544 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.0 us, 1.6 sy, 0.0 ni, 82.0 id, 0.0 wa, 0.0 hi, 16.4 si, 0.0 st
%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni, 80.0 id, 0.0 wa, 0.0 hi, 20.0 si, 0.0 st
%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni, 84.7 id, 0.0 wa, 0.0 hi, 15.3 si, 0.0 st
%Cpu3 : 0.0 us, 1.9 sy, 0.0 ni, 43.4 id, 0.0 wa, 0.0 hi, 54.7 si, 0.0 st
%Cpu4 : 45.5 us, 13.9 sy, 0.0 ni, 40.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 86.3 us, 12.7 sy, 0.0 ni, 1.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 32.0 us, 6.0 sy, 0.0 ni, 62.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 50.5 us, 13.9 sy, 0.0 ni, 35.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8 : 49.5 us, 10.7 sy, 0.0 ni, 39.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9 : 37.8 us, 10.2 sy, 0.0 ni, 52.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu10 : 43.1 us, 9.8 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu11 : 40.4 us, 10.1 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 : 48.0 us, 10.2 sy, 0.0 ni, 41.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu13 : 44.4 us, 11.1 sy, 0.0 ni, 44.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu14 : 44.4 us, 11.1 sy, 0.0 ni, 44.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 36.4 us, 10.1 sy, 0.0 ni, 53.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu16 : 0.0 us, 2.9 sy, 0.0 ni, 97.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu17 : 2.0 us, 4.0 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu18 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu19 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu20 : 45.0 us, 13.0 sy, 0.0 ni, 42.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu21 : 45.5 us, 5.0 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu22 : 92.2 us, 7.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu23 : 56.6 us, 9.1 sy, 0.0 ni, 34.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu24 : 41.6 us, 6.9 sy, 0.0 ni, 51.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu25 : 35.4 us, 8.1 sy, 0.0 ni, 56.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu26 : 44.0 us, 6.0 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu27 : 37.0 us, 7.0 sy, 0.0 ni, 56.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu28 : 49.5 us, 7.9 sy, 0.0 ni, 42.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu29 : 40.4 us, 10.1 sy, 0.0 ni, 49.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu30 : 37.4 us, 7.1 sy, 0.0 ni, 55.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu31 : 36.1 us, 7.2 sy, 0.0 ni, 56.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 65487764 total, 43391232 free, 13066104 used, 9030428 buff/cache
KiB Swap: 33554300 total, 33554300 free, 0 used. 50347312 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
80054 admin 0 -20 11.638g 7.949g 1.257g R 99.0 12.7 15164:03 5 fwk1_18
80048 admin 0 -20 11.638g 7.949g 1.257g R 98.1 12.7 12480:46 23 fwk1_12
80037 admin 0 -20 11.638g 7.949g 1.257g R 81.0 12.7 12417:08 30 fwk1_2
80039 admin 0 -20 11.638g 7.949g 1.257g R 76.2 12.7 12754:39 11 fwk1_4
80038 admin 0 -20 11.638g 7.949g 1.257g R 74.3 12.7 12898:10 7 fwk1_3
80050 admin 0 -20 11.638g 7.949g 1.257g R 72.4 12.7 11821:57 6 fwk1_14
80041 admin 0 -20 11.638g 7.949g 1.257g R 71.4 12.7 11956:38 13 fwk1_6
80043 admin 0 -20 11.638g 7.949g 1.257g R 71.4 12.7 12132:27 9 fwk1_8
80053 admin 0 -20 11.638g 7.949g 1.257g R 70.5 12.7 10948:32 14 fwk1_17
80058 admin 0 -20 11.638g 7.949g 1.257g S 69.5 12.7 10348:42 4 fwk1_22
80044 admin 0 -20 11.638g 7.949g 1.257g R 67.6 12.7 11733:59 4 fwk1_9
80049 admin 0 -20 11.638g 7.949g 1.257g R 67.6 12.7 11876:22 31 fwk1_13
81284 admin 20 0 884920 378136 42140 R 67.6 0.6 10958:41 20 fw_full
80036 admin 0 -20 11.638g 7.949g 1.257g R 66.7 12.7 11650:54 12 fwk1_1
80040 admin 0 -20 11.638g 7.949g 1.257g R 66.7 12.7 12721:16 10 fwk1_5
80035 admin 0 -20 11.638g 7.949g 1.257g S 65.7 12.7 10577:59 25 fwk1_0
80045 admin 0 -20 11.638g 7.949g 1.257g R 64.8 12.7 12111:29 21 fwk1_10
80046 admin 0 -20 11.638g 7.949g 1.257g R 64.8 12.7 12112:36 15 fwk1_11
80059 admin 0 -20 11.638g 7.949g 1.257g S 64.8 12.7 10223:50 28 fwk1_23
80055 admin 0 -20 11.638g 7.949g 1.257g S 63.8 12.7 11010:59 24 fwk1_19
80051 admin 0 -20 11.638g 7.949g 1.257g R 61.9 12.7 11766:18 15 fwk1_15
80052 admin 0 -20 11.638g 7.949g 1.257g R 61.9 12.7 10741:05 27 fwk1_16
80042 admin 0 -20 11.638g 7.949g 1.257g R 60.0 12.7 11906:00 22 fwk1_7
80056 admin 0 -20 11.638g 7.949g 1.257g S 60.0 12.7 10474:53 14 fwk1_20
80057 admin 0 -20 11.638g 7.949g 1.257g S 56.2 12.7 11162:50 26 fwk1_21
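To quantify "maxing out", the per-worker load can be averaged from a saved top batch run; a small sketch over a few of the rows above (field 9 is %CPU in this layout):

```shell
# Average the %CPU of the fwk worker rows from saved "top -b" output
# (field 9 is %CPU in the layout above); three sample rows stand in
# for the full listing.
cat <<'EOF' > /tmp/top.sample
80054 admin 0 -20 11.638g 7.949g 1.257g R 99.0 12.7 15164:03 5 fwk1_18
80048 admin 0 -20 11.638g 7.949g 1.257g R 98.1 12.7 12480:46 23 fwk1_12
80037 admin 0 -20 11.638g 7.949g 1.257g R 81.0 12.7 12417:08 30 fwk1_2
EOF
avg=$(awk '/fwk1_/ { sum += $9; n++ } END { printf "%.1f", sum / n }' /tmp/top.sample)
echo "average fwk %CPU over sample: $avg"
```

Run over the full listing, this gives a single trend figure to compare before and after a change such as the fast_accel workaround.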

Khalid_Aftas
Contributor

After 1 day of fast_accel statically accelerating some traffic (mainly SMB/HTTPS), the CPU situation is much better: a 25% decrease overall.

So does that mean this heavy traffic (datacenter core, with a lot of file transfer and web) is not handled correctly by the R80.30 code?

The workaround is acceptable in the short term only, as we lose all security inspection with this bypass.

Khalid_Aftas
Contributor

@Timothy_Hall @Kaspars_Zibarts  

 

I would like to get your view on the "lead" in the policy, because TAC keeps coming back to it.

To the best of my knowledge, rulebase matching improved significantly from R77.30 to R80.30, and the new hardware appliance is twice as powerful. At worst we should expect the same level of performance, not three times the load.

As for the proposal to clean and re-arrange the rulebase (3000 rules) for SXL templates: moving rules around has absolutely no impact on which path (SXL, PXL, F2F) the traffic takes; it only affects the formation of SecureXL Accept templates. We ran the same policy on the old hardware with R77.30 for 7 years without issues.

Reworking the policy is a project of its own that would take a year (a careful decommissioning process) given the criticality of that firewall.

Timothy_Hall
Champion

Since the fast_accel workaround, what does the distribution of traffic in the various processing paths look like now?  fwaccel stats -s

Please provide output of netstat -ni run from VS0.

Also assuming you only have FW and IPS blades enabled, try completely disabling IPS by unchecking the box on all VS objects and see what happens to the acceleration statistics.  At that point practically all traffic should be fully accelerated other than rulebase lookups at the start of new connections in F2F.  If you are still experiencing a lot of PXL traffic a debug will be necessary to figure out why. 

If practically all traffic is accelerated yet your Firewall Workers are still showing high CPU, that would seem to indicate excessive rulebase lookup overhead.  What is the connections/sec reported on the Overview screen of cpview?

Khalid_Aftas
Contributor

Important note: I used fast_accel with a combination of source, destination and protocol for specific known high-volume connections, not all traffic (that would overwhelm the 4 SND cores), and saw the decrease immediately (both in performance and in the accelerated vs. PSLXL percentages), but 31% of traffic still goes to PSLXL.

- IPS: fully disabled on the VS object & policy (fwaccel stats -r to reset the counters):

[Expert@EU93A0391-OSS:1]# fwaccel stats -s
Accelerated conns/Total conns : 18446744073709551356/18446744073709551512 (250%)
Accelerated pkts/Total pkts : 7153583/10400026 (68%)
F2Fed pkts/Total pkts : 87577/10400026 (0%)
F2V pkts/Total pkts : 21775/10400026 (0%)
CPASXL pkts/Total pkts : 0/10400026 (0%)
PSLXL pkts/Total pkts : 3158866/10400026 (30%)
QOS inbound pkts/Total pkts : 0/10400026 (0%)
QOS outbound pkts/Total pkts : 0/10400026 (0%)
Corrected pkts/Total pkts : 0/10400026 (0%)
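A note on the 250% figure: the fwaccel counters are unsigned 64-bit values, and my reading of the numbers (not an official statement) is that after a `fwaccel stats -r` reset, connections opened before the reset still decrement the connection counters when they close, so they underflow below zero. The printed values are 2^64 minus a small number, which the last digits show directly:

```shell
# 2^64 = 18446744073709551616. The printed values share every digit with it
# except the tail, so the signed counter values are just the tail differences:
echo $(( 551356 - 551616 ))   # "Accelerated conns" 18446744073709551356 -> -260
echo $(( 551512 - 551616 ))   # "Total conns"       18446744073709551512 -> -104
```

So the connection counters (and the 250%) are a reset artifact; the packet-path lines underneath are the meaningful part, and they show PSLXL still at ~30% even with IPS fully disabled.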

 

2 leads to deal with:

- What is this 30% going through PSLXL? I honestly believe that some traffic (445/443) not taking the correct path is still the issue here.

- Policy lookup overhead? Seems very unlikely.

 

In the past, R&D was able to dump the contents of the fwk Linux process to pinpoint what was causing the high CPU; I hope TAC will finally involve them.

