abihsot__
Advisor

ClusterXL failover due to restart of the Cluster module

Hi there,

 

Recently the gateway started doing failovers and I can't figure out what is happening.

2x appliances in active/passive mode (ClusterXL), R80.40 JHF 89.

I have been living with all those mux-related errors, since other posts said they are cosmetic and would be hidden from the logs in the next JHF, but that never happened...

Any idea where to start digging?

 

Apr 22 14:51:55 2021 firewall kernel: [fw4_1];CLUS-120200-1: Starting CUL mode because CPU-06 usage (93%) on the local member increased above the configured threshold (80%).
Apr 22 14:51:57 2021 firewall kernel: [fw4_1];CLUS-210300-1: Remote member 2 (state STANDBY -> DOWN) | Reason: Interface is down (Cluster Control Protocol packets are not received)
Apr 22 14:51:57 2021 firewall kernel: [fw4_5];fwldbcast_handle_retrans_request: Updated bchosts_mask to 1
Apr 22 14:51:57 2021 firewall kernel: [fw4_1];fwldbcast_handle_retrans_request: Updated bchosts_mask to 1
Apr 22 14:51:57 2021 firewall kernel: [fw4_4];fwldbcast_handle_retrans_request: Updated bchosts_mask to 1
Apr 22 14:51:59 2021 firewall kernel: [fw4_1];CLUS-214802-1: Remote member 2 (state DOWN -> STANDBY) | Reason: There is already an ACTIVE member in the cluster
Apr 22 14:52:17 2021 firewall kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<x.x.x.x,43037,x.x.x.x,1603,17>, fw_key:<x.x.x.x,43037,x.x.x.x,1603,17>, app: Route
Apr 22 14:52:20 2021 firewall kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<x.x.x.x,43037,x.x.x.x,1603,17>, fw_key:<x.x.x.x,43037,x.x.x.x,1603,17>, app: Route
Apr 22 14:52:20 2021 firewall kernel: [fw4_1];ws_mux_host_only_active_finalize_read_handler: ERROR: stream[1] is empty. mux_stat=1.
Apr 22 14:52:20 2021 firewall kernel: [fw4_1];ws_mux_read_handler_from_main_ex: ERROR: Finalize callback failed. cdir=2, mux_stat=1.
Apr 22 14:52:20 2021 firewall kernel: [fw4_1];mux_task_handler: ERROR: Failed to handle task. task=ffffc900b93e2810, app_id=3 (WS), mux_state=ffff88031370b018, curr_side 0, prev_side 0.
Apr 22 14:52:20 2021 firewall kernel: [fw4_1];mux_read_handler: ERROR: Failed to handle task queue. mux_opaque=ffff88031370b018.
Apr 22 14:52:20 2021 firewall kernel: [fw4_1];mux_active_read_handler_cb: ERROR: Failed to forward data to Mux.
Apr 22 14:52:20 2021 firewall kernel: [fw4_5];ws_mux_host_only_active_finalize_read_handler: ERROR: stream[1] is empty. mux_stat=1.
Apr 22 14:52:20 2021 firewall kernel: [fw4_5];ws_mux_read_handler_from_main_ex: ERROR: Finalize callback failed. cdir=2, mux_stat=1.
Apr 22 14:52:20 2021 firewall kernel: [fw4_5];mux_task_handler: ERROR: Failed to handle task. task=ffffc900c1de2e88, app_id=3 (WS), mux_state=ffff88060a39d018, curr_side 0, prev_side 0.
Apr 22 14:52:21 2021 firewall kernel: [fw4_5];mux_read_handler: ERROR: Failed to handle task queue. mux_opaque=ffff88060a39d018.
Apr 22 14:52:21 2021 firewall kernel: [fw4_5];mux_active_read_handler_cb: ERROR: Failed to forward data to Mux.
Apr 22 14:52:28 2021 firewall kernel: [fw4_1];fwldbcast_recv: delta sync connection with member 1 was lost and regained.567 updates were lost.
Apr 22 14:52:28 2021 firewall kernel: [fw4_1];fwldbcast_recv: received sequence 0x40cf9 (fragm 0, index 1), last processed seq 0x40ac1
Apr 22 14:52:28 2021 firewall kernel: [fw4_1];CLUS-100102-1: Failover member 1 -> member 2 | Reason: Member state has been changed due to restart of the Cluster module
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];mux_get_connh: WARNING: Trying to get connh when mux connection ended. mux_state=ffff880725f94018.
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];cmi_get_connh: there is no connh on mux_state context_id=100
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];mux_get_connh: WARNING: Trying to get connh when mux connection ended. mux_state=ffff880725f94018.
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];cmi_get_connh: there is no connh on mux_state context_id=189
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];mux_get_connh: WARNING: Trying to get connh when mux connection ended. mux_state=ffff880725f94018.
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];cmi_get_connh: there is no connh on mux_state context_id=103
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];mux_get_connh: WARNING: Trying to get connh when mux connection ended. mux_state=ffff880725f94018.
Apr 22 14:52:29 2021 firewall kernel: [fw4_5];cmi_get_connh: there is no connh on mux_state context_id=106

12 Replies
Tim_Tielens
Contributor

I have experienced something similar on VSX this week.
One VSX node just rebooted, but ClusterXL failed, rendering all the VSs unusable.

I notice a lot of simi_reorder entries...
We are on R80.40 JHF 94.

Apr 20 16:51:10 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:52:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:53:16 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:54:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:55:25 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:56:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:57:30 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:58:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 16:59:41 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:00:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:01:50 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:02:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:04:01 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:04:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:06:10 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:06:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:08:22 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx188,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx188,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:08:47 2021 VSXNODE02 kernel: [SIM4];simi_reorder_enqueue_packet: reached the limit of maximum enqueued packets! conn:<xx.x.xx189,47808,xx.x.xx255,47874,17>, fw_key:<xx.x.xx189,47808,xx.x.xx255,47874,17>, app: Route
Apr 20 17:11:42 2021 VSXNODE02 syslogd 1.4.1: restart.
Apr 20 17:11:42 2021 VSXNODE02 syslogd: local sendto: Network is unreachable
Apr 20 17:11:42 2021 VSXNODE02 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Apr 20 17:11:42 2021 VSXNODE02 kernel: ACPI: PCI Root Bridge [UNC0] (domain 0000 [bus 7f])

After the reboot I am also getting different simi_reorder entries, like in this SK:
Sporadic UDP (clear/encapsulated in tunnel) disconnection during install policy (checkpoint.com)


I see in your logs: (Cluster Control Protocol packets are not received)
Is your CCP link still working?
What is the output of cphaprob -a if?

 

MartinTzvetanov
Advisor

Hello,

 

Apr 22 14:51:57 2021 firewall kernel: [fw4_1];CLUS-210300-1: Remote member 2 (state STANDBY -> DOWN) | Reason: Interface is down (Cluster Control Protocol packets are not received)

 

This is the reason. I also observed such behavior with some of our customers. Actually the interface never went down, but the CCP packets didn't pass for some reason. Not sure if it's a hardware/cable issue or some rare condition in the path (the surrounding devices) causing problems.
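If you want to chase the hardware/cable angle, the standard Linux counters are a quick first check; the interface names below are placeholders for whichever links carry CCP in your cluster:

# per-interface error and drop counters (RX-ERR / RX-DRP columns)
netstat -ni

# NIC-level error counters for one specific interface
ethtool -S eth1-02 | grep -iE 'err|miss|drop'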

abihsot__
Advisor

Yes, true, it says CCP packets are not received, but the sync port did not go down. Now it will be a headache to chase this ghost.

Theoretical question: if the cluster is not able to exchange data over the sync interface, does it attempt to use any other clustered interfaces?

Also, I don't remember setting manual CCP mode. I think it was automatic...

 

cphaprob -a if

CCP mode: Manual (Unicast)

Required interfaces: 3

Required secured interfaces: 1

 

Interface Name:      Status:

Sync (S)             UP

Mgmt                 Non-Monitored

bond-x (LS)    UP

bond-y (LS)     UP

 

S - sync, LM - link monitor, HA/LS - bond type

Virtual cluster interfaces: 13

MartinTzvetanov
Advisor

By default CP sends CCP packets over the first and last VLAN of every physical interface, which is different from what passes through the Sync interface. The Sync interface is used for syncing connections; CCP packets are used for interface status checks. If CCP packets are not received, CP considers that interface down and the mechanism puts the whole member into the DOWN state.
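A quick way to verify this in practice is to confirm CCP traffic is actually arriving on a monitored interface; CCP runs over UDP port 8116, so a short capture during (or right after) an event should show periodic packets from the peer. The interface names here are just placeholders:

# which interfaces are monitored, and the CCP mode in use
cphaprob -a if

# watch for CCP packets from the peer on a monitored interface
tcpdump -ni Sync udp port 8116 -c 20
tcpdump -ni bond-x udp port 8116 -c 20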

abihsot__
Advisor

Thanks for the explanation! In that case my assumption is that a lower-layer problem did not cause this, as we would have had much bigger issues than that. For some reason it is happening only on this particular cluster (we have several more). The difference is that it has more blades enabled than the rest.

enabled_blades
fw urlf appi identityServer SSL_INSPECT content_awareness mon

 

Timothy_Hall
Champion

Based on the fact that Cluster Under Load (CUL) is being invoked, your firewall has very high CPU utilization. As a result, packets are not being processed fast enough, which results in the simi_reorder messages and probably disrupts the sync network as well. The fact that more blades are enabled here would seem to support that assertion, and the high CPU is the actual problem to be addressed.

Hopefully this can be fixed with some simple tuning. Please post the result of the "Super Seven" for further analysis and recommendations:

S7PAC - Super Seven Performance Assessment Command

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
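For reference, the "Super Seven" boil down to roughly the following commands (listed from memory, so verify against the S7PAC post above before relying on them):

fwaccel stat            # SecureXL status and template state
fwaccel stats -s        # accelerated vs. F2F packet ratios
grep -c ^processor /proc/cpuinfo && /sbin/cpuinfo    # core count and HyperThreading
fw ctl affinity -l -r   # CPU affinity of interfaces, workers and daemons
netstat -ni             # per-interface RX/TX errors and drops
fw ctl multik stat      # connections per firewall worker instance
cpstat os -f multi_cpu -o 1 -c 5    # repeated per-CPU utilisation samples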
abihsot__
Advisor

Thanks for the reply!

It is a physical appliance (CP5800) and it should not be loaded that much, at least from what I can see in the graphs, although those usually sample data once a minute or so (cpview -t). If the CUL message appears in the logs, that would mean some very short spike occurred, which might not be evident in the averaged graphs.
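One way to catch spikes that a one-minute average hides is to sample per-core load at a much shorter interval around the failover window. A throwaway loop like the following (plain bash with sar from sysstat; the file name and interval are arbitrary) is usually enough:

# log a timestamped 1-second per-CPU sample every couple of seconds,
# so a brief 90%+ spike on a single core stands out even if the
# averaged graphs stay low
while true; do
    date >> /var/log/cpu_spike.log
    sar -P ALL 1 1 >> /var/log/cpu_spike.log
    sleep 2
done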

 

|Id|Name |Status |Interfaces |Features |
+---------------------------------------------------------------------------------+
|0 |SND |enabled |eth1-01,eth1-02,Sync, |
| | | |Mgmt |Acceleration,Cryptography |
| | | | |Crypto: Tunnel,UDPEncap,MD5, |
| | | | |SHA1,NULL,3DES,DES,AES-128, |
| | | | |AES-256,ESP,LinkSelection, |
| | | | |DynamicVPN,NatTraversal, |
| | | | |AES-XCBC,SHA256,SHA384 |
+---------------------------------------------------------------------------------+

Accept Templates : disabled by Firewall
Layer x Security disables template offloads from rule #130
Throughput acceleration still enabled.
Drop Templates : disabled
NAT Templates : disabled by Firewall
Layer x Security disables template offloads from rule #130
Throughput acceleration still enabled.

##############################

Accelerated conns/Total conns : 23418/42430 (55%)
Accelerated pkts/Total pkts : 641164182222/702748486714 (91%)
F2Fed pkts/Total pkts : 61584304492/702748486714 (8%)
F2V pkts/Total pkts : 4463571955/702748486714 (0%)
CPASXL pkts/Total pkts : 36801796508/702748486714 (5%)
PSLXL pkts/Total pkts : 43062947615/702748486714 (6%)
CPAS pipeline pkts/Total pkts : 0/702748486714 (0%)
PSL pipeline pkts/Total pkts : 0/702748486714 (0%)
CPAS inline pkts/Total pkts : 0/702748486714 (0%)
PSL inline pkts/Total pkts : 0/702748486714 (0%)
QOS inbound pkts/Total pkts : 0/702748486714 (0%)
QOS outbound pkts/Total pkts : 0/702748486714 (0%)
Corrected pkts/Total pkts : 0/702748486714 (0%)

##############################

grep -c ^processor /proc/cpuinfo && /sbin/cpuinfo

8
HyperThreading=enabled

##############################

CPU 0: Mgmt
CPU 1: fw_5
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
CPU 2: fw_3
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
CPU 3: fw_1
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
CPU 4:
CPU 5: fw_4
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
CPU 6: fw_2
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
CPU 7: fw_0
pdpd pepd fwd mpdaemon lpd in.arlogind in.aclientd in.ahclientd rad vpnd rtmd cp_file_convertd usrchkd wsdnsd cprid cpd
All:
Interface eth1-01: has multi queue enabled
Interface eth1-02: has multi queue enabled
Interface Sync: has multi queue enabled

##############################

Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
Mgmt 1500 0 0 0 0 0 0 0 0 0 BMU
Sync 1500 0 1282870294 0 0 0 2718782002 0 0 0 BMRU
bond 1500 0 704374105866 17623074 646557 0 709489543709 0 0 0 BMmRU
bond.x 1500 0 327784405122 0 322164 0 171167668675 0 0 0 BMRU
bond.x 1500 0 161227537 0 0 0 150670944 0 0 0 BMRU
bond.x 1500 0 29198557875 0 1244207 0 108365541001 0 0 0 BMRU
bond.x 1500 0 44580064516 0 0 0 57263728573 0 0 0 BMRU
bond.x 1500 0 3019529408 0 0 0 19747900104 0 0 0 BMRU
bond.x 1500 0 77399 0 0 0 160886 0 0 0 BMRU
bond.x 1500 0 4327867368 0 1 0 3466616751 0 0 0 BMRU
bond.x 1500 0 6407217815 0 42885 0 17878179216 0 0 0 BMRU
bond.x 1500 0 56670995927 0 2309 0 80401647952 0 0 0 BMRU
bond.x 1500 0 70431277162 0 49414 0 103695321264 0 0 0 BMRU
bond.x 1500 0 44427442956 0 4386338 0 50054972297 0 0 0 BMRU
bond.x 1500 0 62956347929 0 569026 0 53686959981 0 0 0 BMRU
bond.x 1500 0 54300156209 0 0 0 43609550514 0 0 0 BMRU
eth1-01 1500 0 365871023747 7491325 322343 0 356896228444 0 0 0 BMsRU
eth1-02 1500 0 338503075104 10131749 324214 0 352593311393 0 0 0 BMsRU
lo 65536 0 979992247 0 0 0 979992247 0 0 0 LdNORU

interface eth1-01: There were no RX drops in the past 0.5 seconds
interface eth1-01 rx_missed_errors : 0
interface eth1-01 rx_fifo_errors : 0
interface eth1-01 rx_no_buffer_count: 0

interface eth1-02: There were no RX drops in the past 0.5 seconds
interface eth1-02 rx_missed_errors : 1853
interface eth1-02 rx_fifo_errors : 0
interface eth1-02 rx_no_buffer_count: 0

##############################

| Output: |
ID | Active | CPU | Connections | Peak
----------------------------------------------
0 | Yes | 7 | 8902 | 175010
1 | Yes | 3 | 7081 | 162162
2 | Yes | 6 | 6909 | 161393
3 | Yes | 2 | 7192 | 165640
4 | Yes | 5 | 7117 | 169984
5 | Yes | 1 | 7209 | 162071

##############################

Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
| 1| 0| 10| 90| 10| ?| 40852|
| 2| 8| 14| 77| 23| ?| 40852|
| 3| 9| 12| 79| 21| ?| 40852|
| 4| 8| 12| 80| 20| ?| 40852|
| 5| 0| 10| 90| 10| ?| 40852|
| 6| 6| 15| 79| 21| ?| 40852|
| 7| 4| 13| 82| 18| ?| 40853|
| 8| 5| 11| 84| 16| ?| 40853|
---------------------------------------------------------------------------------

 

Not sure what is so special about April but it looks like something is brewing.

 

Cluster failover history (last 20 failovers since reboot/reset on Tue Jan 5 17:07:01 2021):

No. Time: Transition: CPU: Reason:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1 Tue Apr 27 16:50:19 2021 Member 1 -> Member 2 00 Member state has been changed due to restart of the Cluster module
2 Thu Apr 22 14:52:24 2021 Member 1 -> Member 2 00 Member state has been changed due to restart of the Cluster module
3 Mon Apr 19 12:02:42 2021 Member 1 -> Member 2 00 Member state has been changed due to restart of the Cluster module
4 Fri Apr 16 12:05:07 2021 Member 1 -> Member 2 00 Member state has been changed due to restart of the Cluster module
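For anyone checking their own boxes, that history table is what cphaprob show_failover prints on R80.20 and later (to the best of my recollection), and the matching kernel events can be pulled from the message log:

cphaprob show_failover            # last failovers since boot, with transition and reason
grep 'CLUS-' /var/log/messages*   # the corresponding CLUS-1xxxxx / CLUS-2xxxxx kernel lines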

Kaspars_Zibarts
Employee

I don't want to draw any parallels, but our VSX started behaving really strangely on R80.40 T91. It was fine for 4 months, but then one bright day the clustering of one VS started playing up. Tried the usual suspects like a graceful reboot, but nothing helped. Case raised with TAC. The usual response: upgrade to the latest Jumbo before it can be forwarded to R&D.

And this is where it all went to hell: after upgrading the standby to T102 it never came up properly. A couple of CPUs were locking up and the cluster behaved really strangely, with some VSs becoming Active/Active.

OK, quick action: restore a snapshot. But the problem persisted, CPUs locking up / maxed out and the cluster unstable. Wow..

OK, next stop: a full rebuild using vsx_util reconfigure. Still the same result! High CPU and an unstable cluster.

Stopped the cluster member completely to get some breathing space. Need to think..

Not sure anymore whether it's VSX1 that's actually causing all this with some misconfiguration, or whether there are HW issues on VSX2.

A trip to the datacenter is booked for tomorrow so I can pull cables on VSX2 and watch the behaviour.

Something smells odd with clustering on R80.40 VSX. CCP encryption maybe? There have been more articles about VSX clustering lately, I believe.

Kaspars_Zibarts
Employee

Found this SK that has a lot of the symptoms we saw yesterday: sk169777. I already had my suspicions about CCP encryption before I saw the article.

Will be testing as soon as I get to the DC.
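If it does turn out to be CCP encryption, the status/toggle commands on R80.30+ are, as far as I remember, the ones below; treat the exact syntax as an assumption and confirm it against sk169777 before touching a production cluster:

cphaprob ccp_encrypt        # show whether CCP encryption is currently enabled
cphaconf ccp_encrypt off    # temporarily disable CCP encryption (run on both members)
cphaconf ccp_encrypt on     # re-enable once testing is finished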

abihsot__
Advisor

Wow, messy things... Please do let us know how it goes.

MartinTzvetanov
Advisor

Run cpwd_admin list on the problematic device.

Following this message: Member 2 00 Member state has been changed due to restart of the Cluster module

I suppose the cpha module will show more than one restart.
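A minimal check along those lines (the awk filter is only an illustration; it assumes the #START counter is the fourth column of the output, which may differ between versions):

cpwd_admin list                          # WatchDog process table (APP, PID, STAT, #START, ...)
cpwd_admin list | awk 'NR==1 || $4 > 1'  # keep the header plus any process restarted more than once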

abihsot__
Advisor

For each cluster service restart I have the following registered. I tried to search for the function name in the knowledge base, but of course nothing came up. Probably only R&D knows what it is.

 

# Overhead Command Shared Object Symbol
# ........ ........... ................. .........................................................
#
99.98% fw_worker_4 [kernel.kallsyms] [k] wsis_decode_getc.lto_priv.5943
0.02% fw_worker_4 [kernel.kallsyms] [k] ws_malformed_http_policy_element_decoding_test
0.00% fw_worker_4 [kernel.kallsyms] [k] vunmap_page_range
0.00% fw_worker_4 [kernel.kallsyms] [k] native_sched_clock
0.00% fw_worker_4 [kernel.kallsyms] [k] sched_clock
0.00% fw_worker_4 [kernel.kallsyms] [k] intel_bts_enable_local
0.00% fw_worker_4 [kernel.kallsyms] [k] perf_pmu_enable
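For completeness, a profile like the one above can be captured with the standard Linux perf tooling (which is clearly present on the box, given the report header); the CPU number and duration below are placeholders for the core hosting the busy fw_worker:

# sample the CPU running the busy worker for 10 seconds, with call graphs
perf record -C 5 -g -- sleep 10
perf report --stdio | head -40

# or watch it live
perf top -C 5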

 
