Danny
Champion

Performance issue: High pdpd load after R80.20 upgrade - identity agents can't connect

After migrating an HP ProLiant DL380 G7 HA cluster from R77.30 to R80.20 today, I'm experiencing extremely high CPU usage caused by the pdpd daemon, which leaves all identity agents unable to connect and authenticate end users. In the evening hours, when users are at home, everything returns to normal. Has anyone experienced this as well? Besides replacing the gateway with a better-sized one, is there anything we could tune? The onboard NICs are in use although the HCL recommends avoiding them (ouch). pdpd is already set to use CPU 8.
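For reference, this is roughly how we confirm where pdpd is actually running and how hot that core is; the ps options are standard Linux, and pidof is just one convenient way to get the pid:

# ps -o pid,psr,pcpu,pmem,comm -p $(pidof pdpd)   # psr = core the process last ran on
# fw ctl affinity -l | grep pdpd                  # configured daemon affinity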

 

  System     Firewall Cluster Node (HA)
  Type       ProLiant DL380 G7
  OS         Gaia R80.20 JHF (Take 74) @ 64-bit
  CPUSE      Build 1676
  CPU        12 Cores  8 licensed | SMT: - | Load 7.23%
  RAM        14 GB (Free: 0 GB) |  Swapping 176 KB
  SecureXL   On | Multi-Queue Interfaces -
  CoreXL     On (11 Cores) | Dynamic Dispatcher: On

 

@Timothy_Hall, these are the results from your Super7:

 

[Executing:]# fwaccel stat
+---------------------------------------------------------------------------------+
|Id|Name     |Status     |Interfaces               |Features                      |
+---------------------------------------------------------------------------------+
|0 |SND      |enabled    |eth8,eth9,eth10,eth11,   |
|  |         |           |eth4,eth5,eth6,eth7,eth0,|
|  |         |           |eth1,eth2,eth3           |Acceleration,Cryptography     |
|  |         |           |                         |Crypto: Tunnel,UDPEncap,MD5,  |
|  |         |           |                         |SHA1,NULL,3DES,DES,AES-128,   |
|  |         |           |                         |AES-256,ESP,LinkSelection,    |
|  |         |           |                         |DynamicVPN,NatTraversal,      |
|  |         |           |                         |AES-XCBC,SHA256               |
+---------------------------------------------------------------------------------+

Accept Templates : disabled by Firewall
                   Layer FWEXT Security disables template offloads from rule #230
                   Throughput acceleration still enabled.
Drop Templates   : enabled
NAT Templates    : disabled by Firewall
                   Layer FWEXT Security disables template offloads from rule #230
                   Throughput acceleration still enabled.
[Executing:]# fwaccel stats -s
Accelerated conns/Total conns : 816/44272 (1%)
Accelerated pkts/Total pkts   : 5463775040/5959914034 (91%)
F2Fed pkts/Total pkts         : 496138994/5959914034 (8%)
F2V pkts/Total pkts           : 20585639/5959914034 (0%)
CPASXL pkts/Total pkts        : 498278614/5959914034 (8%)
PSLXL pkts/Total pkts         : 2212031456/5959914034 (37%)
CPAS inline pkts/Total pkts   : 0/5959914034 (0%)
PSL inline pkts/Total pkts    : 0/5959914034 (0%)
QOS inbound pkts/Total pkts   : 0/5959914034 (0%)
QOS outbound pkts/Total pkts  : 0/5959914034 (0%)
Corrected pkts/Total pkts     : 0/5959914034 (0%)
[Executing:]# grep -c  ^processor  /proc/cpuinfo && /sbin/cpuinfo
12
HyperThreading=disabled
[Executing:]# fw ctl affinity -l -r | more
CPU 0:  eth8 eth9 eth10 eth11 eth4 eth5 eth6 eth7 eth0 eth1 eth2 eth3
CPU 1:  fw_5
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 2:  fw_8
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 3:  fw_2
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 4:  fw_9
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 5:  fw_3
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 6:  fw_6
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 7:  fw_0
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 8:
CPU 9:  fw_4
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 10: fw_7
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 11: fw_1
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
All:
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7 only.
[Executing:]# netstat -ni | more
Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0   462740      0      0      0 20035876      0      0      0 BMRU
eth1       1500   0        0      0      0      0        0      0      0      0 BMU
eth2       1500   0    14380      0      0      0       66      0      0      0 BMRU
eth3       1500   0        0      0      0      0        0      0      0      0 BMU
eth4       1500   0 703032870      0      0      0 717938649      0      0      0 BMRU
eth4.604   1500   0  5648687      0      0      0 15032263      0      0      0 BMRU
eth4.614   1500   0  2192997      0      0      0  4829218      0      0      0 BMRU
eth4.624   1500   0 456325848      0      0      0 518681961      0      0      0 BMRU
eth4.634   1500   0 230299374      0      0      0 181932000      0      0      0 BMRU
eth4.670   1500   0    33711      0      0      0    14341      0      0      0 BMRU
eth4.742   1500   0  8437521      0      0      0  3943037      0      0      0 BMRU
eth4.770   1500   0    90401      0      0      0   386716      0      0      0 BMRU
eth5       1500   0 238714661      0      0      0 257576241      0      0      0 BMRU
eth5.602   1500   0 58496455      0      0      0 54996071      0      0      0 BMRU
eth5.605   1500   0 180064740      0      0      0 202893390      0      0      0 BMRU
eth5.615   1500   0   149135      0      0      0   443051      0      0      0 BMRU
eth6       1500   0 1084032057      0    321      0 1031148166      0      0      0 BMRU
eth6.603   1500   0 28780589      0      0      0 29674771      0      0      0 BMRU
eth6.606   1500   0 200973355      0      0      0 203472426      0      0      0 BMRU
eth6.616   1500   0       60      0      0      0     1375      0      0      0 BMRU
eth6.623   1500   0 685674334      0      0      0 679943082      0      0      0 BMRU
eth6.626   1500   0    48853      0      0      0    55223      0      0      0 BMRU
eth6.633   1500   0 89167501      0      0      0 66527473      0      0      0 BMRU
eth6.724   1500   0 79383049      0      0      0 55542371      0      0      0 BMRU
eth7       1500   0 1510933184      0   4460      0 1715055862      0      0      0 BMRU
eth8       1500   0 410325078      0   2132      0 14642643      0      0      0 BMRU
eth8.608   1500   0 395668331      0      0      0   466538      0      0      0 BMRU
eth8.800   1500   0 14652423      0      0      0 14176945      0      0      0 BMRU
eth9       1500   0  4418240      0      0      0 43687204      0      0      0 BMRU
eth10      1500   0 1050639628      0      0      0 934246991      0      0      0 BMRU
eth10.601  1500   0 530894165      0      0      0 547398536      0      0      0 BMRU
eth10.611  1500   0   209048      0      0      0   154341      0      0      0 BMRU
eth10.621  1500   0 456124650      0      0      0 360206871      0      0      0 BMRU
eth10.631  1500   0 63407433      0      0      0 29237069      0      0      0 BMRU
eth11      1500   0 987797444      0    182      0 1456539685      0      0      0 BMRU
eth11.600  1500   0 987793112      0      0      0 1468969765      0      0      0 BMRU
lo        16436   0 54653517      0      0      0 54653517      0      0      0 LRU
[Executing:]# fw ctl multik stat
ID | Active  | CPU    | Connections | Peak
----------------------------------------------
 0 | Yes     | 7      |        5557 |    15247
 1 | Yes     | 11     |        5542 |     8577
 2 | Yes     | 3      |        5728 |     8341
 3 | Yes     | 5      |        5620 |     8465
 4 | Yes     | 9      |        5850 |     8675
 5 | Yes     | 1      |        5550 |     8470
 6 | Yes     | 6      |        5612 |     8364
 7 | Yes     | 10     |        5796 |     8525
 8 | Yes     | 2      |        5621 |     8392
 9 | Yes     | 4      |        5739 |     8788
[Executing:]# cpstat os -f multi_cpu



Processors load
---------------------------------------------------------------------------------
|CPU#|User Time(%)|System Time(%)|Idle Time(%)|Usage(%)|Run queue|Interrupts/sec|
---------------------------------------------------------------------------------
|   1|           0|            76|          24|      76|        ?|          4922|
|   2|           8|            32|          60|      40|        ?|          4922|
|   3|          11|            29|          60|      40|        ?|          4923|
|   4|           9|            31|          60|      40|        ?|          4923|
|   5|          12|            31|          57|      43|        ?|          4923|
|   6|           9|            32|          59|      41|        ?|          4924|
|   7|          13|            26|          62|      38|        ?|          4924|
|   8|           7|            31|          62|      38|        ?|          4924|
|   9|           0|             2|          98|       2|        ?|          4925|
|  10|           9|            26|          65|      35|        ?|          4925|
|  11|          12|            26|          62|      38|        ?|          4926|
|  12|           7|            29|          63|      37|        ?|          4926|
---------------------------------------------------------------------------------
[Executing:]# fw ctl affinity -l -a
eth8: CPU 0
eth9: CPU 0
eth10: CPU 0
eth11: CPU 0
eth4: CPU 0
eth5: CPU 0
eth6: CPU 0
eth7: CPU 0
eth0: CPU 0
eth1: CPU 0
eth2: CPU 0
eth3: CPU 0
fw_0: CPU 7
fw_1: CPU 11
fw_2: CPU 3
fw_3: CPU 5
fw_4: CPU 9
fw_5: CPU 1
fw_6: CPU 6
fw_7: CPU 10
fw_8: CPU 2
fw_9: CPU 4
in.geod: CPU 1 2 3 4 5 6 7 9 10 11
usrchkd: CPU 1 2 3 4 5 6 7 9 10 11
pepd: CPU 1 2 3 4 5 6 7 9 10 11
scanengine_s: CPU 1 2 3 4 5 6 7 9 10 11
vpnd: CPU 1 2 3 4 5 6 7 9 10 11
mpdaemon: CPU 1 2 3 4 5 6 7 9 10 11
pdpd: CPU 8
in.acapd: CPU 1 2 3 4 5 6 7 9 10 11
in.emaild.smtp: CPU 1 2 3 4 5 6 7 9 10 11
lpd: CPU 1 2 3 4 5 6 7 9 10 11
in.asessiond: CPU 1 2 3 4 5 6 7 9 10 11
rtmd: CPU 1 2 3 4 5 6 7 9 10 11
in.msd: CPU 1 2 3 4 5 6 7 9 10 11
fwd: CPU 1 2 3 4 5 6
rad: CPU 1 2 3 4 5 6 7 9 10 11
cpd: CPU 1 2 3 4 5 6 7 9 10 11
cprid: CPU 1 2 3 4 5 6 7 9 10 11
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7 only.

 

Thanks in advance for any comments and suggestions.

10 Replies
Timothy_Hall
Legend

System looks like it has plenty of resources; I assume CPU 8 is running close to 100%?  Pretty sure pdpd is a mostly single-threaded process, but it might be worth affining it to another CPU in addition to 8 (add CPU 9) to see if that helps.  Please provide the output of ps -efwww | grep pdpd so we can see the memory size of the pdpd process.  Is anything interesting getting dumped into $FWDIR/log/pdpd.elg?  Are you seeing a lot of wio% in top?

RAM        14 GB (Free: 0 GB) |  

Huh? Please also provide output of free -m.
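Something along these lines, run while the box is busy, would cover everything I'm asking about; the paths assume a default Gaia install:

# ps -efwww | grep pdpd            # process size and arguments
# free -m                          # overall memory picture
# top -b -n 1 | head -20           # quick snapshot of load and I/O wait (wa)
# tail -100 $FWDIR/log/pdpd.elg    # most recent pdpd log entries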

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Danny
Champion

Tim, thanks for your reply.

We used SAM rules to block all IA traffic to the gateway for the moment (a sketch of the kind of rule is below the outputs). So currently the CPUs look like this:

 

[Executing:]# top
top - 16:10:03 up 1 day, 52 min,  3 users,  load average: 4.40, 4.74, 5.28
Tasks: 298 total,   6 running, 292 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni, 50.0%id,  0.0%wa,  0.0%hi, 50.0%si,  0.0%st
Cpu1  : 33.3%us,  0.0%sy,  0.0%ni, 33.3%id,  0.0%wa,  0.0%hi, 33.3%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us, 50.0%sy,  0.0%ni, 50.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  :  0.0%us,100.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu10 : 50.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi, 50.0%si,  0.0%st
Cpu11 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  14227232k total, 12998860k used,  1228372k free,   499624k buffers
Swap:  8385920k total,      176k used,  8385744k free,  1669980k cached
[Executing:]# ps -efwww | grep pdpd
admin    13252 11854  0 16:08 pts/4    00:00:00 grep pdpd
admin    21669  7167 12 13:11 ?        00:21:49 pdpd 0 -t
[Executing:]# free -m
             total       used       free     shared    buffers     cached
Mem:         13893      13154        738          0        487       2105
-/+ buffers/cache:      10561       3332
Swap:         8189          0       8189
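For reference, the SAM block mentioned above was of this general form. The addresses are placeholders, and the command assumes the classic fw sam "srv" match with agents talking to the gateway on TCP 443; the real rules covered our internal client ranges and the gateway's IA interface:

# fw sam -t 7200 -I srv 192.0.2.10 198.51.100.1 443 6   # drop and close matching connections for 2 hours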

 

We followed your suggestion, changed the CPU affinity and added one more core to pdpd.
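For the record, the change was made roughly like this, assuming the R80.x per-daemon affinity syntax:

# fw ctl affinity -s -d -pname pdpd -cpu 8 9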

 

[Executing]# fw ctl affinity -l
eth8: CPU 0
eth9: CPU 0
eth10: CPU 0
eth11: CPU 0
eth4: CPU 0
eth5: CPU 0
eth6: CPU 0
eth7: CPU 0
eth0: CPU 0
eth1: CPU 0
eth2: CPU 0
eth3: CPU 0
fw_0: CPU 7
fw_1: CPU 11
fw_2: CPU 3
fw_3: CPU 5
fw_4: CPU 9
fw_5: CPU 1
fw_6: CPU 6
fw_7: CPU 10
fw_8: CPU 2
fw_9: CPU 4
in.geod: CPU 1 2 3 4 5 6 7 9 10 11
usrchkd: CPU 1 2 3 4 5 6 7 9 10 11
pepd: CPU 1 2 3 4 5 6 7 9 10 11
scanengine_s: CPU 1 2 3 4 5 6 7 9 10 11
vpnd: CPU 1 2 3 4 5 6 7 9 10 11
mpdaemon: CPU 1 2 3 4 5 6 7 9 10 11
pdpd: CPU 8 9
in.acapd: CPU 1 2 3 4 5 6 7 9 10 11
in.emaild.smtp: CPU 1 2 3 4 5 6 7 9 10 11
lpd: CPU 1 2 3 4 5 6 7 9 10 11
in.asessiond: CPU 1 2 3 4 5 6 7 9 10 11
rtmd: CPU 1 2 3 4 5 6 7 9 10 11
in.msd: CPU 1 2 3 4 5 6 7 9 10 11
fwd: CPU 1 2 3 4 5 6
rad: CPU 1 2 3 4 5 6 7 9 10 11
cpd: CPU 1 2 3 4 5 6 7 9 10 11
cprid: CPU 1 2 3 4 5 6 7 9 10 11
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7 only.

 

The license only permits 8 CPUs, but currently all CPUs are in use because the trial license is still active, as the gateway hasn't been restarted since the license was attached.
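(cplic print lists the attached licenses, so the still-active evaluation license can be checked there, e.g.:)

# cplic print   # lists attached licenses; an active evaluation license should appear here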

 

[Executing:]# fw ctl affinity -l -r
CPU 0:  eth8 eth9 eth10 eth11 eth4 eth5 eth6 eth7 eth0 eth1 eth2 eth3
CPU 1:  fw_5
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 2:  fw_8
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 3:  fw_2
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 4:  fw_9
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 5:  fw_3
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 6:  fw_6
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd fwd rad cpd cprid
CPU 7:  fw_0
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 8:  pdpd
CPU 9:  fw_4
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon pdpd in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 10: fw_7
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
CPU 11: fw_1
        in.geod usrchkd pepd scanengine_s vpnd mpdaemon in.acapd in.emaild.smtp lpd in.asessiond rtmd in.msd rad cpd cprid
All:
The current license permits the use of CPUs 0, 1, 2, 3, 4, 5, 6, 7 only.

 

Seems we need to change the affinities further, as CPU 9 also serves a lot of other services. Please advise.

Timothy_Hall
Legend

pdpd doesn't seem to be consuming excessive amounts of memory.  Your top output shows 100% sy utilization for CPU 8, which looks kind of strange; I would think it would almost all be in "us" space if pdpd was properly affined to CPU 8.

Do you have the "Assume that only one user is connected per computer" check box set in your Identity Awareness settings?

Are your Access Roles typically specifying Networks as well as Users/Groups?  If so, are there a lot of network exclusions (group with exclusion) referenced in those Access Roles?

Do the problems seem to be worse right after a policy install when pdp is updating all its group associations?

Anything interesting being dumped into $FWDIR/log/pdpd.elg?
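For example, a quick way to scan the rotated pdpd logs for anything other than OKs:

# grep -iE "error|fail|warn" $FWDIR/log/pdpd.elg* | tail -50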

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Danny
Champion

I've also checked sk86560 - High CPU utilization by PDPD daemon, but it doesn't apply, as there is no AD Query used for IA. This IA setup only uses Identity Agents, Terminal Servers and Captive Portal.

$FWDIR/log/pdpd.elg is full of OKs and shows the overwhelming amount of IA agent queries; the rotated logs cover a backlog of only about 8 hours:

[Executing]# ls -la pdpd*
-rw-r--r-- 1 admin root  10285137 May 10 04:45 pdpd.elg
-rw-r--r-- 1 admin root  10485786 May 10 03:44 pdpd.elg.0
-rw-r--r-- 1 admin root  10485878 May 10 02:43 pdpd.elg.1
-rw-r--r-- 1 admin root  10485766 May 10 01:42 pdpd.elg.2
-rw-r--r-- 1 admin root  10485891 May 10 00:45 pdpd.elg.3
-rw-r--r-- 1 admin root  10485810 May  9 23:53 pdpd.elg.4
-rw-r--r-- 1 admin root  10485859 May  9 22:57 pdpd.elg.5
-rw-r--r-- 1 admin root  10485788 May  9 21:57 pdpd.elg.6
-rw-r--r-- 1 admin root  10486192 May  9 20:58 pdpd.elg.7
-rw-r--r-- 1 admin root  10485824 May  9 20:02 pdpd.elg.8
-rw-rw---- 1 admin users        0 May  8 16:58 pdpd_cli.elg

Thanks for pointing me to sk122352 - High CPU usage after policy installation when pdpd is running, regarding groups with exclusions. I've checked this, and there are no groups with exclusion configured at all in the security policy. There are just 36 access roles, typically specifying Networks as well as Users/Groups. The CPU load issue appears even without a recent policy install, as soon as we delete the SAM rules that currently keep the IA users from connecting and save the firewall from overload.

So it looks as if all we can do is tune the CPU core affinities. You said it looks kind of strange. What could we do?

Sven_Glock
Advisor

Hi Danny,

How many users are connected to your PDP?

If you have a huge number of users, identity agents can cause a kind of DDoS attack once the PDP is unavailable for a while.
Even when it is back, the PDP can be overloaded by all the agents trying to reconnect/reauthenticate.

There are options in the Global Properties you can tune to avoid this behavior; as far as I remember, it's about reauthentication.

But I can't tell you the exact name of the parameter off the top of my head.
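To get a feeling for how many agents are hitting the PDP at once, something like this could help (assuming the agents use the default TCP 443 towards the gateway):

# netstat -ant | grep ":443 " | grep -c ESTABLISHED   # rough count of concurrent agent sessions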

 

Regards

Sven

 

Timothy_Hall
Legend

It was strange in that pdpd was affined to Core 8, yet in your top output CPU 8 showed 100% execution in sy space; I would expect pdpd to be running mostly in us space.  For giggles you might want to apply a demo license and reboot, just wondering if the licensing limit vs. the number of physical cores is having some strange effect here.  Technically Core 8, where you have affined pdpd, is not licensed (0-7 are); not sure how that would have an effect, but it is probably worth a shot.

Interestingly it looks like there were several enhancements to how the Identity Agents you're using are handled in R80.20, from the release notes:

  • Improved SSO Transparent Kerberos Authentication for Identity Agent, LDAP groups are extracted from the Kerberos ticket
  • Identity Agent automatic reconnection to prioritized PDP gateways

No real recommendations here, just giving you an area to perhaps focus on.  The Identity Agents seem like the most logical place to look; I can't see how the Captive Portal or Terminal Server agents would spike pdpd like that.

I assume you have checked for pdpd core dumps and that the daemon is not flat-out crashing?
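For example (the dump directory is the Gaia default; cpwd_admin shows whether the watchdog had to restart the daemon):

# ls -l /var/log/dump/usermode/    # user-mode core dumps, if any
# cpwd_admin list | grep -i pdp    # the #START column reveals restarts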

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Alex_Lam1
Contributor

Hello

 

What was the fix?

We have a similar issue.

We are on Take 47. When we patch to Take 87, everything slows down.

 

Regards

Alex

CheckPointerXL
Advisor

Hello Timothy,

Is it correct that pdpd can be multi-threaded starting from R81.20? I remember hearing it in some webinar, but I cannot find any documentation.

Thank you

PhoneBoy
Admin

That is correct, yes.
It is listed as a new feature on the R81.20 release page, actually: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...
Specifically: "Improved resiliency, scalability, and stability for PDPs and Identity Broker. Additional threads handle authentication and authorization flows."

Wolfgang
Authority

Danny,

We had a similar problem at the end of 2018 when upgrading from R77.30 to R80.20.

Our problem was not pdpd; we saw this very high CPU with vpnd and emaild.smtp.

We too had a mismatch between the licensed core count and the existing cores. With the upgrade to R80.20 we also upgraded all firmware in the open server; after that, all cores were visible to the OS again, whereas under R77.30 they had been configured down to the licensed count.

After configuring the cores back to the licensed count, everything was fine. We had a TAC case open but with no result, and TAC was not really sure that this license difference couldn't be problematic.

I don't know if this helps, because you are using a trial license and that should allow using all cores, but maybe you could have a look at this.

Regardless of your problem, it looks like having more cores than licensed sometimes results in mysterious behaviour.

Wolfgang

