piou_piou
Explorer

CPU intensive connections "TCP:empowerid"

Hello everyone,

 

Due to the current situation, we all know lots of users are working remotely.

I am having a weird problem on my R80.30 cluster (5400 appliances):

The CPU usage is increasing dangerously during the daytime (it reached 100% today). I noticed via cpview that most of the CPU is consumed by a few connections:

[Image: Active FW - cpview CPU.jpg]

These connections always use port TCP/7080 and are displayed as "TCP:empowerid".

This is always from a 172.16.50.x host (our remote access VPN user pool) to 10.75.30.248, which is one of our Cisco Jabber servers. The users and IPs vary; it seems to happen randomly within the remote access VPN pool.

I managed to lower the CPU by manually disconnecting the users involved, as you can see on the following graph at 14h and 16h (CPU is in yellow):

[Image: Active FW - CPU last day.jpg]

Then, as soon as someone starts working again, the CPU climbs like crazy... and eventually reaches 100%.

Same problem, same workaround for now; here is the latest graph I have:

[Image: Active FW - CPU last hour.jpg]

I added a rule to drop the TCP/7080 service, and for now that's working properly. However, we may need to accept this service later in order to make Jabber calls work remotely (they don't work for now and I have no clue why, but that's another topic).

 

Here's my gateway's version:

[Image: cpinfo -y all.PNG]

 

Has anyone seen this before? 😞

 

Thanks for your help!

piou_piou

2 Replies
Timothy_Hall
Champion

Yes, what you are seeing is classic elephant flow behavior; Check Point calls these "heavy connections". Please see my CPX 2020 presentation titled "Big Game Hunting: Elephant Flows", based on a chapter of my book, which goes through the available identification tools and possible remediation options:

https://community.checkpoint.com/fyrhh23835/attachments/fyrhh23835/member-exclusives/432/3/CPX_Big_G...

Edit: It is also possible these particular connections have to be processed F2F (slowpath) for some reason. To see which path these connections are in, run fwaccel conns and search for the connection and port attributes.
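A rough sketch of what that check might look like from the gateway CLI, using the server IP and port reported in this thread (command output and availability can vary by version, so treat this as a starting point rather than a recipe):

```shell
# Sketch only: run on the gateway in expert mode. The IP and port values
# below are the ones reported in this thread; adjust for your own traffic.

# Dump SecureXL's connection table and filter for the Jabber server and
# TCP/7080; the flags column indicates whether each connection is
# accelerated or handled F2F (slowpath).
fwaccel conns | grep "10.75.30.248" | grep "7080"

# On recent R80.x builds, ask CoreXL to report the heavy (elephant)
# connections it has detected.
fw ctl multik print_heavy_conns
```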

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
piou_piou
Explorer

Hello Timothy,

 

Thanks a lot, very useful resource!

I'll try some of these remediation options in the coming days.

 

piou_piou

