Hi Check Point CheckMates Community,
We will see how to enable CoreXL within a VSX Cluster environment. CoreXL is a performance-enhancing technology for Security Gateways on multi-core processing platforms: multiple Check Point firewall instances run in parallel across multiple CPU cores. It is important to emphasize that this activity must be performed during a maintenance window, because brief interruptions may occur while the CPU assignment changes. Some highly sensitive operations cannot afford this type of impact. I have encountered customers who do not raise concerns, but in other scenarios involving critical banking services, the strategy and logistics of the activity must be carefully coordinated within a timeframe that lets the business tolerate the slight connectivity intermittency shown in the video.
We're increasing this VS instance to 3 CPUs to improve performance in this external layer.
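For reference, a minimal way to check the current CoreXL instance layout of a given VS from expert mode looks like the following (VSID 3 is just an illustrative example; run it before and after the change to confirm the new instance count):

```
vsenv 3              # switch the shell context to the Virtual System with ID 3
fw ctl multik stat   # list this VS's CoreXL instances and the CPUs they run on
```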
Any additional input is welcome, dear community.
Regards,
Hi,
This is indeed standard behavior, and changing the number of firewall instances will cause some downtime.
Also monitor memory consumption; with today's appliances that should not be a problem unless you have a lot of Virtual Systems configured.
Regards,
Martijn
Thanks for your feedback.
A couple of notes about CoreXL:
Since by default only 1 IPv4 and 1 IPv6 CoreXL instance is assigned per VS, I don't really know whether CoreXL Dynamic Balancing (formerly Dynamic Split) makes sense in VSX.
Dynamic Balancing does make sense, because it operates at the base gateway level, not per VS. The SND cores (shared by all VSs) and the pool of CoreXL cores that each VS's FWK threads may use are dynamically balanced, so that the (statically configured) per-VS CoreXL configuration is effectively balanced across the system.
Each new instance of a VS consumes memory, and more blades = more memory required (per instance); check CPView to monitor them.
CoreXL allows for more specific tuning too.
Check out these resources:
https://support.checkpoint.com/results/sk/sk98348
https://support.checkpoint.com/results/sk/sk39555
https://www.youtube.com/watch?v=POcSk6B4gCE
Hi Don_Paterson:
Is there any method to determine how much memory is consumed by a single CoreXL instance?
Recently, our customer has reported that the firewall memory utilization remains consistently high (above 80%). They are requesting assistance in reducing the high memory condition.
We would like to understand whether there is a way to measure or estimate the memory usage per instance, so we can better analyze and optimize the current resource allocation.
How is your customer measuring the memory usage? Please review this SK before diving too deep into the system: https://support.checkpoint.com/results/sk/sk32206
I think the SK may be outdated and needs updating (I just left them some feedback on it).
Since Linux 3.14 (2014), the kernel has provided MemAvailable.
This is the correct modern metric.
The SK does not mention MemAvailable.
[Expert@mgmt:0]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.5G        3.7G        281M        465M        3.5G        2.7G
Swap:           15G         10M         15G
Also:
grep MemAvailable /proc/meminfo
and
vmstat > free
Free Real Memory = MemFree + Buffers + Cached
For kernel versions 2.6 and 3.10, that formula is correct.
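To see the difference concretely, the legacy formula and the kernel's MemAvailable counter can be compared side by side on any Linux box (including Gaia's expert mode), reading straight from /proc/meminfo (values in kB):

```shell
# Compare the legacy "free real memory" estimate with the kernel's
# MemAvailable counter; both are read from /proc/meminfo in kB.
awk '
  /^MemFree:/      { memfree = $2 }
  /^Buffers:/      { buffers = $2 }
  /^Cached:/       { cached  = $2 }
  /^MemAvailable:/ { avail   = $2 }
  END {
    printf "Legacy free (MemFree+Buffers+Cached): %d kB\n", memfree + buffers + cached
    printf "Kernel MemAvailable:                  %d kB\n", avail
  }
' /proc/meminfo
```

On modern kernels the two numbers usually differ, because MemAvailable accounts for page cache that cannot actually be reclaimed (e.g. dirty pages, shared memory) while the legacy formula counts all of it.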
In cpview > Overview > Memory, Physical Free MB shows the same value as the "available" column of free -h.
cpview is using the correct modern metric.
Check Point has used three different RHEL kernel versions across its last three major releases (RHEL 7 through 9, between R81.20 (kernel 3.10) and R82.10). This is a change in pattern for Check Point.
RHEL kernel behaviours and memory handling have changed over the versions.
There are rabbit holes to go down there.
Here is a thread discussing memory monitoring. Started in 2017 but scroll down to the 2026 replies.
Tim covers CPU and Memory analysis in his Gaia 4.18 course.
What version/s are they using?
Is that a standard Security Gateway or VSX gateway? This thread has been about VSX so far.
@emmap is on point to ask about how they are analysing memory usage/consumption.
See my reply for more details.
You can start with cpview (for SG or VSX gw)
For VSX:
I think of it as context-aware: when you are in a VS context in the CLI (for example, after vsenv 3) and then run cpview, you will see the cpview output for that VS (VSID 3).
CPView SK for details:
https://support.checkpoint.com/results/sk/sk101878
In Check Point you can do deep analysis (one of the benefits of expert mode and of Check Point in general), but you should avoid it if you can; it is mostly about understanding the analysis commands and being able to interpret their output.
Blades consume memory, and connections consume memory (more per connection, depending on the blades used).
The command fw ctl pstat can be good for deeper analysis.
https://support.checkpoint.com/results/sk/sk98348#Initial%20diagnostics%20-%20Memory
I would also think about the hcp -r all command to get a good overall view.
It is a VSX gateway, with 8 Virtual Systems (VSs) enabled.
They are using the asg perf command to monitor memory utilization, which typically ranges between 75% and 85%.
I also checked the HCP report, and there were no alerts related to memory usage.
However, when checking the output of free -k -t -h, it shows that the free memory is only around 1.3 GB.
How should we interpret these values, and how can we explain that this behavior is considered normal?
The first thing that I looked at was available.
It looks like the system is behaving normally and is healthy.
That is good news, because this is a busy gateway solution with a lot of traffic and connections, and some busy Virtual Systems (some more than others).
The “low free memory” seen in free -k -t -h is expected on Linux and does not mean the gateway is running out of usable memory.
Linux intentionally uses RAM for caching
Linux tries to avoid leaving RAM unused. Any memory not used by applications is used for:
- Filesystem cache (page cache)
- Buffers
- Kernel slab allocations
This improves performance because disk reads can be served from RAM.
From the free output:
| Field | Value |
|---|---|
| Total RAM | 61 GB |
| Used | 44 GB |
| Free | 1.3 GB |
| Buff/Cache | 16 GB |
| Available | 14 GB |
The key metric is Available memory, not Free memory.
Available = memory that can be immediately reclaimed if needed.
The gateway actually has around 14 GB available, not 1.3 GB.
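As a quick sanity check on any Linux box, the same "free vs available" comparison can be computed as a share of total RAM directly from /proc/meminfo, which makes it obvious that a low MemFree alone is not a sign of memory pressure:

```shell
# Report "free" and "available" as a percentage of total RAM,
# using the counters the kernel exposes in /proc/meminfo (kB).
awk '
  /^MemTotal:/     { total = $2 }
  /^MemFree:/      { free  = $2 }
  /^MemAvailable:/ { avail = $2 }
  END {
    printf "Free:      %.1f%% of RAM\n", 100 * free  / total
    printf "Available: %.1f%% of RAM\n", 100 * avail / total
  }
' /proc/meminfo
```

On a healthy but busy system the "Available" figure is typically much larger than "Free", exactly as seen on this gateway (14 GB available vs 1.3 GB free).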
Linux will release cache automatically when applications need RAM.
VSX can use a lot of memory because of the virtualisation and multiple CoreXL instances.
With 8 Virtual Systems on VSX, each VS has its own fwk processes and CoreXL instances, each consuming memory.
The report shows the top memory consumers:
| Process | RAM | Notes |
|---|---|---|
| fwk | 16.6 GB | |
| cpview_api_service | 4.8 GB | |
| pdpd | 4.4 GB | Identity Awareness |
| wstlsd | 3.7 GB | HTTPS Inspection (SSL Inspection) |
| pep | 3.2 GB | Identity Awareness |
This looks likely to be expected for a busy VSX system handling:
- Throughput: 3.3 Gbps
- Connections: 1.2M
- Packet rate: 637K pps
If you have any concerns then I would recommend opening a ticket with TAC and ask for their advice.
They can confirm that the health status is good and perhaps advise on further tuning steps or other actions.
You can also ask them about future growth and capacity handling.
I would also recommend that you talk to your local office Presales/Security Engineer about any concerns.
Regards,
Don
Thank you for your response.
I have already submitted a case to TAC regarding this issue, and the feedback from Support also indicates that the system is operating normally and in a healthy state.
Hopefully, this explanation will be sufficient for the customer to accept the current situation.
You are welcome.
I hope it goes well with the customer.