I was wondering if anyone has actually deployed R81 in a VSX setup yet, and whether there are any reported issues?
I'm looking to upgrade my R80.20 VSX setup to R80.40 or R81. I'd like to move to R81, but I think it may be a little too early for that.
Also, I know the recommendation is to rebuild, but in the current climate a remote upgrade is the preferred method, so I will likely do an in-place upgrade. From what I can tell, the kernel version gets upgraded, Multi-Queue is turned on, and other parameters are enabled by default, such as CoreXL Dynamic Balancing (sk168513).
Clearly the new filesystem would not get used, but I don't see this being hugely important on the gateway side (happy to be educated on this if I'm wrong).
If we have Dynamic Balancing enabled, how does this affect monitoring fwk process load on a per-VS basis? At the moment we use SNMP to determine the number of fwk processes assigned to each VS; with Dynamic Balancing turned on, I can see this ability to historically track utilisation at the VS level breaking.
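For illustration, our poller is along these lines. This is a minimal sketch assuming pysnmp; the OID below is only the Check Point enterprise root used as a placeholder, so substitute the real per-VS fwk/CPU OIDs from the Check Point MIB for your version:

```python
# Minimal per-VS SNMP poller sketch (pysnmp 4.x hlapi).
# CAUTION: CHECKPOINT_ROOT is just the Check Point enterprise OID,
# used here as a placeholder subtree -- look up the actual per-VS
# counters in the Check Point MIB shipped with your release.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, nextCmd,
)

GATEWAY = "192.0.2.1"                  # VSX member to poll (example address)
CHECKPOINT_ROOT = "1.3.6.1.4.1.2620"   # Check Point enterprise OID (placeholder)

def walk(host, oid, community="public"):
    """SNMP-walk a subtree and yield (oid, value) pairs."""
    for err_ind, err_stat, _, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(oid)),
            lexicographicMode=False):
        if err_ind or err_stat:
            break
        yield from var_binds

# Dump everything under the subtree; in practice we filter the walk
# down to the per-VS fwk instance count and graph it over time.
for name, value in walk(GATEWAY, CHECKPOINT_ROOT):
    print(name.prettyPrint(), "=", value.prettyPrint())
```

With Dynamic Balancing in play, I suspect the interesting series becomes per-VS CPU utilisation rather than the raw fwk count.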
I'm not sure if I understood correctly, but Dynamic Balancing doesn't change the number of fwk processes/threads; it simply changes the set of cores they are affined to.
OK, this may sound silly, but it would be good to monitor this in action: add load and watch the processes.
So what you're saying is that I could potentially have a single core assigned to a VS (for example), and if the load increases, Dynamic Balancing may add additional cores to that fwk process?
Also, with Threat Prevention blades such as IPS, where the bypass threshold is 70% (default), I assume the decision to add/remove cores is aligned with that, i.e. the CPU utilisation of a VS/fwk process should not go over 70%, to ensure blades are not bypassed under load.
Let's take an 8-core (no HTT) machine as an example:
- cores 0-1 for SNDs
- cores 2-7 for FWKs (of all VSs)
If the SNDs are working harder, the split will change to:
- cores 0-2 for SNDs
- cores 3-7 for FWKs (of all VSs)
That, of course, will balance the load, reducing the SNDs' utilisation and increasing the FWKs'.
Dynamic Balancing ensures the FWKs' load will not pass the 50% threshold, provided the SNDs are not more than twice as loaded; that threshold is, of course, configurable.
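To make that concrete, here's a toy model of the core-split decision. This is my own sketch, not Check Point's actual algorithm; the 2x ratio and the 8-core layout are taken from the example above:

```python
# Toy model of the Dynamic Balancing split decision (illustration only;
# the real heuristics run inside the gateway and may differ).

def rebalance(snd_cores, fwk_cores, snd_load, fwk_load, ratio=2.0):
    """Move one core between the SND and FWK pools.

    snd_load / fwk_load: average CPU utilisation (0-100) of each pool.
    ratio: how much harder one pool must work before a core moves.
    """
    if snd_load > ratio * fwk_load and fwk_cores > 1:
        return snd_cores + 1, fwk_cores - 1   # SNDs busier: take a core from FWKs
    if fwk_load > ratio * snd_load and snd_cores > 1:
        return snd_cores - 1, fwk_cores + 1   # FWKs busier: take a core from SNDs
    return snd_cores, fwk_cores               # within threshold, keep the split

# 8-core box from the example: cores 0-1 SND, cores 2-7 FWK.
snd, fwk = rebalance(2, 6, snd_load=90, fwk_load=30)
print(snd, fwk)   # -> (3, 5): the 0-2 / 3-7 split described above
```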
Many thanks
Are there more R81 VSX deployments out there? I've been upgrading a cluster of 26000T appliances that ran the out-of-the-box R80.30 to R81 Take 44.
The first machine ran the in-place CPUSE upgrade flawlessly, but after a few hours of production time the root partition filled up to 100%, with plenty of directories and files appearing all over the place in the CTX directories that I don't see on other systems still running R80.40. SR open. In all fairness, this issue began to appear on R80.30, and I had hoped the upgrade would solve it.
The second unit didn't succeed with the in-place upgrade; it reported that the configuration couldn't be imported. I did a fresh install from USB and a vsx_util reconfigure, but ran into sk105441. The SK indicates the issue is fixed in R81, so I left feedback in the SK, as that is apparently not the case. Once this was addressed, the very final step of vsx_util reconfigure failed because of a communication issue. I had a productive session with TAC to gather more logs for analysis.
I'm not sure now whether I should have gone for R80.40 on the VSX. I've upgraded appliances running classical FW to R81 T44, and there were no issues there.
I did an R80.30 to R80.40 'clean' build with the latest Jumbo at the time, and it went fine. There was an issue with the in-place upgrade, which was a nightmare, and I had to do a full rollback.
I've not gone to R81.x yet, but I certainly feel a clean install is the way to go. Personally, I would probably go straight to R81.10 with JHFA Take 9 or greater.
And most importantly, ensure you have a proactive case raised and get TAC on a Zoom session with you (not a first-liner, but someone experienced with VSX).