genisis__
Leader

I was wondering if anyone has actually deployed R81 in a VSX setup yet, and whether there are any reported issues?

I'm looking to upgrade my R80.20 VSX setup to R80.40 or R81. I would like to move to R81, but I think it may be a little too early for that.

Also, I know the recommendation is to rebuild, but in the current climate a remote upgrade is the preferred method, so I will likely do an inline upgrade. From what I can tell, the kernel version would get upgraded, multi-queue would be turned on, and other parameters would be enabled by default, such as the core load-balancing parameter (sk168513).

Clearly the new filesystem would not get used, but I don't see this being hugely important on the gateway side (happy to be educated on this if I'm wrong).
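For reference, this is roughly the sanity check I intend to run before and after the in-place upgrade (all in expert mode on the gateway; mq_mng is the kernel 3.10 multi-queue tool as far as I know, so worth verifying against the SKs):

# Kernel version (should move to 3.10 after the upgrade)
uname -r

# SecureXL / acceleration status
fwaccel stat

# CoreXL worker distribution
fw ctl multik stat

# Multi-queue state on kernel 3.10 (expected on by default post-upgrade)
mq_mng --show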

 

Alex-
Leader

I have a large VSX environment on R80.30 kernel 3.10, and my plan is to upgrade management to R81 when the first HF is GA, and the VSX to R80.40 after New Year.

When it comes to VSX, I prefer to be a few levels of HF in after the version becomes widely recommended, but I suppose it depends on your environment's complexity and the features you need.

Magnus-Holmberg
Advisor

I agree with @Alex- here: when it comes to VSX I would wait a few HFAs; normally I wait for an HFA take above 100 for VSX.
Regarding R80.20, I personally think that's a pretty bad release, and I would upgrade more or less directly after the holidays to R80.30 kernel 3.10 or R80.40 (if needed).

As far as we have been told by our ATAM and the PS people we have worked with, the filesystem on the gateway side doesn't really matter.

Regards,
Magnus

https://www.youtube.com/c/MagnusHolmberg-NetSec
genisis__
Leader

Thanks guys, this pretty much falls in line with what I was thinking.

genisis__
Leader

Guys,

Note there is a bug in R80.40 related to logical interfaces not moving from VS0 to the relevant VS. Apparently this is due to a change in kernel 3.10. Check Point have a fix (it goes on top of Take 91); I believe the fix is going to be integrated into a Jumbo.

The symptom we experienced was incomplete MAC entries when reviewing the ARP table on the VSs.
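If anyone wants to check for the same symptom, something like this per VS should show it (expert mode; the VS ID is just an example):

# Switch to the affected VS context
vsenv 2

# Look for unresolved entries in its ARP table
arp -an | grep -i incomplete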

Alex-
Leader

Do you have more information about what happens and under which conditions? I'm using R80.40 VSX Take 89 at a customer (planning to go to Take 91 in the coming days) and haven't hit that kind of issue, at least as far as I know.

genisis__
Leader

If you're already on R80.40 then I don't believe the issue is seen, as you would already be running kernel 3.10. The issue seems to appear when moving from kernel 2.6.18 to 3.10.

The upgrade process went fine, no errors (and Check Point were involved); however, when we came to do testing, we noticed a number of connectivity issues. After investigation we determined that a number of logical interfaces were not seeing MAC addresses.

TAC were not able to resolve this, so information was gathered and we had to do a full rollback (Check Point seriously need to look into supporting a 'vsx_util downgrade' option as well).

TAC then engaged R&D, who investigated the debug files and identified a bug; as a result, a hotfix has been generated.

We are re-attempting this week, but we are going to do a clean build rather than an in-place upgrade.

Chris_Atkinson
Employee

Support for 'vsx_util downgrade' was introduced coinciding with the release of R81.

It's now also more widely supported, as I understand it, per sk166053.
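As a rough outline of how it runs (it executes on the management server and is interactive, so the prompts below are paraphrased; sk166053 has the authoritative procedure):

# On the Security Management Server / MDS, in expert mode
vsx_util downgrade
# Prompts (roughly): management server IP, admin credentials,
# the VSX cluster object to act on, and the target version.
# Take a management database backup before running it.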

genisis__
Leader

Awesome! Finally.

Chris_Atkinson
Employee

Hi Alex,

sk171753 is something to be aware of if using Virtual Switches / Routers.

genisis__
Leader

This is damn handy to know! But we were seeing the interfaces. That said, the creation and update dates suggest this SK could have been created as a result of our issue.

I confirmed with TAC that this was indeed created as a result of the issue we faced.

genisis__
Leader

Alex, separate question, and perhaps we can take it offline if required, but what's the performance like? One of the reasons for moving from R80.20 to R80.40 is to see if our overall performance issues improve.

We have 15600 appliances (rated to handle 10Gbps with 10 VSs according to the sizing tool) and a total throughput of about 2.5Gbps. fwaccel reports the majority of the traffic is being accelerated, but our overall CPU usage is really high compared to the amount of throughput going through the appliances.

We have actually had to split the load across the two nodes, because we start getting latency issues when everything is running on one node.
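In case it helps the comparison, these are the counters we've been collecting on each node (expert mode):

# Ratio of accelerated vs. F2F (slow path) traffic
fwaccel stats -s

# Which CPUs serve as SNDs vs. FWK instances, per VS
fw ctl affinity -l -a

# Live per-core and per-VS utilisation
cpview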

Alex-
Leader

Obviously there are many factors in play when comparing performance. I'm using 16200 appliances (48 cores, 64GB RAM); if I recall correctly the 15000 series has 8 cores, so 16 with HT, and 8/16GB RAM?

With a mix of 10/40Gb adapters and SND set to 8 instead of the default 4 (CP recommendation), everything runs fine with the latest HF in terms of pure performance, with a mix of VSs running different blades. Even though the customer is doing a gradual migration of their network to full segmentation behind this cluster, I don't see performance issues arising.

genisis__
Leader

The 15600 appliance has 2 CPUs (16 cores each with HT), so a total core count of 32.

We currently have the default 4 cores for SND, and there is nothing to suggest these are even breaking a sweat (using some of Tim's handy advice to check things).

We have one node running a single VS with FW/IPS blades only (we did have AV/ABOT turned on, but this started causing performance issues). The concurrent connections on this are around 250K, and I've allocated 12 cores to it.

Attached is a screenshot of the CPU usage. Considering all of the above, I would expect 4 cores at most to be needed for this, and the total utilisation to be below 20%.

Also note that the SND core utilisation is in the 10%-and-below section of the screenshot.

The other node is running the rest of the VSs (5 VSs) with a throughput of about 1.5-2Gbps and comparable core utilisation, although concurrent connections are lower.

Chris_Atkinson
Employee

Was this upgraded in-place from an earlier version, and is the Dynamic Dispatcher enabled?

genisis__
Leader

Clean build of R80.10, then an in-place upgrade to R80.20; the dispatcher is enabled, with the specific kernel parameters set in the fwkern.conf file.
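For anyone following along, the mechanics are just kernel parameters read at boot from $FWDIR/boot/modules/fwkern.conf on the gateway; the parameter name below is a placeholder, the real names come from the relevant SKs or TAC:

# Expert mode on the VSX gateway (placeholder parameter name!)
echo 'example_dispatcher_parameter=1' >> $FWDIR/boot/modules/fwkern.conf

# Parameters are read at boot, so reboot to apply
reboot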

genisis__
Leader

We have now completed the build to R80.40 with JHFA 91 and the specific fix related to sk171753. We did try to allocate more SNDs (a total of 6 cores), but the system did not report the correct number of SNDs. TAC suspect another bug, though it may be cosmetic; still under investigation.

Also, I did not realize that in order to increase the SNDs, CoreXL must be enabled on VS0; if you disable it, the split reverts back to the default of 4 cores. I thought you should always disable CoreXL on VS0.

 

It would be nice if Dynamic Split were supported on VSX; that way we would not have to think about this.

Alex-
Leader

How did you increase your SND count? I did it with affinity commands and never had to enable CoreXL on VS0.

genisis__
Leader

According to TAC, we have to enable CoreXL on VS0, set the number of FWK cores, reboot, and leave CoreXL on. If we disable CoreXL, the core split goes back to the default 4/28.

Did you do fw ctl affinity -s? I've just left mine at auto.

We are running just under 1Gbps of throughput and the core utilisation is about 50% (and we only have FW/IPS/URLF/App Control turned on, with AV/ABOT still left to enable); so looking at it simply, this feels like too high a core utilisation compared to the throughput.

Alex-
Leader

Yes, fw ctl affinity -s -d, then a reboot; it came up with the new SND/worker distribution without CoreXL.

Confirmed also in cpview.
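For completeness, the longer form I've seen documented for VSX (the CPU ranges are only examples for a 32-core box; adjust to your own split):

# Pin all VSs to the higher cores, leaving 0-7 for the SNDs
fw ctl affinity -s -d -vsid 1-5 -cpu 8-31

# Verify the resulting distribution after the reboot
fw ctl affinity -l -a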

PhoneBoy
Admin

We are planning to support Dynamic Split in VSX in a future version, as far as I know.

AmitShmuel
Employee

Correct, VSX will be supported in the upcoming R81/R80.40 JHFs.

Hari
Explorer

Is R81 recommended for VSX?
PhoneBoy
Admin

R81 is widely recommended for all deployments (including VSX).

ShaiF
Employee

Hi Genisis,

This is correct. The fix is under QA evaluation these days and should be part of upcoming JHFs.

Regards,

Shai.

genisis__
Leader

Nice! In the meantime, we have had an odd issue when we tried to increase the SNDs. The TAC engineer is going to raise a new ticket for this with R&D.

So basically, in a future release (hopefully soon), Dynamic Split will be available for VSX, correct?

AmitShmuel
Employee

Correct.

Massimo_Manzato
Participant

Hi Shai,

Can you please tell us from which JHF take R81 supports Dynamic Balancing in VSX?


Many thanks

Massimo

Chris_Atkinson
Employee

@AmitShmuel Are you able to confirm the JHF take please?

AmitShmuel
Employee

Sure, R81 JHF GA Take 58 has Dynamic Balancing VSX support. We've updated the Dynamic Balancing SK as well.
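For anyone landing here later: once on that take, the state can be checked on the gateway itself. Assuming the tool name from the Dynamic Balancing SK, something like:

# Expert mode on the VSX gateway
dynamic_balancing -p    # prints whether Dynamic Balancing is currently on or off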
