
R81.10 Jumbo Hotfix Accumulator - New GA Take #22

eranzo
Employee


Hi All,

 

R81.10 Jumbo HF Take #22 is now our GA take and is available for download to all via CPUSE (as recommended) and via sk175186.

The list of issues resolved in this take can be found in sk175186.
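
For anyone who prefers the command line, the CPUSE flow from Gaia clish typically looks something like the sketch below (package numbers vary per system; see sk92449 for the full CPUSE reference):

    show installer packages available-for-download   # list packages CPUSE can fetch
    installer download <package-number>              # download the Jumbo take
    installer verify <package-number>                # run pre-installation checks
    installer install <package-number>               # install; expect a reboot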

 

Thanks,

Release Operations group

 

18 Comments
Fung_To_Puk
Participant

Hi,

I just tested it in a VM environment and there are many problems after installing this JHF. We see fw_full and scanengine_b generating core dumps from time to time, and CPView is missing a lot of information, e.g. no disk usage is shown on the summary screen, no connection statistics are shown on the UP page, etc.

Marcel_Gramalla
Advisor

I can somewhat agree with @Fung_To_Puk, as I also don't see any disk usage in CPView after the upgrade. I haven't seen any dumps in my very short test. Tested on a Google Cloud deployment.

Fung_To_Puk
Participant

Hi,

 

I have tried a bit more. My lab environment is VMware with one management server and one gateway, both with 4 cores and 8 GB RAM. We noticed that if we increase the Anti-Virus/Anti-Bot scheduled update interval from 1 minute to 10 minutes, there are fewer dumps from fw_full and scanengine_s.
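
For anyone who wants to check for the same symptoms, this is roughly how we spot the dumps from expert mode (paths below are the Gaia defaults):

    # user-mode core dumps land here by default on Gaia
    ls -lh /var/log/dump/usermode/

    # segfault lines from the affected daemons in the syslog
    grep -iE 'segfault.*(fw_full|scanengine)' /var/log/messages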

Arik_Ovtracht
Employee

Hi,

There was indeed an issue with CPView in this Jumbo HF that caused the disk space values to be incorrect.

This will be fixed in the upcoming Jumbo take, planned for release in the February 2022 time frame.
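
In the meantime, the misreported values can be read directly from expert mode with standard Linux tools, for example:

    # per-partition disk usage, human-readable
    df -h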

genisis__
Leader

Other than cosmetic issues, are there any actual issues that break anything? I did note that JHFA22 went to GA very quickly.

Fung_To_Puk
Participant

Hi,

We also noticed core dumps for scanengine_b and scanengine_s. They seem to be related to the Anti-Virus and Anti-Bot update schedule: as we increased the interval from 1 minute to 10 minutes and then to 1 hour, the number of core dumps went down.

Also, Mobile Access seems to develop a problem over time. In our testing we were no longer able to connect/reconnect after around half a day, which required a reboot or a cpstop/cpstart, although existing traffic seemed to keep flowing.

the_rock
Legend

Maybe I got lucky with it, no issues so far 🙂

JozkoMrkvicka
Authority

As Take 22 is the second GA take of R81.10, why is R81.10 still not declared the default version widely recommended for all deployments?

From the FAQ section of Check Point Releases Terminology:

  • Should I upgrade to R81.10?
    R81.10 brings significant improvements in security operation efficiency across the management server's reliability, performance, and scale. As part of Scalable Platforms, R81.10 brings a unique mix and match ability to leverage different Quantum security gateways within a single Quantum Maestro security group.
    This release is initially recommended for customers who are interested in implementing the new features. We will declare this version as the default recommended for all deployments, typically following the 2nd GA Jumbo Hotfix release.
    It will then be available in the 'Recommended Packages' section in the CPUSE tab in the Gaia portal. Our current plan to declare R81.10 as default recommended is Q4 2021.
    For “What’s New”, Release Notes, and more information, refer to R81.10 Homepage.

 

_Val_
Admin

@JozkoMrkvicka it is not only a matter of JHF availability, but also a matter of field adoption. We look for a certain adoption threshold before bumping a particular version to recommended.

shlomip
Employee Alumnus

Hi @JozkoMrkvicka ,

It is a matter of days until R81.10 is declared the default recommended version.

Indeed, we planned to declare it by the end of Q4 2021, and we are a bit late there.

We are now closing the last items and hope to communicate it very soon.

The text you quoted from the SK above will be updated shortly as well.

Stay tuned!

Fung_To_Puk
Participant

These are the segfaults we saw in /var/log/messages in the lab.

There is no problem if we revert to JHF Take 9.

[Screenshot 2022-01-05_215648.png: segfault entries from /var/log/messages]
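
For reference, a rough revert flow via CPUSE clish looks like the sketch below (package numbers differ per system, so list them first; taking a snapshot beforehand is a sensible precaution):

    show installer packages installed      # find the package number of the current take
    installer uninstall <take-22-number>   # roll back the Take 22 Jumbo
    installer install <take-9-number>      # then install Take 9 (download it first if needed)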

shlomip
Employee Alumnus

Hi @Fung_To_Puk ,

Which R81.10 JHF take is it? 22?

Can you share a support ticket number, in case you opened one?

Shlomi

the_rock
Legend

I checked for any segfault messages; they are non-existent. I don't know, maybe I got lucky with this hotfix, but I don't have any of the issues mentioned here.

Fung_To_Puk
Participant

Hi,

 

I did not open a ticket, as this is just a VM lab set up to test R81.10 with the latest JHF and most blades enabled, so we will still recommend that our customers remain on JHF T9 for now.

shlomip
Employee Alumnus

Hi @Fung_To_Puk ,

We don't have any reports of such issues with Take 22. If you manage to see it again, please ping us so we can assist, but a support ticket will also be needed in such a case.

There are important fixes in these takes that we recommend having.

 

shlomip
Employee Alumnus

@Fung_To_Puk ,

Can I ask about 

"mobile access seems to have problem over a period of time as we tested we would not able to connect/re-connect after around half a day"

Are you referring to the Mobile Access blade or to the layer-3 VPN clients?

Do you happen to have an open ticket on this issue?

 

Shlomi

Fung_To_Puk
Participant

@shlomip 

No, we test everything in our lab first, so no ticket was opened. The VPN in question is the Mobile Access blade; we test it with Android, iOS, and Windows clients with Endpoint Security. This VPN problem actually dates back to R80.10: when the Android and iOS clients use an IPsec connection, the transport sometimes changes from IPsec/UDP to IPsec/TCP, after which no traffic is received until we reconnect and it switches back to IPsec/UDP. The Windows client doesn't have this problem, so I am not sure where it goes wrong.

But R81.10 with JHF Take 22 seems to have additional problems: even over an SSL connection, Android and iOS sometimes stop receiving data (while the mobile app still shows connected), and it only resumes after some waiting.

shlomip
Employee Alumnus

@Fung_To_Puk ,

Thanks for that.

I am copying in a few people on this topic:

@Nadav_Feigenbla , @Oren_Souroujon , @idants 
