Emanuel_Miut
Participant

Moving from 4400 (77.30) to 6500 (80.20)

Hi,

 

We have a 4400 cluster (R77.30) and are planning to move to a 6500 cluster (R80.20).

The management server was already moved to R80.20. The new 6500 appliances were installed with R80.20 using ISOmorphic.

For the gateway replacement I was thinking of the following steps.

 


1. Export the configuration from the 4400 appliances - show configuration, then save the configuration to a file (see the example commands below);
2. Import the configuration to the 6500 appliances - paste the commands from the 4400 appliances and verify the interfaces;
3. On the management server, modify the gateway object with the 6500 appliance hardware and change the software version to R80.20;
4. Establish SIC with the 6500 appliances;
5. Install policy on the 6500 appliances.
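
For reference, step 1 could look like this in clish on each 4400 member (the file name is just an example; the saved file then has to be copied off the appliance, e.g. with scp):

> show configuration
> save configuration 4400_member1.txt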

 

Are there any more steps to take into consideration? 

 

Regards,

13 Replies
Wolfgang
Authority

Emanuel, that sounds like a good approach. Before you import your old configuration, check the clish network commands for any references to old MAC addresses, and check your interface assignments (eth0, eth1, ...). Some clish commands are slightly different in R80.20, but you will see and can resolve this when you run the commands.

Regards, Wolfgang
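
As a quick aid for that check, the exported file can be searched in expert mode before importing it (file name from the example above; the patterns are only a guess at how hard-coded MACs and interface lines appear in your export):

grep -i "mac-addr" 4400_member1.txt
grep "^set interface" 4400_member1.txt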
HeikoAnkenbrand
Champion

Before you load the config, set the following parameter in clish so the import continues past any commands that fail instead of aborting:

set clienv on-failure continue

 

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

Use the following steps to import the config on the 6500 appliance:

> set clienv on-failure continue
> load configuration <clish_script_name>
> set clienv on-failure stop
> save config
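
After the load it is worth verifying the result against the 4400 export, for example with these standard clish commands:

> show interfaces all
> show configuration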

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

 

1. Export the configuration from the 4400 appliances - show configuration, then save the configuration to a file;
2. Import the configuration to the 6500 appliances - paste the commands from the 4400 appliances and verify the interfaces;
3. On the management server, modify the gateway object with the 6500 appliance hardware and change the software version to R80.20;
4. Establish SIC with the 6500 appliances;

>>> Between steps 4 and 5, add the license via SmartUpdate.

5. Install policy on the 6500 appliances.

 

 

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
HeikoAnkenbrand
Champion

What's new in R80:

R80.10 and above offer many technical innovations compared to R77.

- new fw monitor inspection points for VPN (e and E)

- new MultiCore VPN - UP Manager

- Content Awareness (CTNT)

...

R80.20 and above:

- SecureXL has been significantly revised in R80.20. It now works in user space. This has also led to some changes in "fw monitor".

- There are new fw monitor chain (SecureXL) objects that do not run in the virtual machine.

- Because SecureXL now runs in user space, it no longer consumes kernel memory per core; previously the SecureXL driver's per-core kernel memory added up to more than Intel/Linux allowed.

- SecureXL now supports Async SecureXL with Falcon cards.

- New in the acceleration high-level architecture (SecureXL on the acceleration card): Streaming over SecureXL, Lite Parsers, Scalable SecureXL, Acceleration stickiness.

- Policy push acceleration on Falcon cards.

- Falcon cards for: Low Latency, High Connections Rate, SSL Boost, Deep Inspection Acceleration, Modular Connectivity, Multiple Acceleration modules.

...

R80.30 and above:

- In R80.30+, you can also allocate a core for management traffic if you have 8 or more cores licensed, but this is not the default.

- Active streaming for HTTPS with full SNI support.

...
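
Given how much SecureXL changed, it can be worth confirming on the new 6500 appliances that acceleration is actually running after the cutover; these are standard commands, not specific to this thread:

fwaccel stat
fwaccel stats -s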

➜ CCSM Elite, CCME, CCTE ➜ www.checkpoint.tips
Emanuel_Miut
Participant

>>> Between steps 4 and 5, add the license via SmartUpdate.
Thank you, Heiko, for this. I believe I also have to detach the old licenses and install the new ones.
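
For what it's worth, the licenses currently attached to a gateway can be listed locally with the standard command:

cplic print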
Dmitry_Krupnik
Employee Alumnus

Hello Emanuel,

Don't forget to move simkern.conf and fwkern.conf to the new cluster (if you have these files on your 4400 cluster):

  • $FWDIR/modules/fwkern.conf
  • /opt/CPppak-R77/boot/modules/simkern.conf

Important to note that the location of the simkern.conf file changed in R80.20. The new location is $PPKDIR/conf/simkern.conf
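
A minimal sketch of the copy in expert mode, assuming the files exist on the 4400 and using /var/tmp as a staging directory (any transfer method works):

# on the old 4400 member
cp $FWDIR/modules/fwkern.conf /var/tmp/
cp /opt/CPppak-R77/boot/modules/simkern.conf /var/tmp/
# transfer both files to the 6500 (e.g. with scp), then place them as:
#   fwkern.conf  -> $FWDIR/modules/fwkern.conf
#   simkern.conf -> $PPKDIR/conf/simkern.conf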

Regards, Dmitry Krupnik

Emanuel_Miut
Participant

Thank you, Dmitry, I will check for those files. That's a step I didn't take into consideration.

 

Wolfgang
Authority

Making a copy of the kernel parameter files is a good idea, so you can restore the settings later.

But I would start without them: there are a lot of improvements in R80.20, some kernel parameters are now the default, and some are no longer needed. To be safe, you can check the behaviour of your settings against the new release.

One more thing to check: if you have the Mobile Access Blade enabled and made changes to the design of the MAB web pages (like changing colors, footer or header, removing buttons, adding your own copyright, etc.), you have to copy these changes to the new gateway.

Wolfgang

TomasFy
Explorer

Hi,

In the case of the Mobile Access Blade - if the GW is a gateway for mobile VPN users - you need to copy any changes introduced to the $FWDIR/conf/trac_client_1.ttm file, especially any MEP or Location Awareness configuration.
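
A simple way to see whether that file was customised is to diff the old gateway's copy against the new one before overwriting anything (the staged file name here is hypothetical):

diff /var/tmp/trac_client_1.ttm.4400 $FWDIR/conf/trac_client_1.ttm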

Emanuel_Miut
Participant

This is not the case here, but thank you for your input. It might be really valuable for future migrations.
Dmitry_Krupnik
Employee Alumnus

Emanuel,

If you have questions about the content of these files, send them to me in a private message and I will help you understand which parameters should be kept and which could be removed. I still recommend keeping all of the content, but I agree with Wolfgang that it would be good to review and remove redundant parameters, if any.

Regards, Dmitry Krupnik

TA_05
Participant

I can confirm that this should work swimmingly; I just performed nearly the exact same migration, with just a handful of minor differences: I moved from clustered 4800s on R80.10_take154 to clustered 6500s on R80.10_Newesttake and installed 10G cards in the 6500s. I likewise just imported the configuration of the 4800s and loaded it on the 6500s. I was able to perform this migration with nearly zero downtime for the cluster.

 

A few takeaways. I'll echo the advice to verify the simkern.conf and fwkern.conf files. And don't load R80.10_take154 on the 6500s (I was only attempting this for consistency in the environment): when I did load that take on the 6500s, all of the interfaces got a bit wonky and renamed themselves to eth1_rename and so forth.

The procedure: I powered off the second member in the cluster, plugged the sync cable from the active member into the first new member (6500), updated the model number in SmartDashboard, established SIC with the new member, then pushed policy to the cluster (making sure to uncheck the "For gateway clusters" box, as I knew the push would otherwise fail on the already running member). Once the policy successfully pushed to the new member, I verified the new gateway was in Active Attention state and that the Magic MAC was the same on the new member. Then I powered down the last 4800, watched the 6500 become active, and it started working exactly as expected. The last piece was to move the sync cable to the second 6500 member, and it was rinse and repeat from there.

I know this methodology is more than likely not Check Point approved or advised, but I can confirm that in my case all went as planned.
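
For anyone repeating this, the cluster state and the Magic MAC can be checked on each member with standard commands (fwha_mac_magic is the kernel parameter behind the Magic MAC):

cphaprob stat
fw ctl get int fwha_mac_magic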

 

Hope yours goes as well as it did for me!
