RamGuy239
Advisor

$MDS_FWDIR/scripts/migrate_server export with logs and indexing data causes gzip to run at 100%

Hi,

I'm trying to utilise the capabilities of the new upgrade tools, available from R80.20 onwards, to take a single-domain management export containing logging and indexing data.

This is a move from R81 to R81.10. The server runs virtually on VMware ESXi with R81 and Take 36 installed, and it has the latest builds of both the R81 and R81.10 upgrade tools packages from sk135172. These are updated automatically, as the management server has connectivity to updates.checkpoint.com.

I'm also using sk163814 and the R81 and R81.10 admin guides.


The process is quite simple: use $FWDIR/scripts/migrate_server export -v R81.10 /path/filename.tgz for an export without logging data, add the -l flag to include logging data, or use -x to include both logging and indexing data.
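
For reference, the three variants look like this (the target path is just a placeholder):

$FWDIR/scripts/migrate_server export -v R81.10 /var/log/export.tgz       # configuration only
$FWDIR/scripts/migrate_server export -v R81.10 -l /var/log/export.tgz    # configuration + logging data
$FWDIR/scripts/migrate_server export -v R81.10 -x /var/log/export.tgz    # configuration + logging and indexing data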

I've utilised the migrate_server script a bunch of times. It's not anything new to me but utilising -l or -x is something I haven't done previously.

-x requires R80.20+, and if you are moving from pre-R81 to R81 or later you can't use it, because the indexing data format on R80.20-R80.40 is not the same as on R81+. In that case you have to use -l and take only the logging data.


In my scenario, I'm moving from R81 to R81.10 so both -l and -x should be compatible.


This isn't working out too great for me. First of all, there doesn't seem to be any option to tell migrate_server how many days of data it should include. On bigger management servers this means it will try to swallow all the logging data; in this particular case that would mean several hundred gigabytes of logging data going back several months. That's not needed, would obviously take forever, and would result in a huge .tgz file.

I couldn't find any way around this other than reducing the logging data on the management server to the number of days we want in the export. So we took a VMware ESXi snapshot, and then I removed logging data so that only the last month remained before I did the export.
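
For anyone curious, the trimming itself was nothing official, just looking at what was on disk and getting rid of the old files, and only after the ESXi snapshot was taken. This assumes the plain log files live under $FWDIR/log:

du -sh $FWDIR/log                                                       # total amount of log data on the box
find $FWDIR/log -maxdepth 1 -name "*.log*" -mtime +30                   # log files (and their pointer files) older than ~30 days
find $FWDIR/log -maxdepth 1 -name "*.log*" -mtime +30 -exec rm -f {} +  # remove them, only once the snapshot is in place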

Am I missing something here? Shouldn't there be a parameter/flag letting us define whether we want to include x days or x months of data? We had to get creative here.


Sadly, even after reducing the logging data to one month, the export is not working. We tried both -x and -l, and we also tried reducing the amount of logging and indexing data from one month down to 10 days, but the result is the same.

The script gets to 18% ("exporting domains"), which seems to be the step where the logging and indexing data get moved. Watching the temp files being created in our output directory, the temporary .tgz file grows to around 20GB and then the script ends with an error. gzip has been running at 100% CPU the entire time, and it stays stuck at 100% even after the script has failed and all temp files have been automatically removed.
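
For what it's worth, this is how I was watching it, using plain Linux tools and the output directory from the error below:

watch -n 10 'ls -lh /var/log/ocdnorway/migrate-*/'    # temporary archive slowly growing towards ~20GB
top -b -n 1 | grep -i gzip                            # gzip sitting at 100% of a single core the whole time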

The upgrade-XXXX.html log doesn't really say much:

System Data    3 h.
Error Message  Failed to execute command /opt/CPupgrade-tools-R81.10/bin/migrate export --config_only -n -x --smartlog --optimize_space_usage /var/log/ocdnorway//migrate-11.08.2021.103420//a0eebc99-afed-4ef8-bb6d-fedfedfedfed. timeout while waiting for command: [/bin/bash, -c, /opt/CPupgrade-tools-R81.10/bin/migrate export --config_only -n -x --smartlog --optimize_space_usage /var/log/ocdnorway//migrate-11.08.2021.103420//a0eebc99-afed-4ef8-bb6d-fedfedfedfed]
Export Time    10:34:20 - 13:34:20


We get the same kind of error regardless of whether we use -l or -x, and regardless of whether the server holds one month or 10 days of logging data. A regular export without -l or -x works just fine.


I can't find anything in UserCenter or in the admin guide about a maximum file size, a maximum number of days, or anything similar. We have plenty of space on both lv_current and lv_log, so the server is not even close to running out of anything during the process; it all seems to crash when the file is around 20GB in size. We have 60GB free on lv_current and 1.6TB free on lv_log.
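
For completeness, this is what the free space check looked like (on Gaia, lv_current is mounted on / and lv_log on /var/log):

df -h / /var/log    # roughly 60GB free on lv_current, 1.6TB free on lv_log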

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
PhoneBoy
Admin

Getting the logging data with migrate/migrate_server is an all-or-nothing proposition.
That said, it's generally better to move the logging data separately due to the difficulties of moving and processing exceptionally large archive files.

RamGuy239
Advisor

I ended up simply moving 10 days of logs and indexes manually; it was about 60GB in size. I have a hard time understanding the point of the -l and -x options if they can't even handle 10 days of log files totalling 60GB. In what scenario would these two flags make any sense?
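
In case it helps anyone else, the manual move was nothing more exotic than something along these lines (paths and hostname are examples, and I'm pointing at $RTDIR/log_indexes for the indexes here, so double-check where yours live):

# last ~10 days of log files (including their pointer files) into one archive
cd $FWDIR/log
find . -maxdepth 1 -name "*.log*" -mtime -10 | tar -czf /var/tmp/logs-last10days.tgz -T -

# the index data as-is; $RTDIR/log_indexes is an assumption, adjust for your environment
tar -czf /var/tmp/log-indexes.tgz -C $RTDIR log_indexes

# copy both to the new R81.10 server and unpack them into the matching directories there
scp /var/tmp/logs-last10days.tgz /var/tmp/log-indexes.tgz admin@new-mgmt:/var/tmp/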

Certifications: CCSA, CCSE, CCSM, CCSM ELITE, CCTA, CCTE, CCVS, CCME
PhoneBoy
Admin

In environments where you have that amount of logs, those flags don't make sense.
The flags are useful when the amount of logs to migrate is relatively small.

Timothy_Hall
Champion

Unfortunately the implementation of gzip is single-threaded (not multi-core) and accounts for quite a bit of the time required for upgrades, jumbo HFA applications, and SMS config migrations to complete.  Dozens of cores don't help when these procedures get held up by gzip pounding the crap out of a single core; I see it every day in the CCSE course labs, and it is quite annoying as the perception is that these operations are way too slow even on top-tier hardware.

pigz is the multi-threaded replacement for gzip and is included with Gaia 3.10; I hope Check Point will start using it instead of sticking with gzip, which has changed very little since the 1990s. Obligatory @PhoneBoy tag to R&D...
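
For anyone who wants to see the difference for themselves, pigz is a drop-in replacement at the pipeline level (the directory and output path here are just an example):

# classic: gzip keeps one core pegged at 100% for the duration
tar -cf - $FWDIR/log | gzip > /var/tmp/logs.tgz

# pigz: same gzip-compatible output, but the compression is spread across all available cores
tar -cf - $FWDIR/log | pigz > /var/tmp/logs.tgz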

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
PhoneBoy
Admin

@Dima_M think this is something we can add in the migration tools?

Itai_Minuhin
Employee

Hi All, 

My name is Itai and I'm an R&D team leader responsible for the upgrades of Management Servers and Log Servers.

I'm sorry for the bad experience you had when trying to upgrade the logging data. Thank you for your important feedback.

We will review this feedback together with the logging team and provide a proper solution and documentation.

 

Best regards, 

Itai 

