API performance optimization
When creating new objects, it takes over 10 minutes to create around 200 objects.
Is there any way to improve API performance when creating objects and policies?
Cheers.
Labels: General, Object Management
Accepted Solutions
The basic flow of the API is the following:
When you execute a single command, for example "add host", using mgmt_cli without calling the login command first, the mgmt_cli executable performs the following steps in the background: [login] -> [add host using the session ID from login] -> [publish] -> [logout].
To control this behavior, call the login command first, retrieve the session ID, reuse that session ID for all of your changes (for example, adding multiple host objects), publish all the changes at once using the same session ID, and then log out.
If you execute mgmt_cli add host with the batch flag [-b], the executable will create all 200 hosts in one session and publish all the changes at the end.
Using the batch flag, it took me ~2 minutes, with one publish and one command, to add 200 hosts on my test machine: an R80 management server running in VMware Workstation on a laptop.
After the publish, the hosts are available to the other administrators.
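A sketch of both approaches described above (the credentials, host names, IP addresses, and the hosts.csv file are placeholders; the CSV column headers follow the mgmt_cli field names):

```shell
# Approach 1: one explicit session, many changes, a single publish
mgmt_cli login user "admin" password "****" > session.txt
mgmt_cli add host name "host-001" ip-address "10.0.0.1" -s session.txt
mgmt_cli add host name "host-002" ip-address "10.0.0.2" -s session.txt
# ... further changes reusing the same session ID ...
mgmt_cli publish -s session.txt
mgmt_cli logout -s session.txt

# Approach 2: the batch flag -- one command creates every host in the CSV,
# followed by a single publish at the end.
#
# hosts.csv:
#   name,ip-address
#   host-001,10.0.0.1
#   host-002,10.0.0.2
mgmt_cli add host -b hosts.csv -s session.txt
```

Either way, the key saving is the same: one login and one publish for the whole batch instead of one per object.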
Hi Leon,
We will contact you soon to get the details.
I noticed this when bulk-creating objects at a customer site with the R80 mgmt CLI. Our workaround was to ensure everyone was out of the SmartConsole before starting the bulk object creation from the management CLI. It ran much faster that way. I think it has something to do with the fact that a publish occurs after every single object is created, and when multiple administrators are in the SmartConsole it seems to have to stop and wait for the publish to reach all SmartConsoles before moving on to the next object.
--
My book "Max Power: Check Point Firewall Performance Optimization"
now available via http://maxpowerfirewalls.com.
CET (Europe) Timezone Course Scheduled for July 1-2
I figured there had to be some way to submit a series of transactions from the Management CLI and then publish them all at once (similar to a start...commit in clish) but couldn't figure out how to do it. Thanks for the clarification.
Thanks for the clarification.
However, we need to meet a requirement for bulk-installing 5000 objects within a certain time frame.
The result in our lab is 200 objects in 2 minutes.
Is there a recommended approach for 5000 objects?
Some more explanation for my question.
In this project, our competitor can install 5000 objects within 2 minutes through their API, because they simply rewrite the configuration file.
Do we have a similar way to improve API performance?
Can your competitor verify that the added objects do not break the policy? That no rule hides another rule? No duplicate IPs? No broken UID references? Generally, Check Point chooses reliability over speed, and validating the security configuration is what takes the time. And we want to assume that adding 5000 new objects is not an everyday case; if it is, perhaps Dynamic Objects might be more suitable.
Let me know your thoughts on this.
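For the Dynamic Objects suggestion above, one possible sketch: the gateway-side `dynamic_objects` utility updates an object's address ranges directly on the gateway, without a management-side publish. The object name and IP range below are placeholders, and the exact flag set may vary by release, so treat this as an assumption to verify against your CLI reference:

```shell
# Create a dynamic object on the gateway
dynamic_objects -n bulk_hosts

# Add an address range to it (no publish cycle involved)
dynamic_objects -o bulk_hosts -r 10.0.0.1 10.0.19.136 -a

# List dynamic objects to verify the change
dynamic_objects -l
```

The trade-off is that these updates bypass the management database validation that the API performs, which is exactly the reliability-versus-speed point made above.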
You may want to follow sk119553 to increase the amount of memory allocated to the API server, depending on your hardware. By default it runs 32-bit with 256 MB. We recently moved to 64-bit with 4 GB, and performance has improved greatly.
Previously, processing was very slow for us.
Right. The default memory configuration is not enough for big databases.
In a 32-bit environment, you may increase it to 1 GB (up to a maximum of 2 GB).
Hi,
Can I restrict a user so that they are NOT able to use detail-level "full"?
