Hi @_Val_,
I'm trying to adapt the script you linked to my use case. I've made some changes and everything works well, but I run into problems with blacklists containing a large number of IPs, because the API session expires.
For instance, the issue occurs if I try to import the FireHOL Level 3 list (containing more than 17K IPs). Please see the script attached.
I've also added a session-timeout of 1 hour to the login call (line 62):
mgmt_cli login user $v_cpuser password $v_cpuserpw session-timeout 3600 --format json > id.txt
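As a sanity check (assuming jq is available and that the JSON reply is what ends up in id.txt), the timeout the server actually applied should be echoed back in the login response, so it can be read with something like:
# read back the session-timeout the server accepted from the login reply
jq -r '."session-timeout"' id.txt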
With the longer timeout the script can import more IPs, but still not enough to complete the whole list: after around 3K-4K IPs, the session always expires with:
code: "generic_err_wrong_session_id"
message: "Wrong session id [oLZge4cBkVQqZSYdLHX0awi3p9PsXnW-VmINXBjMcoc]. Session may be expired. Please check session id and resend the request."
To avoid the expiration, I've also added a keepalive before each network object addition (line 116):
... { print "mgmt_cli keepalive -s id.txt > /dev/null 2>&1; ...
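In full, the pass is along these lines (a simplified sketch of what the attached script does, using the same $v_diff_add_sh / $v_diff_add_sh_awk files as below):
# prefix every generated "add" command with a keepalive on the same session
awk '{ print "mgmt_cli keepalive -s id.txt > /dev/null 2>&1; " $0 }' $v_diff_add_sh > $v_diff_add_sh_awk
mv $v_diff_add_sh_awk $v_diff_add_sh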
Unfortunately, nothing changed.
Furthermore, in order to save the changes step by step, I've added a publish every 500 network object additions (lines 118-119):
awk '{print;} NR % 500 == 0 { print "mgmt_cli publish -s id.txt"; }' $v_diff_add_sh > $v_diff_add_sh_awk
mv $v_diff_add_sh_awk $v_diff_add_sh
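Schematically, the generated script therefore ends up looking like this (object names and IPs below are made up, just to show the pattern):
# excerpt of $v_diff_add_sh after both awk passes (illustrative only)
mgmt_cli keepalive -s id.txt > /dev/null 2>&1; mgmt_cli add host name "blk_192.0.2.1" ip-address "192.0.2.1" -s id.txt
mgmt_cli keepalive -s id.txt > /dev/null 2>&1; mgmt_cli add host name "blk_192.0.2.2" ip-address "192.0.2.2" -s id.txt
...
mgmt_cli publish -s id.txt   # inserted after every 500th line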
Do you have any suggestions to keep the session alive? I can't understand why it expires when there is a keepalive before every network object addition.
Thanks,
Francesco