@_Val_, our initial problem was addressed here, and when I needed more details afterwards, I asked for further clarification here.
So, we observed on our GWs that randomly (once a day or every two days) we get a HUGE number of connections, anywhere from 300K up to a peak of 1M.
The appliance handled this, but we were still seeing impact on some legitimate traffic and an increase in memory utilization.
From there, we started looking for a way to identify the connections causing this behavior. The only feasible option was a script that takes the connection list from "fw tab -u -t connections -f" and sorts it by SRC and, separately, by DST. So we developed a BASH script that gets triggered when "fw ctl pstat | grep Concurrent | awk '{print $3}'" exceeds 150K (or 200K) connections.
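For anyone curious, the trigger-and-sort logic can be sketched roughly like below. This is a minimal sketch, NOT our production script: the threshold, the dump path, and especially the column positions assumed for the `fw tab -u -t connections -f` output are my own placeholders and will need adjusting to what your version actually prints.

```shell
#!/bin/bash
# Rough sketch of the connection "top talkers" monitor described above.
# ASSUMPTIONS (adjust to your environment): the "Concurrent" line of
# `fw ctl pstat` carries the count in field 3, and the `fw tab` dump is
# ';'-separated with the source IP in column 2 and the dest IP in column 4.

THRESHOLD=150000   # concurrent-connection count that triggers the dump
TOP_N=10

# Print the TOP_N most frequent values of a given ';'-separated column
# read from stdin, highest count first.
top_talkers() {
    local col="$1"
    awk -v c="$col" -F';' 'NF >= c {print $c}' \
        | sort | uniq -c | sort -rn | head -n "$TOP_N"
}

# Only run the gateway-side part where the fw CLI actually exists.
if command -v fw >/dev/null 2>&1; then
    concurrent=$(fw ctl pstat | grep Concurrent | awk '{print $3}')
    if [ "${concurrent:-0}" -gt "$THRESHOLD" ]; then
        dump="/var/tmp/connections.$(date +%Y%m%d-%H%M%S).txt"
        fw tab -u -t connections -f > "$dump"
        echo "== TOP $TOP_N sources =="
        top_talkers 2 < "$dump"
        echo "== TOP $TOP_N destinations =="
        top_talkers 4 < "$dump"
    fi
fi
```

On the gateway this would run from cron every minute or so; off-box, only the `top_talkers` helper does anything.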
With the TOP 10 from the BASH script in hand, we looked further and filtered the connections by those TOP X IPs, and we saw that over the last month the HIGHEST connection counts were against our public DNS server (seen across all continents/DataCenters).
So right now we have a plan and a way to determine what is causing our problems.
In order to limit the impact of this RANDOM HUGE DNS traffic, we implemented some fast_accel rules (specifically for the DNS traffic) so we don't overload the appliance for now (not sure if it's a GOOD or BAD approach).
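For reference, the fast_accel part looks roughly like the fragment below. I'm writing this from memory, so treat the exact syntax as an assumption and verify it against sk156672 (SecureXL Fast Accelerator) on your version; the DNS server IP is a placeholder.

```shell
# Enable the SecureXL Fast Accelerator feature (verify syntax per sk156672)
fw ctl fast_accel enable

# Let DNS traffic to our public DNS server bypass deep inspection
# (203.0.113.53 is a placeholder; 53 = dport, 17 = UDP)
fw ctl fast_accel add 0.0.0.0/0 203.0.113.53 53 17

# List the currently installed fast_accel entries
fw ctl fast_accel show_table
```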
While analyzing the data, we also implemented some fwaccel rules that cap the number of new connections per IP at 500 (not sure if that's too high or too low). The fwaccel action is still set to Notification for now, as we're looking into the best way to block this without impacting GOOD traffic.
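The cap is a SecureXL Rate Limiting rule along the lines of the sketch below. Again from memory: the flags and keywords are an assumption on my part, so check them against sk112454 (Rate Limiting rules for DoS Mitigation) before relying on them; switching the Notify action to drop would be the blocking variant.

```shell
# Cap new connections per source at 500 in Notify mode
# (flags are an assumption -- verify exact syntax in sk112454 before use)
fwaccel dos rate add -a n -l r service any source any new-conn-rate 500

# Show the installed rate-limiting rules
fwaccel dos rate get
```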
So, in the end, all of this was done pretty much with scripting (watching other tools' return data). I was hoping we could get something like this from SmartEvent or a Logging solution, or some way to count connections over a predefined time window and report them somehow, maybe it could even be included in Skyline 😊 .
Ty,
PS: sorry for the looong post...