Curious if anyone has some insights on when to use Log Distribution when you have two or more log servers.
Under each GW object => Logs, there is a Log Distribution option when you have multiple loggers defined.
- "Send a copy of every log to each of the primary log servers" - with logs going to defined backup server if primary fails
- "Distribute logs between log servers for improved performance (applies to primary and backup log servers)"
What I am curious to understand is the 2nd option and where performance is affected, on both the log server AND GW side.
I can see how performance can improve on the log server side, with the load balanced across the log servers defined under the 'primary' section so that no single server takes all of it.
But.....does that come at a performance cost on the GW side? Does the GW use more memory and/or CPU by maintaining multiple log server connections and tracking what is sent to each.....versus option 1, where the GW only has to maintain one log connection and just forward?
I have concerns about memory usage on the GWs here and whether option 2 consumes more, as I have a lot of 3000 series appliances where memory is both low (8GB) and can't be added.
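To try to quantify this myself, I've been using something like the rough check below from expert mode on a gateway: count the established connections to the log servers on TCP 257 (FW1_log) and report the resident memory of the fwd daemon, then compare the numbers with option 1 vs option 2. This is just a sketch under my own assumptions (Linux/Gaia, log forwarding done by a process literally named "fwd", IPv4 log traffic on TCP 257), not an official way of measuring it:

```python
#!/usr/bin/env python
# Rough check of GW-side logging footprint (run in expert mode).
# Assumptions: log traffic is IPv4 on TCP 257 (FW1_log) and the log
# forwarding daemon is the process named "fwd".
import glob

LOG_PORT_HEX = "%04X" % 257   # remote port in /proc/net/tcp is hex (0101)
ESTABLISHED = "01"            # TCP state code for ESTABLISHED

def log_connections():
    """Count established TCP connections whose remote port is 257."""
    count = 0
    with open("/proc/net/tcp") as f:
        next(f)                                   # skip header line
        for line in f:
            fields = line.split()
            rem_port = fields[2].split(":")[1]    # fields[2] = rem_address
            if rem_port == LOG_PORT_HEX and fields[3] == ESTABLISHED:
                count += 1
    return count

def fwd_rss_kb():
    """Return resident memory (kB) per process named 'fwd'."""
    sizes = {}
    for comm_path in glob.glob("/proc/[0-9]*/comm"):
        try:
            with open(comm_path) as f:
                if f.read().strip() != "fwd":
                    continue
            pid = comm_path.split("/")[2]
            with open("/proc/%s/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        sizes[pid] = int(line.split()[1])
        except IOError:
            pass                                  # process exited mid-scan
    return sizes

if __name__ == "__main__":
    print("Established log-server connections (TCP 257): %d" % log_connections())
    for pid, kb in fwd_rss_kb().items():
        print("fwd pid %s RSS: %.1f MB" % (pid, kb / 1024.0))
```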
If there is a known hit here, I am more willing to go with option 1 and just 'split' the primary/secondary log servers across the fleet for a rough 50/50 hard split (understanding that it would not be truly balanced on the log server side, since log rates from each GW change dynamically).
In the middle of a complete management server/log server migration project and would like to make sure I have made the right choice 🙂 We had option 1 set up with the hard split and moved to option 2.....and noticed one of my standby cluster sites had a large increase in memory right after policy install. Want to be sure it's not an anomaly and that I'm not going to see memory load gradually increase across the infra with this deployment.
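To rule out a one-off on that standby site, the plan is simply to sample memory for a couple of hours around a policy install and see whether the jump levels off or keeps climbing. A minimal sketch of that, assuming a plain Linux /proc/meminfo as in Gaia expert mode (MemAvailable may not exist on older kernels, hence the fallback):

```python
#!/usr/bin/env python
# Sample overall memory every INTERVAL seconds around a policy install
# to see whether a post-install jump is a one-off or a steady climb.
# Assumption: standard Linux /proc/meminfo (Gaia expert mode).
import time

INTERVAL = 60      # seconds between samples
SAMPLES = 120      # roughly 2 hours of data

def meminfo():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])    # value in kB
    return values

for _ in range(SAMPLES):
    m = meminfo()
    # Prefer MemAvailable; fall back to free + buffers + cached otherwise.
    avail = m.get("MemAvailable",
                  m["MemFree"] + m.get("Buffers", 0) + m.get("Cached", 0))
    print("%s  available: %6.1f MB of %6.1f MB total"
          % (time.strftime("%Y-%m-%d %H:%M:%S"),
             avail / 1024.0, m["MemTotal"] / 1024.0))
    time.sleep(INTERVAL)
```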
Appreciate any immediate input or past experience on this subject 🙂