I realize this is a very subjective question as it all depends on your environment, but I've noticed a dramatic rise in log volume over the years as we've migrated from an R77 environment to R81.10.
We log basically two datacenters to a single log server, with one of them getting the majority of the inbound/outbound Internet traffic. The server is set to rotate logs every 2 GB, and we're generating roughly one 2 GB file every 20-30 minutes. Yesterday alone we generated 52 of those 2 GB files, and so far we've accumulated 2.3 TB of fw logs on our 4 TB log server since Jan 13.
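To put that rate in perspective, here is a quick back-of-the-envelope calculation (Python, purely illustrative; it uses only the numbers above and assumes no purging or archiving):

```python
# Back-of-the-envelope sizing from the figures in this post (all approximate).
file_size_gb = 2                # log rotation threshold
files_per_day = 52              # yesterday's file count
daily_volume_gb = file_size_gb * files_per_day           # ~104 GB/day

disk_size_tb = 4
used_tb = 2.3                   # accumulated since Jan 13
free_gb = (disk_size_tb - used_tb) * 1024                # ~1740 GB remaining

days_until_full = free_gb / daily_volume_gb
print(f"Daily log volume: ~{daily_volume_gb} GB")
print(f"Days until the 4 TB partition fills at this rate: ~{days_until_full:.0f}")
```

At roughly 100 GB a day, the remaining space on that partition only covers a couple more weeks at the current rate, which is why this has become pressing.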
Just to add context, I opened a support case back in October 2016 because we were generating 24 GB of logs a day, which at the time was a significant increase over what we had seen in the past. Mind you, this is the same datacenter, so the number of users has stayed more or less constant over the years. We were able to cut that volume roughly in half by not logging telnet drops ahead of the final drop rule.
Since then it's risen to 40-50 GB a day (without the final drop rule logging), and with the drop rule logging enabled it's almost double that. I would say 66% of the logs are inbound Internet drops from our major datacenter. I used to be able to get a better count by opening a single file for analysis in the older SmartView Tracker application, but the log files being generated now don't play so well with that tool anymore. I just don't understand why we saw this dramatic increase several years ago and why it has only gotten worse since. Each 2 GB file is made up of approximately 8 million entries (based on SmartView Tracker), which works out to roughly 260 bytes per entry and about 320,000 entries a minute.
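For anyone checking my math, those per-entry numbers come out of this rough calculation (Python, just to show the arithmetic; the 25 minutes is my assumed midpoint of the 20-30 minute rotation interval, and I'm counting the 2 GB as binary gigabytes):

```python
# Rough per-entry math from the figures above (counts are approximations).
BYTES_PER_FILE = 2 * 2**30       # one 2 GB log file, counted as GiB
ENTRIES_PER_FILE = 8_000_000     # per SmartView Tracker
MINUTES_PER_FILE = 25            # assumed midpoint of the 20-30 minute rotation

bytes_per_entry = BYTES_PER_FILE / ENTRIES_PER_FILE        # ~268 bytes
entries_per_minute = ENTRIES_PER_FILE / MINUTES_PER_FILE   # ~320,000
entries_per_day = entries_per_minute * 60 * 24             # ~460 million

print(f"~{bytes_per_entry:.0f} bytes per entry")
print(f"~{entries_per_minute:,.0f} entries per minute")
print(f"~{entries_per_day / 1e6:.0f} million entries per day")
```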
Is this just normal? I know there isn't an exact size for log entries, but I just feel like we're drowning in data.