Hi All,
My research on this topic has left me conflicted about exactly how to run reports against older log files. Here's my scenario: an SMS with all roles on one appliance (SmartEvent, Logging, etc.). The appliance was configured to delete indexes after 14 days. There was an audit request to provide security reports going back 100 days, and running reports against that timeframe came up blank (as expected).
Based on my research, here is what I did:
- Verified that there are logs for the time period in $FWDIR/log/
- Checked whether there are indexes for the time period in $RTDIR/log_indexes/ (there weren't; as expected, they only went back 14 days)
- Followed sk111766 to make sure days_to_index was set to 100 days
- Configured the retention settings in SmartConsole to only delete indexes older than 100 days
- Backed up and deleted $INDEXERDIR/data/FetchedFiles to force re-indexing (rough commands for these steps are sketched after this list)
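For reference, this is roughly what the checks and the FetchedFiles reset looked like from expert mode. Treat it as a sketch rather than an SK procedure; the backup destination is just an example path I chose:

    # Confirm raw log files exist for the audit period
    ls -lh $FWDIR/log/*.log

    # Confirm which index folders exist (in my case they only covered ~14 days)
    ls -l $RTDIR/log_indexes/

    # Back up FetchedFiles, then remove it so the indexer re-reads older logs
    cp -p $INDEXERDIR/data/FetchedFiles /var/log/FetchedFiles.bak   # example backup location
    rm -i $INDEXERDIR/data/FetchedFiles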
I did this a couple of days ago, and the Java and log_indexer processes are still taking up significant CPU, so I am assuming indexing is still going on. According to the documentation it can take several days; in my case it's a 525 appliance with spinning disks, so not the fastest storage.
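In case it helps, these are the generic expert-mode checks I am using to gauge progress (nothing beyond standard Linux tools and the usual paths, so again just a rough sketch):

    # Watch CPU usage of the java and log_indexer processes
    top

    # New per-day index directories should appear as older logs are indexed
    ls -ltr $RTDIR/log_indexes/

    # Overall size of the index store, to see whether it is still growing
    du -sh $RTDIR/log_indexes/

    # The indexer's own log, for more detail (path may vary by version)
    tail -f $INDEXERDIR/log/log_indexer.elg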
Is re-indexing as described above sufficient to populate the indexes and run reports against the older logs (within the 100-day timeframe, of course), or are additional steps necessary? Specifically, does one also need to follow the steps in sk98894 (How to run SmartEvent Offline Jobs for multiple log files)?
Thanks,
Ruan