Based on that enabled_blades list I'd say it is almost certainly IPS that is the culprit.
The sar command keeps 30 days of history. To access the historical network counter data, run:
sar -n EDEV -f /var/log/sa/saXX (XX is the day number you want to see, so July 26 would be 26.)
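For example, assuming the disruption was on July 26 somewhere around 14:00 (the day number and times here are just placeholders, substitute your own), you can narrow the output to a window around it with sar's -s and -e start/end options:
sar -n EDEV -f /var/log/sa/sa26 -s 13:30:00 -e 15:30:00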
It might also be interesting to see the CPU load and where the cycles were being spent during the disruption (us, sy, wa, etc.), so also run sar -f /var/log/sa/saXX and post the results. You can also try poking around with cpview in history mode, which likewise keeps 30 days of historical data: cpview -t puts you in history mode, then use + and - to move forward or backward in time. If you post the sar data, please also mention exactly when that day the disruption occurred.
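Continuing the July 26 example (again, these day numbers and times are placeholders for yours), the CPU history and cpview commands would look something like this:
sar -u -f /var/log/sa/sa26 -s 13:30:00 -e 15:30:00   (overall us/sy/wa breakdown for the window)
sar -P ALL -f /var/log/sa/sa26                        (per-CPU breakdown, handy for spotting a single pegged core)
cpview -t                                             (then + and - to step through the saved history)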
If the SSH session connects at the TCP level but you cannot actually log in during the disruption, that generally means all CPUs are being monopolized in the kernel by INSPECT/SecureXL (si/sy space), so the sshd daemon running up in process/user space cannot get enough CPU cycles to service your login request after the TCP/22 connection was initially created by the kernel.
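If that is what happened, it should show up in the sar CPU history as %system (and %soft/%irq, if your sysstat version supports the ALL keyword) pegged near 100% with almost no %idle during the disruption window. Something like this, again with placeholder day/times, would confirm it:
sar -u ALL -P ALL -f /var/log/sa/sa26 -s 13:30:00 -e 15:30:00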
If you can figure out a way to cause or reproduce the disruption, try running ips off beforehand and see if that helps. Note that doing this may expose your organization to attacks while IPS is disabled, and don't forget to turn it back on with ips on when done!
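A rough test sequence from the gateway CLI would look like this (the ips stat calls are just to confirm the blade's state before and after, adjust to your environment):
ips stat
ips off
(reproduce the disruption and capture the sar/cpview data)
ips on
ips stat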
New 2-day Live "Max Power" Series Course Now Available:
"Gateway Performance Optimization R81.20" at maxpowerfirewalls.com