Bit of a philosophical question.
There are many ways to filter your internal traffic going out to the internet, e.g.:
- IP based filtering
- good old explicit proxy, with or without TLS interception
- transparent proxy / gateway with TLS interception
- or a combination of the above
All have pros and cons. IP-based filtering is the least efficient. An explicit proxy is often a burden to automation, or impossible to apply in certain instances. The transparent option with TLS interception is less "visible" to the client itself, but certificate issues keep causing headaches, and interception is resource-intensive and expensive.
One option to avoid these challenges is "HTTPS lite": categorization of HTTPS sites without HTTPS inspection. Clients don't need a proxy configured, and there is no man-in-the-middle messing with certificates.
But of course the downside is the information available in the logs: you don't get full URLs, only service names worked out from the TLS handshake, as seen below. That limits your ability to determine all the risks associated with a connection.
Would you accept this as "sufficient information" logging in your organisation? As highlighted above, classification is not 100% accurate. Is that OK? 🙂
Just wondering how you do it 🙂