VooDooChris
Contributor

URL Filtering Troubleshooting Help

Hi,

We’re currently looking to set up a set of rules that allows a server to access a very specific set of URLs. Web Application and Web category filtering aren’t an option as they’re too broad a control, and neither is just using a Security blade rule, as the URLs are cloud hosted and so the underlying IP addresses are likely to be fluid.

By (simplified) example, we want to limit the server to be able to access a.site.com & b.site.com but not c.site.com, etc. We’ve set up a Security blade rule for the server to allow https out to “all” and have set up an Application blade URL filtering rule to allow the server access to a.site.com & b.site.com. Directly underneath that in the ruleset we’ve set up a block “all” rule for the server which I believe should result in the desired control.

The problem is that some of the traffic we expect to get through the allow rule gets blocked by the block rule, and when we look at the firewall log entry for the blocked traffic there appears to be no rationale given for the block, e.g. it would be great to see something like “Blocked due to requested URL = c.site.com” or “Blocked due to cert CN = z.site.com”. We can see from examination of the server’s DNS client logs that sometimes when the traffic is blocked it’s because the server is being redirected to other URLs, but it would be good to understand the exact reason the firewall has blocked the traffic and not have to deduce it from other sources of evidence. We’ve tried setting up extended logging on the rules but this hasn’t given us any more detailed information.

So, my question is, does anyone know if it’s possible to see more detailed information for traffic blocked by the application blade?

Thanks

Chris

30 Replies
VooDooChris
Contributor

Sorry, should have mentioned, we're on R80.30.

PhoneBoy
Admin

On the drop rule, what precisely does it say in the track field?
If you want to see the precise URL accessed it should be Extended Logging.

VooDooChris
Contributor

I can confirm the rule is set to "Extended Log", but I can't see a "Track" field on the "Details" tab when I double-click the log entry in SmartConsole.  There is a "Description" field that states "https Traffic Dropped from x.x.x.x to y.y.y.y".

 

I fear I might be looking in the wrong place, or don't have something enabled that's required?

PhoneBoy
Admin

I meant the Track field in the rulebase.
Is the "Service/Application" in the drop rule any or something generic like Web Browsing?
I would have a separate drop rule for non-web traffic. 

VooDooChris
Contributor

Ah, sorry.  In the drop rule, the Track field is configured with "Extended Log" and "Accounting".  The allow rule just above it is configured in the same way.

The Services & Applications field on the block rule is configured with "Any", and on the Allow rule just above it there is a custom object that lists the URLs and services we want to allow through the firewall.

All traffic generated by the source server should be HTTP/HTTPS.

Thanks

Chris

PhoneBoy
Admin

"Any" doesn't force traffic identification, but I believe "Web Browsing" will.

VooDooChris
Contributor

To test out the “Web Browsing” category, we set up the following rules in the app blade:

> Rule 4 – Src: Test server  Dst: Internet (not RFC1918)  Srvs&App: Custom URL list and TCP 80/443/8080  Act: ALLOW  Log: ExtLog & Acc

> Rule 5 – Src: Test server  Dst: Internet (not RFC1918)  Srvs&App: Web Browsing  Act: DROP  Log: ExtLog & Acc

> Rule 6 – Src: Test server  Dst: Internet (not RFC1918)  Srvs&App: Any  Act: DROP  Log: Log & Acc

Rule 4 is allowing some of the traffic (as it always has been) but some is still hitting Rule 6.  When I examine the log for the rule 6 DROP, it states that the traffic was “Service: https (TCP/443)” and Description “https Traffic Dropped from x.x.x.x to y.y.y.y”.

 

As the FW is recognising that the traffic dropped by rule 6 is TCP/443, I don’t understand why rule 5 isn’t doing the drop instead of rule 6?

PhoneBoy
Admin

Any reason you can't set Rule 6 to Extended Log as well?

VooDooChris
Contributor

Not that I can think of.  I'll ask for it to be changed.  That said, it's the same rule that wasn't showing the failed URL earlier in the week anyway, so is it likely to show any more detail now?

The thing I need to understand is why rule 5 isn't blocking the TCP/443 traffic.

Tobias_Moritz
Advisor

Just curious... Are you using the HTTPS Inspection blade (and your inspection policy doesn't say bypass for this blocked traffic), or are you just leveraging HTTPS Inspection Lite (by enabling "Categorize HTTPS websites" in the URL Filtering blade settings)?

It would be interesting to see whether you can catch the drops (and see new details) with a new block rule, created between rule 5 and rule 6, looking like this:

> Rule x – Src: Test server Dst: Internet (not RFC1918) Srvs&App: Web_Browsing_Encrypted Act: DROP Log: ExtLog & Acc

For the object Web_Browsing_Encrypted, I would create it this way:

Type: TCP

Protocol: HTTPS

Port: Standard (443)

Advanced -> Protocol signature: Enabled

VooDooChris
Contributor

We're using the "Categorize HTTPS websites" option.  No HTTPS Inspection at the moment.

We've set up the additional rule as suggested.  Policy will get pushed after hours, so will post results after the weekend.

VooDooChris
Contributor

Update: We created the rule as suggested, so now this section of our ruleset looks like this:

> Rule 4 – Src: Test server Dst: Internet (not RFC1918) Srvs&App: Custom URL list and TCP 80/443/8080 Act: ALLOW Log: ExtLog & Acc
> Rule 5 – Src: Test server Dst: Internet (not RFC1918) Srvs&App: Web Browsing Act: DROP Log: ExtLog & Acc
> Rule 6 – Src: Test server Dst: Internet (not RFC1918) Srvs&App: Web Browsing Encrypted Act: DROP Log: ExtLog & Acc
> Rule 7 – Src: Test server Dst: Internet (not RFC1918) Srvs&App: Any Act: DROP Log: ExtLog & Acc

The new rule 6 is now blocking the TCP/443 traffic that was slipping past rules 4 & 5. It’s also started showing categorisation for the traffic, i.e. the Application Name that Check Point assigns to the traffic, and the Primary Category too, which is great, but sadly it’s still not showing the URL that the server attempted to access when it was blocked, or a reason why the firewall blocked it.


Is it actually possible to get this information out of the FW?  Maybe it's not available in SmartConsole and I need to drop to Gaia to get this level of information?

Tobias_Moritz
Advisor

Great that you're now getting hits on the new rule 🙂

While I can't answer your question about how to get more detail from these deny log entries in R80.30 (I hope someone else jumps in here), I want to point you to sk64521, where you can learn that the HTTPS Inspection Lite feature you are using (HTTPS Inspection blade disabled but "Categorize HTTPS websites" enabled in the URL Filtering blade settings) really needs certificates that are trusted from the perspective of the gateway. Do you have this in mind? If not, please make sure that your Trusted Domain List is up to date and contains your private CAs, if you are using any. I have seen environments where the Trusted Domain List was not updated automatically because no gateway had ever had the HTTPS Inspection blade installed, but it is needed for HTTPS Inspection Lite to work properly. You may also need to restart wstld after updating the Trusted Domain List, even if you did a policy install and even though the sk says it isn't needed; I had a support ticket where this was necessary.

I also want to point you to sk159872. Maybe you can only get what you want after upgrading to R80.40 (appi_urlf_ssl_certificate_validation_log_enabled).

PhoneBoy
Admin
Admin

@Meital_Natanson any thoughts on this one?

VooDooChris
Contributor

I took a look at the two SK articles and the comments on trusted CAs.  It doesn’t look like our list of trusted CAs is being updated (and hasn’t been for a long time).  Looking at the certificate returned by the IP address in the block log entry, the root certificate of the URL is newer than the newest root cert we have in our store (so it can't possibly be in there).  We’re going to add just that root cert to see if the traffic gets allowed.  I think we'll look to enable automatic updating in the coming weeks too.

I’m not clear on whether we need to add the intermediate signing cert to the Check Point’s CA list too.  I’m thinking that if the Check Point is checking the whole cert chain, it won’t be able to do so without the intermediate cert?  Or does it only check the root (which wouldn't make sense to me, in case of intermediate cert revocation)?

I’m still keen to find somewhere in Check Point that explains in more detail why traffic was blocked by the Application blade, but this might give us the functionality we require in the meantime.

Thanks for everyone’s help on this so far.  Very much appreciated and I’m learning lots.

Chris

VooDooChris
Contributor

"I’m thinking without the intermediate cert, if the Checkpoint is checking the whole cert chain it won’t be able to do it without the intermediate cert?" - thinking about this a bit more, I think the Check Point would expect the endpoint to supply the intermediate certificate along with the URL's certificate, thus allowing the chain to be validated.  If this works as I think, then only the root cert will be needed on the Check Point.

Tobias_Moritz
Advisor

Regarding the update of the Trusted CA List: you can check $CPDIR/database/downloads/TRUSTED_CA/2.0/ on your management server's file system. I had a case where the updates were downloaded successfully to the management server's file system but not imported into the Trusted CA List. The current revision should be 2.7, so you may find this file on your host:

$CPDIR/database/downloads/TRUSTED_CA/2.0/2.7/updateFile.zip
MD5-SUM: 194ed6e321927c55e77ee9648548bb30

If this is the case, you can download it to your workstation and import it in SmartDashboard.

 

Regarding the intermediate CA cert: you are right, it should work this way. Every server offering a TLS endpoint should provide its intermediate(s) along with its server cert, and should not rely on clients knowing them.
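A quick offline sanity check along these lines: the numbered "s:" lines in the s_client "Certificate chain" section count the certs the server actually sent. This is only a sketch, reusing the chain lines captured elsewhere in this thread in a scratch file; against a live host, `openssl s_client -showcerts -connect <host>:443` prints the full list of certificates it presents.

```shell
# Sketch: count certificates presented by the server, using the
# "Certificate chain" lines from an s_client capture.
# 2 or more means the intermediate came along with the leaf cert;
# exactly 1 means the server sent only its own cert.
cat > chain.txt <<'EOF'
0 s:CN = *.blob.core.windows.net
  i:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 01
1 s:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 01
  i:C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root
EOF
grep -cE '^[0-9]+ s:' chain.txt
```

Here the count is 2, so the Microsoft intermediate is being served along with the leaf, and only the Baltimore root would need to be in the gateway's trusted list.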

VooDooChris
Contributor

Update: We've updated the CA list on our Check Point.  It hasn't permitted the traffic from the server to flow, but it's something we need to start doing anyway.  Not sure how often Check Point updates the CA list?

I also found out that while the SGs are all R80.30, the SMS is R80.20 (not sure why; I'll be asking).  I think I remember reading that you can set the CA list to update automatically in R80.30, but not in R80.20.  We've set up a process to check for updates once a month now anyway.

So the CA change hasn't given us visibility of the URL that the client is trying to access through the Check Point in SmartConsole or SmartView, but something I have found is the openssl s_client command.  If you take the IP address detailed in the blocked traffic and run openssl s_client against it, and it has connectivity, it returns the certificate details:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

C:\OpenSSL\bin>openssl s_client -connect 40.68.232.16:443
CONNECTED(00000234)
Can't use SSL_get_servername
depth=1 C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 01
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = *.blob.core.windows.net
verify return:1
---
Certificate chain
0 s:CN = *.blob.core.windows.net
i:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 01
1 s:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 01
i:C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root
---
Server certificate
-----BEGIN CERTIFICATE-----
MIINtDCCC5ygAwIBAgITawADvAdmouS5OGrIsAAAAAO8BzANBgkqhkiG9w0BAQsF
ADBPMQswCQYDVQQ.......

< Truncated to aid visibility > 

...........x0eVaQiIS1MWC+Jtve6K6Oz43KBXLLTwQ6z
t5zBnzoYOXc=
-----END CERTIFICATE-----

< Truncated to aid visibility > 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

 

....and if you copy the text between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- (inclusive) into a text file and run CertUtil -v -dump on it, it will display all of the returned certificate's information, even the SANs detailed in the certificate:

 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

X509 Certificate:
Version: 3
Serial Number: 6b0003bc0766a2e4b9386ac8b000000003bc07
0000 07 bc 03 00 00 00 b0 c8 6a 38 b9 e4 a2 66 07 bc
0010 03 00 6b
Signature Algorithm:
Algorithm ObjectId: 1.2.840.113549.1.1.11 sha256RSA
Algorithm Parameters:
05 00
Issuer:
CN=Microsoft RSA TLS CA 01
O=Microsoft Corporation
C=US

< Truncated to aid visibility > 

0003: b0

2.5.29.17: Flags = 0, Length = 631
Subject Alternative Name
DNS Name=*.blob.core.windows.net
DNS Name=*.am5prdstr02a.store.core.windows.net
DNS Name=*.blob.storage.azure.net
DNS Name=*.z1.blob.storage.azure.net
DNS Name=*.z2.blob.storage.azure.net
DNS Name=*.z3.blob.storage.azure.net
DNS Name=*.z4.blob.storage.azure.net
DNS Name=*.z5.blob.storage.azure.net

< Truncated to aid visibility > 

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

 

So while not ideal (ideal would be to see this info easily in one of the SMART consoles), we can now at least see a list of potential destination URLs that the client is trying to get to while being blocked, but I still don't have visibility of why the Check Point has blocked the traffic.
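As an aside, openssl itself can do the SAN dump without CertUtil. A hedged sketch (needs OpenSSL 1.1.1+ for `-addext`/`-ext`); since the real PEM above is truncated, it generates a throwaway self-signed cert carrying two of the SAN entries from the dump as a stand-in:

```shell
# Stand-in certificate: self-signed, with two SANs from the dump above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo" -days 1 -out demo-cert.pem 2>/dev/null \
  -addext "subjectAltName=DNS:*.blob.core.windows.net,DNS:*.blob.storage.azure.net"

# Equivalent of "CertUtil -v -dump" for just the SAN section:
openssl x509 -in demo-cert.pem -noout -ext subjectAltName
```

On a real capture, point `-in` at the file holding the BEGIN/END CERTIFICATE block copied from the s_client output.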

the_rock
Leader

Hi Chris,

 

Sorry, I did not get to read EVERYTHING in this thread, but I can tell you what I did with one of my customers. We were having lots of challenges with TAC giving good suggestions on the best way to set up URL filtering rules, so after testing this for 2 months we simply decided to set up another ordered layer with an any-any allow at the bottom (as that's the sk recommendation; I can't recall the sk number now, but if you search for the best recommendation for app control, it will pop up). It seems that if you do a whitelist for the URL layer it's problematic, as the firewall spends more time processing those packets and can't do categorization correctly.

Now, as far as your initial inquiry about the logs, the easiest way I found to get what you need is simply to do a search in the logs for the URL Filtering blade; it will give you everything you need. Or you can even search by a specific category under the "other" option.

 

Hope that helps.

 

Andy

Tobias_Moritz
Advisor

I thought you wanted to see which FQDNs your clients are requesting.

The test with openssl s_client only shows which FQDNs that host is offering. But if that also helps you, then sure, you can check it that way.

By the way, on a Linux shell (not sure about the Windows shells) you can use this one-liner to see both relevant cert attributes (Subject CN and SAN):

echo | openssl s_client -connect 40.68.232.16:443 | openssl x509 -noout -subject -ext subjectAltName

For a complete cert print-out, use this:

echo | openssl s_client -connect 40.68.232.16:443 | openssl x509 -noout -text

VooDooChris
Contributor

Thanks for the comments.  

Tobias_Moritz: Our ultimate goal is to be able to see the FQDN that the clients are trying to access; however, being able to see the cert information from the IP address the client is trying to reach is more than we had before and is helpful in troubleshooting (e.g. confirming that the application’s required URL list is valid).

Thank you for the more succinct OpenSSL command.  I suspected someone on here would know a better way to do this. 🙂

Ottawacanada150: The issue is we’re not seeing the FQDN the client is trying to access in the URL Filtering Blade logs.  We just get an IP address.

 

the_rock
Leader

@Chris... sorry, but the only logical thing I can think of is that maybe resolving is not working right. I've never had that issue before, so it's hard to say. Do you get the same thing if you try filtering by the URL Filtering blade too? Message me directly, we can do a remote session; I'm curious to see what the reason might be.

Andy

VooDooChris
Contributor

Hi Andy, thank you for the offer, it's really appreciated, but company policy states we're only allowed to do remote sessions with third parties we've signed an NDA with.  

We've actually engaged our resident Check Point support company to get a day of professional services to examine the problems we're having with URL filtering.  I'll write up a summary and post it once we have the conclusions of that engagement.

Thanks for everyone's help with this.  Hopefully we'll have the answer soon.

Chris

the_rock
Leader

No worries, I know how it is : ). Hope you figure it out!

Andy

Jeff_Williams1
Participant

While I can't answer your question about more detailed logs (I've had the same question myself), I use Categorize HTTPS websites and not full HTTPS Inspection. Because I have the same requirements with subdomains, etc. as you do, I found a way around the firewalls not always permitting traffic when you think they should, without having to do full HTTPS Inspection: use the Application Control Signature Tool (sk103051) and create a custom application yourself. Using that, the firewall does not have to care about the certificate; you can match on the SNI request from the client (as long as the firewall sees the SNI request in the clear, which it should unless the connection is using TLS 1.3). The only thing you have to be careful of when doing it this way is that the match is a string match, so if the domain being accessed contains the text you entered, it will be allowed. This could let users reach a site that you did not intend. Although using SNI is mentioned as not supported or not fully supported in certain versions, I've used it on R77.x, R76SP.x, and R80.30 and it has worked well.
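The string-match caveat Jeff describes is easy to demonstrate. A small sketch (the hostnames are made up) of a naive substring match on the SNI, showing the behaviour to watch out for:

```shell
# Naive substring match on the SNI, as cautioned above: every name that
# merely contains the pattern is allowed, including look-alike domains.
pattern="site.com"
for sni in a.site.com b.site.com notmysite.com site.com.attacker.example; do
  case "$sni" in
    *"$pattern"*) echo "ALLOW $sni" ;;
    *)            echo "DROP  $sni" ;;
  esac
done
```

All four names print ALLOW, including notmysite.com and site.com.attacker.example, which is exactly the unintended reach to anchor the custom application's match strings against.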

VooDooChris
Contributor

Thanks for this Jeff.  I'll take a look at that tool.

Chris

D_W
Collaborator

That's an interesting solution, but I would prefer that Check Point itself provide the correct information about the allowed/blocked destination URI using rDNS and SNI.

PhoneBoy
Admin

SNI is supported from R80.30.
In R80.30 in particular, depending on JHF level, you will need to enable HTTPS Inspection (even with just an any any bypass rule).

VooDooChris
Contributor

Thanks for the tip.  I'll add it to our deployment plan.

Chris
