Hello, Mates.
In production environments, and based on your experience, is it better to use "Customize Application/Site" objects or "Domain" objects for URL filtering?
I have an environment with the Firewall, Application Control, and URL Filtering blades active.
I want to allow one server with IP 172.17.20.20 to access domains such as "https://hsploit.com" and "https://quay.io".
Which is the more viable option? As I understand it, either approach lets me restrict that server's access to only those URLs.
Is there a hardware performance consideration when using one or the other method?
Thanks for your comments.
FQDN objects have a universally low performance impact, and they work for any protocol. The downside is the domain names must be fully-qualified. They are also sensitive to the DNS resolution path (if clients use different resolvers from the firewalls, they may not agree on the IPs for a given FQDN). Edit: I had forgotten that FQDNs can also be used as a source in a rule; this isn't common, but it is possible.
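To illustrate that DNS caveat, here is a minimal sketch (assuming the dnspython package) that resolves the same FQDN against two different resolvers and compares the answers. The resolver IPs are placeholders; substitute whichever resolvers your clients and your gateways actually use.

```python
# pip install dnspython
import dns.resolver

def resolve_a(name, nameserver):
    """Return the sorted A records for `name` as seen by a specific resolver."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    return sorted(rr.address for rr in r.resolve(name, "A"))

# Placeholder resolvers: the one your clients use vs. the one the gateway uses.
client_view = resolve_a("quay.io", "10.0.0.53")  # hypothetical internal resolver
gateway_view = resolve_a("quay.io", "8.8.8.8")   # hypothetical gateway resolver

print("clients see :", client_view)
print("gateway sees:", gateway_view)
print("views agree :", client_view == gateway_view)
```

If the two views disagree (split-horizon DNS, geo-based CDN answers, short TTLs), client traffic can go to an IP the gateway never learned for that FQDN, which is exactly when a domain-object rule stops matching intermittently.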
URL Filtering can deal with subdomains, and it’s totally insensitive to DNS differences. It actually looks at the name as learned via HTTP calls or the TLS negotiation. The downside is this only works for HTTP-based protocols (won’t work for SFTP, for example), and it has a bigger performance impact (especially when you try to match a bunch of subdomains).
URL Filtering also has a mild security impact in that connections have to be allowed pretty broadly, since the server name is typically learned well after the SYN. Some vulnerability scanners are likely to complain about this. Explaining it over and over to regulatory compliance authorities gets old. It’s not necessarily a significant issue as long as you construct your policy well.
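On the "learned well after the SYN" point: for HTTPS, the name URL Filtering matches is the SNI carried in cleartext in the TLS ClientHello, which only appears after the TCP handshake has completed. This standard-library sketch (nothing Check Point-specific, just an illustration of the protocol) builds a ClientHello locally and pulls the server name back out of it, to show where that name lives:

```python
import ssl
import struct

def client_hello_bytes(hostname):
    """Generate a TLS ClientHello locally (no network) using memory BIOs."""
    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    conn = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    try:
        conn.do_handshake()
    except ssl.SSLWantReadError:
        pass  # expected: we never feed it a ServerHello
    return outgoing.read()

def extract_sni(hello):
    """Minimal parse of the server_name extension from a ClientHello record."""
    pos = 5 + 4            # TLS record header (5) + handshake header (4)
    pos += 2 + 32          # client_version + random
    pos += 1 + hello[pos]  # session_id
    pos += 2 + struct.unpack("!H", hello[pos:pos + 2])[0]  # cipher_suites
    pos += 1 + hello[pos]  # compression_methods
    ext_total = struct.unpack("!H", hello[pos:pos + 2])[0]
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack("!HH", hello[pos:pos + 4])
        pos += 4
        if ext_type == 0:  # server_name extension
            # layout: list_length(2) name_type(1) name_length(2) name
            name_len = struct.unpack("!H", hello[pos + 3:pos + 5])[0]
            return hello[pos + 5:pos + 5 + name_len].decode()
        pos += ext_len
    return None

hello = client_hello_bytes("quay.io")
print("SNI sent in the clear:", extract_sni(hello))  # -> quay.io
```

Because none of that is visible until after the three-way handshake and the start of the TLS exchange, the gateway has to let the connection get that far before it can make a URL Filtering verdict, which is the behavior the vulnerability scanners flag.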
This is just me, but personally I always recommend using domain objects when the domain in question is fully qualified, or when you want to allow multiple services. Otherwise a custom site is fine, but you need URL Filtering and Application Control enabled on the layer itself as well as the blades, of course.
Andy
Buddy,
What do you mean by "Fully Qualified Domain"?
Is it important to know how to recognize such a domain? For example, with a URL like https://hsploit.com, how would you determine that the domain is "fully qualified"?
I understand this is essential for using Domain objects.
Also, when working with these objects, is it always preferable to select the FQDN checkbox?
What I'm still unsure about is whether the Domain object is reliable: some pages sometimes load and sometimes don't, and I suspect that is related to a DNS issue (which is what Domain objects rely on).
This link explains it PERFECTLY.
Andy
What Is a Fully Qualified Domain Name (FQDN)? | Networksolutions.com
It's actually slightly wrong on a point which is relevant here. Ultimately, a fully-qualified domain name has an A, AAAA, or CNAME record in DNS. It doesn't matter how many domain components you have. The DNS root isn't ever fully-qualified. I'm actually not sure if a TLD like "com." could be fully-qualified (I haven't ever seen one which is, but I don't know of anything which would prohibit it from ever being set up that way). "networksolutions.com." has two A records, so it is fully-qualified.
An FQDN must be complete and exact. It can't include every subdomain under it. For example, "time.windows.com." and "windows.com." are fully-qualified, while "*.windows.com." is not.
"hsploit.com." is a fully-qualified name. Building a domain object with that FQDN will match traffic to (or from) the single IP address it currently resolves to, but won't match traffic to (or from) subdomains under it.