FQDN objects have a consistently low performance impact, and they work for any protocol. The downsides are that the domain names must be fully-qualified, and that they are sensitive to the DNS resolution path: the firewall resolves the FQDN itself, so if clients use a different resolver than the firewall does, the two may not agree on the IPs for a given name. Edit: I had forgotten that FQDNs can also be used as a source in a rule; this isn't common, but it is possible.
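Here's a rough sketch of what that resolver mismatch looks like in practice (this uses the dnspython package, and the FQDN and resolver IPs are just placeholders, not anything from a real deployment):

```python
# If the firewall and the clients ask different resolvers, they can get
# different A records for the same FQDN, and the FQDN object ends up not
# covering the IPs the clients actually connect to.
import dns.resolver

FQDN = "downloads.example.com"  # hypothetical name

def resolve_with(nameserver: str, name: str) -> set[str]:
    """Return the set of A records a specific resolver hands back for a name."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    return {rr.address for rr in r.resolve(name, "A")}

firewall_view = resolve_with("10.0.0.53", FQDN)    # resolver the firewall uses
client_view = resolve_with("192.168.1.53", FQDN)   # resolver the clients use

missing = client_view - firewall_view
if missing:
    print("Clients can reach IPs the firewall never learned for this FQDN:", missing)
```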
URL Filtering can handle subdomains, and it's completely insensitive to DNS differences: it looks at the name as the session itself presents it, in the HTTP request (Host header) or the TLS negotiation (SNI). The downsides are that this only works for HTTP-based protocols (it won't work for SFTP, for example), and that it has a bigger performance impact (especially when you try to match a lot of subdomains).
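To illustrate the subdomain point, here's a simplified sketch of the kind of match URL Filtering performs; the pattern syntax is illustrative, not any particular vendor's:

```python
# The hostname comes from the session itself (Host header or SNI), not from
# DNS, so every client gets the same verdict regardless of which resolver it
# used, and a single wildcard entry covers all subdomains.
def host_matches(pattern: str, hostname: str) -> bool:
    """Match a learned hostname against an allow-list entry.

    '*.example.com' matches example.com and any subdomain of it;
    'example.com' matches only that exact host.
    """
    hostname = hostname.lower().rstrip(".")
    if pattern.startswith("*."):
        return hostname == pattern[2:] or hostname.endswith(pattern[1:])
    return hostname == pattern

assert host_matches("*.example.com", "cdn.eu.example.com")
assert not host_matches("*.example.com", "example.org")
```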
URL Filtering also has a mild security impact: connections have to be allowed fairly broadly, since the server name is typically only learned well after the SYN, once the session is already established. Some vulnerability scanners are likely to complain about this, and explaining it over and over to regulatory compliance authorities gets old. It's not necessarily a significant issue as long as you construct your policy well.
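A simplified sketch of that timing problem (the two-stage split and the port list here are just for illustration): at SYN time the firewall only has the 5-tuple, so the rule that lets the session start has to match on IP/port alone; the hostname only shows up in the first payload packet, and only then can the URL verdict be enforced, which is what the scanners end up flagging.

```python
# Stage 1: handshake -- no hostname yet, so the allow has to be broad.
# Stage 2: first application data -- hostname learned (e.g. from SNI),
#          and only now can it be matched against the allowed set.
from dataclasses import dataclass

@dataclass
class Session:
    dst_ip: str
    dst_port: int
    hostname: str | None = None  # unknown until first application data

def initial_verdict(s: Session) -> str:
    return "allow-handshake" if s.dst_port in (80, 443) else "deny"

def url_verdict(s: Session, allowed: set[str]) -> str:
    return "allow" if s.hostname in allowed else "reset"

s = Session("203.0.113.10", 443)
print(initial_verdict(s))                       # 'allow-handshake' -- name still unknown
s.hostname = "portal.example.com"               # learned from the ClientHello SNI
print(url_verdict(s, {"portal.example.com"}))   # 'allow'
```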