- For load, it probably doesn't matter much. Your environment sounds similar to mine, and I prefer to apply URL filtering and so on at the outermost perimeter because it gives me one consistent place to check when certain classes of problems arise. Human time is a vastly more expensive resource than processor time.
- FQDN objects present negligible load in general, since they amount to cached DNS lookups (see the sketch after this list). I'm not sure about updatable objects.
- I personally think zones are a fantastic way to shoot yourself in the foot. I prefer to do everything by IP (or FQDN, which is ultimately by IP), so that traffic gets the same treatment no matter which interface it arrives on.
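On the FQDN point above: the reason the load is negligible is that an FQDN object boils down to a periodically refreshed DNS cache, so the per-packet cost is just a set lookup. Here's a toy Python sketch of the idea; the class, the fixed refresh interval, and the hostname are all made up for illustration, and real firewalls refresh on the DNS TTL with vendor-specific details:

```python
import socket
import time

class FqdnObject:
    """Toy model of a firewall FQDN object: a hostname that is
    periodically re-resolved into a cached set of IP addresses.
    (Illustrative only; real firewalls refresh on the DNS TTL.)"""

    def __init__(self, hostname, refresh_secs=300):
        self.hostname = hostname
        self.refresh_secs = refresh_secs
        self._ips = set()
        self._expires = 0.0

    def ips(self):
        # Re-resolve only when the cache has gone stale. This is the
        # only expensive step, and it happens off the packet path.
        now = time.time()
        if now >= self._expires:
            infos = socket.getaddrinfo(self.hostname, None)
            self._ips = {info[4][0] for info in infos}
            self._expires = now + self.refresh_secs
        return self._ips

    def matches(self, ip):
        # Per-packet cost: one set-membership check.
        return ip in self.ips()

allow_example = FqdnObject("example.com")
print(allow_example.matches("203.0.113.10"))  # True only if example.com resolves there
```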
In my environment, I have a core transit per datacenter with a bunch of firewalls hanging off of it. There are interior firewalls, which own the networks servers live on, and transit firewalls, which sit between the core transit and everything else (for example, one transit firewall per Internet connection, one per WAN link category: to my other datacenters, to customers, to vendors, and so on). This lets the rules on any given interior firewall be written in terms of arbitrary clients reaching the services behind it, while the transit firewalls carry all the rules relevant to their particular connection. I find this really simplifies plotting out the A-to-B path between endpoints, which in turn simplifies making changes and troubleshooting when things break.
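To make the path-plotting point concrete: with everything hanging off a core transit, the firewalls on any A-to-B path fall out of a longest-prefix match on each endpoint. A minimal Python sketch under that assumption; every firewall name and network below is made up:

```python
import ipaddress

# Hypothetical layout: interior firewalls own server networks,
# transit firewalls own everything reached across the core.
FIREWALL_OWNS = {
    "fw-int-app":   ["10.10.1.0/24"],    # interior: app servers
    "fw-int-db":    ["10.10.2.0/24"],    # interior: database servers
    "fw-tr-vendor": ["172.16.0.0/12"],   # transit: vendor WAN links
    "fw-tr-inet":   ["0.0.0.0/0"],       # transit: default route (Internet)
}

def owning_firewall(ip):
    """Longest-prefix match: the firewall owning the most specific network."""
    addr = ipaddress.ip_address(ip)
    best, best_len = None, -1
    for fw, nets in FIREWALL_OWNS.items():
        for net in map(ipaddress.ip_network, nets):
            if addr in net and net.prefixlen > best_len:
                best, best_len = fw, net.prefixlen
    return best

def path(src, dst):
    """A-to-B path: the owning firewall on each side of the core transit."""
    hops = [owning_firewall(src), "core-transit", owning_firewall(dst)]
    # Collapse the degenerate case where both ends share a firewall.
    return [h for i, h in enumerate(hops) if i == 0 or h != hops[i - 1]]

print(path("10.10.1.5", "198.51.100.7"))
# -> ['fw-int-app', 'core-transit', 'fw-tr-inet']
```

The same lookup tells you where a given rule belongs (the firewall owning the relevant prefix), which is the property that keeps changes and troubleshooting simple.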
The doctrine of blocking things as close to the source as possible only really matters in extremely resource-constrained environments. Computers are fast, and networks are no longer as oversubscribed as they once were. With the exception of rare edge cases like a firewall on the ISS, you can afford to block stuff where it makes your life easiest rather than where it makes the computer's life easiest.