From an availability perspective, dual-site is a really, really, really bad idea unless literally every other team in your organization has perfect availability design. I would not use it for a new deployment, and I would move any existing deployment away from it as quickly as possible. The reason relates to why option 2 is also a bad idea: failure domains.
For a long time, I had a datacenter with a single VSX cluster as its core. It was built in 2011 when R67 was new, so that's what it was built with. That cluster handled all inter-network traffic, both within the datacenter and between the datacenter and any Internet or WAN links, so the failure domain was the whole datacenter. Risky changes needed the approval of every team that had something there, and since the various teams could never agree on a window, I just never got to do any maintenance on that cluster. It was still running R67 when the datacenter was shut down in 2022.
Now imagine both of your datacenters depending on a single cluster of devices that have to be upgraded together. There's no guarantee of isolation: a problem in one can affect the other, too. With a separate orchestrator cluster at each site, most potential failures are contained to that site. You can upgrade just one site to R81.20 and wait a month to see whether you run into new issues before touching the other.
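To put rough numbers on the failure-domain argument, here's a back-of-the-envelope sketch (the failure probability is a made-up illustrative value, not a measurement): with one shared cluster, every cluster-wide event is a both-sites outage, while independent per-site clusters only lose both sites when both happen to fail at once.

```python
# Back-of-the-envelope failure-domain math. The probability below is
# an illustrative assumption, not a measured value.

# Chance that a given orchestrator cluster suffers a cluster-wide
# failure (bad upgrade, split-brain, config error) in some period.
p_cluster_failure = 0.02

# Dual-site on ONE shared cluster: any cluster-wide failure takes
# down both sites, because the cluster IS the shared failure domain.
p_shared = p_cluster_failure

# Separate cluster per site: both sites are down only if both
# independent clusters fail in the same period.
p_independent = p_cluster_failure ** 2

print(f"shared cluster:       {p_shared:.4%} chance both sites go down")
print(f"independent clusters: {p_independent:.4%} chance both sites go down")
```

The absolute numbers are invented, but the structure isn't: the shared design turns every cluster-level risk into an everyone-at-once risk, which is exactly the R67-forever trap described above.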
Separately, dual-site deployments imply VLANs stretched between the two sites. That's also a bad idea for new deployments, but for a different reason: it leads to performance pathologies that can be extremely hard to debug. A lot of software assumes sub-millisecond latency when talking to anything on the same network block.
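To make that concrete, here's a minimal sketch (the round-trip count and latencies are assumed values for illustration): software that is chatty because it believes its peer is local turns a few extra milliseconds of round-trip time into a full second of waiting per operation.

```python
# Illustrative (assumed) numbers: an app making many serial round
# trips per operation to a peer it believes is on the local LAN.
round_trips_per_operation = 200   # e.g. a chatty database or file protocol
lan_rtt_ms = 0.2                  # same-site round-trip time
stretched_rtt_ms = 5.0            # same subnet, but at the other site

for label, rtt_ms in [("same site", lan_rtt_ms), ("stretched VLAN", stretched_rtt_ms)]:
    total_ms = round_trips_per_operation * rtt_ms
    print(f"{label}: {total_ms:.0f} ms per operation spent waiting on the network")
```

The debugging pain comes from the fact that nothing in the configuration changed: a server that moves to the other site keeps its IP and its subnet, so it still looks local from every angle, it's just 25x slower.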