The best you can do is run orch_stat and look at the "MHO Sync LAGs Status" section of the output, but that only reflects whether the two MHOs can see each other over the sync interface for the purpose of keeping their configurations synchronized when a change is made on the peer MHO. That's it.
There is no equivalent of cphaprob state available for the MHOs because they are not actually clustering with each other the way ClusterXL gateways do. The "failover" of traffic is not directly coordinated by the Orchestrators themselves, but by the bonding protocols (such as LACP) in use on the MHOs and the surrounding network components.
If one of the MHOs completely fails, or an uplink/downlink interface is de-provisioned from a Security Group, link integrity (the green light) is immediately dropped on that interface, and the uplinked device sees that member of the bond as failed. All traffic is sent to the surviving member of the bond, which leads to the MHO that is still working. The same process happens on the downlink interfaces to the security gateways when an MHO or interface fails. When an MHO boots up, link integrity is not restored on its uplinks/downlinks until the MHO has fully initialized and all Security Groups have been provisioned and are ready to handle traffic.
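This failover behavior can be sketched with a toy model. Note this is purely illustrative Python, not Check Point code: the member names, the flow tuple, and the hash-based selection are all invented for the example. The point it demonstrates is that once a bond member loses link integrity, the sending device simply stops considering it, so every flow lands on the surviving member with no cluster protocol involved.

```python
# Toy model of how an LACP-style bond reacts to a member losing link
# integrity. Member names and flow tuples are hypothetical.

def pick_member(flow, members):
    """Hash a flow tuple onto one of the currently-up bond members."""
    up = [m for m in members if m["up"]]
    return up[hash(flow) % len(up)]["name"]

bond = [{"name": "eth1-to-MHO1", "up": True},
        {"name": "eth2-to-MHO2", "up": True}]

flow = ("10.1.1.5", "192.0.2.9", 33010, 443)
print(pick_member(flow, bond))   # either member while both are up

bond[0]["up"] = False            # MHO1 fails: link light drops on eth1
print(pick_member(flow, bond))   # now always "eth2-to-MHO2"
```

The key design point mirrored here: failure detection is just loss of link, so there is no heartbeat or state negotiation between the MHOs to get wrong.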
The MHOs do not directly coordinate who will take which traffic, track connection state, perform IP routing, or even try to keep the overall load balanced between them. Once again, the bonding protocol's distribution algorithm (hopefully 802.3ad with a Layer 3+4 transmit hash policy) on the MHOs and on the devices connected via the uplinks/downlinks handles all of that.
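To see why Layer 3+4 hashing matters for balance, here is a minimal sketch. This is not the actual 802.3ad hardware hash; it is an assumed stand-in (SHA-256 over the 4-tuple) chosen only to illustrate the idea: because the hash keys on TCP/UDP ports as well as IP addresses, many flows between the same two hosts still spread across both bond members, whereas a Layer 2 or Layer 3 only policy would pin them all to one member.

```python
# Illustrative Layer 3+4 style distribution (not the real 802.3ad hash):
# the bond member is chosen from src/dst IP *and* src/dst port.
import hashlib

def l3l4_member(src_ip, dst_ip, src_port, dst_port, n_members=2):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % n_members

# 100 flows between the SAME host pair, differing only by source port:
counts = [0, 0]
for sport in range(20000, 20100):
    counts[l3l4_member("10.1.1.5", "192.0.2.9", sport, 443)] += 1
print(counts)  # roughly even split across the two members
```

With a Layer 2 policy the key would be just the MAC pair, so the same experiment would produce something like [100, 0]: every flow on one member, the other idle.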
Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com