Thanks Shay. I have noticed recently that Azure must have changed something, because it is causing issues in several places, e.g. the Azure CloudGuard HA deployment.
It used to work like this: you create an HTTP rule on the frontend LB with floating IP enabled, create the NAT and access rules on the firewall, and once the backend static routes, peering, etc. are all in place, you can access the backend web server via the LB public IP:
Access rule: Source: Any -- Dest: LB public IP -- Service: http
NAT rule: Source: Any -- Dest: LB public IP -- Orig Service: http -- Translated Dest: web server internal IP
However, if I create a NAT rule like the one below, it works:
NAT rule: Source: home public IP 149.10.x.x -- Dest: LB public IP -- Orig Service: http -- Translated Source: <Active FW IP> -- Translated Dest: web server internal IP
When I run tcpdump I can see the traffic arrive on eth0 and correctly leave the internal interface eth1, but I don't see it arriving on the web server's internal interface; it just gets lost somewhere, which looks like a routing issue within Azure.
I have also tested ping and telnet from firewall members A and B to the internal web server, and both work (including telnet on port 80), so it is clearly not a configuration issue but rather an Azure internal architecture issue.
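For anyone who wants to script that same port-80 reachability check from the gateways instead of running telnet interactively, here is a minimal sketch (the web server IP below is a placeholder, not the actual address from this thread):

```python
import socket

# Placeholder address -- substitute your backend web server's internal IP.
WEB_SERVER = "10.0.2.4"

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connect, equivalent to `telnet <host> <port>`."""
    try:
        # create_connection resolves the host and completes the 3-way handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable routes
        return False

# Example: run from each cluster member to confirm the backend path works
# tcp_reachable(WEB_SERVER, 80)
```

If this returns True from both members while the same flow via the LB public IP never shows up on the server, that supports the conclusion that the drop is happening in Azure's routing rather than on the firewall.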
This is also deployed in UK West.
I have opened a case with TAC and they said it is an Azure routing issue and advised opening a ticket with Microsoft.
Maybe you can try the same.
Regards