Hi Mates,
I have deployed the VMSS solution with custom blades, and everything looks fine from a management, gateway, and policy perspective.
On the day of the actual cutover of traffic from the traditional cluster to the VMSS load balancer, it failed really badly 🙂 .
The traffic we are testing is East-West, with no NAT needed.
On investigation, I could see the initial traffic reaching the destination server and the response coming back to my VMSS gateway... but for some reason the response/reply is not reaching the source machine.
(And I know it's not an LB persistence issue, since after adding persistence on client IP & port, all the traffic is passing through one gateway.)
I have checked all the routing, NSGs, etc. --- everything is pretty much the same, since we are just changing the routes to point to the new VMSS LB instead of the old cluster LB...
I can see that eth0 on the VMSS instance has IP forwarding set to false in Azure, and eth1 does not have the default NSG attached... Is this correct??
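For what it's worth, here is how I checked the IP forwarding state on the scale-set NICs and (as a test) enabled it in the VMSS model from the Azure CLI. The resource group `rg-vmss` and scale set name `cp-vmss` are placeholders for my environment, and the index `[0]` assumes eth0 is the first NIC configuration -- treat this as a diagnostic sketch, not a recommended fix, since the Check Point template normally sets this itself:

```shell
# List every NIC in the scale set with its IP-forwarding flag
az vmss nic list \
  --resource-group rg-vmss \
  --vmss-name cp-vmss \
  --query "[].{nic:name, ipForwarding:enableIPForwarding}" \
  --output table

# Enable IP forwarding on the first NIC configuration (eth0) in the VMSS model
az vmss update \
  --resource-group rg-vmss \
  --name cp-vmss \
  --set "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIpForwarding=true"

# Roll the model change out to all existing instances
az vmss update-instances \
  --resource-group rg-vmss \
  --name cp-vmss \
  --instance-ids "*"
```

Since the gateway is forwarding East-West traffic it did not originate, my understanding is that Azure drops transit packets on any NIC where this flag is false, which would match the symptom of replies dying at the gateway.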
Has anyone faced the same issue?? Do let me know if I am missing something in the VMSS deployment.
Tx,
Abhishek