Blason_R
Leader

Finally, the issue is resolved, and I really appreciate the extensive help from @Andy_P. Much thanks to him.

The issue was that in my cloud, whenever I spun up VMs, each one got two NICs: one with a public IP and the other on my VPC.

When I started Docker, it came up on eth0, since that interface had the default gateway pointing to the internet. Even when I tried running it on eth1, Docker would not come up and always showed that the tunnel was not up.

It did come up properly on eth0, but since the tunnel was then established over the public IP, it could not route traffic for the internal IPs. I even set up ip_forward and added iptables masquerade rules, but in vain.

Finally here is what I did -

Let's assume I have spun up two VMs, VMA (the Connector VM) and VMB (the BIND/named VM), with this IP scheme:

VMA
eth0: 1.2.3.4
eth1: 10.10.144.4

VMB
eth0: 5.6.7.8
eth1: 10.10.144.2

So first I deleted the default route from VMA, the Connector VM (of course, this was done by connecting to it over the internal IP from VMB, so as not to lose access).
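For reference, the route change on VMA looks roughly like this; it is just a sketch based on the addresses above, so adjust the interface name to your setup:

# On VMA, from a session opened via the internal IP (otherwise you will cut yourself off)
ip route show default
sudo ip route del default dev eth0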

Then I added a default route through netplan, pointing to VMB (0.0.0.0/0 with next hop 10.10.144.2).
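A minimal netplan sketch for VMA; the file name is only an example, and the addresses follow the scheme above:

# /etc/netplan/60-vma-route.yaml (example file name)
network:
  version: 2
  ethernets:
    eth1:
      addresses: [10.10.144.4/24]
      routes:
        - to: 0.0.0.0/0
          via: 10.10.144.2

# Apply it
sudo netplan apply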

Enabled ip_forward on VMB
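That is just the usual sysctl change; the drop-in file name below is my own choice:

# On VMB: enable forwarding now and make it persistent
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system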

Added an iptables masquerade rule on VMB:

iptables -t nat -A POSTROUTING -s 10.10.144.0/24 -o eth0 -j MASQUERADE
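You can verify the rule with the command below and, if you want it to survive reboots, save it (iptables-persistent is one option on Ubuntu):

sudo iptables -t nat -L POSTROUTING -n -v
sudo apt install iptables-persistent && sudo netfilter-persistent save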

With that, my connector VM started routing through eth1 and reaching the internet via VMB. Then I deleted the Docker image and re-ran it on eth1 on VMB.
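If the container in question is the BIND/named one, publishing port 53 only on the eth1 address keeps it off eth0; the container name, image name, and tag below are placeholders, not necessarily what I used:

# On VMB: bind the published DNS ports to the internal (eth1) address only
sudo docker rm -f named 2>/dev/null
sudo docker run -d --name named \
  -p 10.10.144.2:53:53/udp \
  -p 10.10.144.2:53:53/tcp \
  some-bind9-image:latest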

That finally made everything work. Also, there is no need to deploy a rule in the network Access policy, as I did not want anyone connecting to my DNS server on port 22.

My corporate DNS server was set to 10.10.144.2.
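A quick way to confirm that the internal address is answering DNS from inside the VPC (the query name is just an example):

dig @10.10.144.2 internal.example.com +short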

 

Thanks and Regards,
Blason R
CCSA,CCSE,CCCS
