Hello guys,
I am testing an AWS solution at the moment. I have an AWS Check Point cluster with both members reachable via the Internet. I just need to establish SIC from the MDS (CMA), which is installed as a VM on my laptop (lab).
What I did is build a new MDS which has access to the Internet, but only via the Leading interface (eth0). The MDS was able to automatically upgrade the Deployment Agent to the latest version, and I also installed the latest available Take 50 over the Internet.
I created another interface (eth1), which is supposed to be used only for private communication between my PC and the CMAs (see the clish sketch after this list):
eth0 = assigned via DHCP - 192.168.1.6/24 (can reach the Internet)
eth1 = 192.168.135.83/24 (subnet 192.168.135.0/24)
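For completeness, eth1 was added in Gaia clish roughly like this (the values are just my lab ones):

set interface eth1 ipv4-address 192.168.135.83 mask-length 24
set interface eth1 state on
save config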
So I simply created a new CMA with IP 192.168.135.84.
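I created it in SmartConsole. If I am not mistaken, on recent versions the same step can also be scripted with mgmt_cli, along these lines (the Domain and server names are just placeholders from my lab):

mgmt_cli add domain name "AWS-Domain" servers.1.name "AWS_CMA" servers.1.ip-address "192.168.135.84" servers.1.multi-domain-server "MDS1" --format json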
What I want to achieve is that SIC traffic from the AWS CMA (192.168.135.84) goes out via the Internet-facing MDS IP 192.168.1.6, and not from the CMA IP 192.168.135.84.
I tested the AWS Data Center Server and I was able to connect from the AWS CMA to AWS via the Internet.
But when I want to establish SIC, the traffic goes out from the CMA IP, not from the MDS IP.
Ping from the MDS level (192.168.1.6) towards the AWS member is working.
Ping from the CMA (192.168.135.84) towards the AWS member is NOT working.
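The difference is easy to reproduce from expert mode by forcing the source address of the ping (203.0.113.10 below just stands in for the AWS member's public IP):

ping -I 192.168.1.6 203.0.113.10      # sourced from the MDS / Leading IP - works
ping -I 192.168.135.84 203.0.113.10   # sourced from the CMA virtual IP - fails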
Of course, I tried to add a static route for this AWS host via 192.168.1.6, but it didn't help.
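In clish it was something like this, assuming 192.168.1.1 as the default gateway of the eth0 subnet (a lab value, with 203.0.113.10 again standing in for the AWS member):

set static-route 203.0.113.10/32 nexthop gateway address 192.168.1.1 on
save config

From expert mode, ip route get 203.0.113.10 can be used to double-check which route and source address the kernel would pick.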
Here is the output of ifconfig and the routing table:
I am wondering why the CMA IP was assigned to eth0 (as alias eth0:1) when it should be assigned to eth1 (eth1:1)... Maybe due to the fact that the Leading IP is set to eth0?
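This is easy to confirm from expert mode; in my case the Domain virtual IP shows up as an alias of eth0 instead of eth1:

ip addr show eth0    # 192.168.135.84 is listed here, labelled eth0:1
ip addr show eth1    # only 192.168.135.83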
There is a similar article about this situation:
Multi-Domain Management IP address is used to connect to LDAP instead of relevant Domain IP
Is there any way to force SIC to be established via the MDS IP and not via the Domain (CMA) IP?
I would imagine that during the creation of a CMA there could be an option to choose which interface should be used for ALL communication originating from the CMA.
Kind regards,
Jozko Mrkvicka