rjpereira
Contributor

Multi-homed EC2: How to force topology for auto-provisioning ?

Hi.

I'm using the R80.40 AMI, and my gateway has eth0 bound to a private subnet and eth1 bound to a public subnet, to which I associate a public EIP.

By default, AWS would give the instance a default route via the private subnet's gateway, which is not the desired setup, so during the cloud-init phase I also run:

 

set static-route default nexthop gateway address 10.0.0.129 off
set static-route default nexthop gateway address 10.0.0.1 on

 

to switch the default route to eth1's subnet gateway.

As far as I understand, CME auto-provisioning _always_ uses the private or public address of eth0. I want the SIC ports to be open on the private subnet, hence my choice to deploy eth0 on the private subnet and eth1 on the public subnet.

With this deployment I can access SSH and HTTPS normally from the internet.

The management server detects the new machine and starts to auto-provision it on the private address, but fails because the topology of both interfaces is set to External (which I also see in SmartConsole). If I switch eth0 to Internal in SmartConsole, everything starts to work automatically.

My question: how can I either

a) Do something so that the gateway doesn't see eth0 as External in my scenario?

or

b) Issue a CLI/bash command in cloud-init to change the topology of eth0 to Internal?

Without this, auto-provisioning doesn't work.

Thanks
13 Replies
G_W_Albrecht
Legend

I would rather contact TAC for a solution...

CCSP - CCSE / CCTE / CTPS / CCME / CCSM Elite / SMB Specialist
rjpereira
Contributor

I tried... but I'm not sure they even understand the question or are putting effort into it 😞

PhoneBoy
Admin

If I'm reading the various functions in $FWDIR/fw1/scripts/autoprovision/monitor.py correctly (on the management), it looks like when we're creating a gateway object for a new gateway, we assume the first interface (eth0) always has a topology of external.
You could potentially edit this script so it doesn't do this, which you'd have to do in a few places.
I suspect this script will get overridden whenever the CloudGuard Controller is updated, so I don't necessarily recommend this approach.
Not 100% sure it would work, either.

You're likely better off building things in a way where eth0 is always the external interface.
rjpereira
Contributor

Thanks for your analysis.

Just to confirm I understood correctly:

a) The topology "forces" eth0 to be external (from your explanation)

b) The auto-provisioning from management server will always open the SIC port of found gateways on eth0 (from my experience)

This basically means that the only way to get auto-provisioning working is to make sure management-to-gateway traffic flows over the public subnets... Not the most desirable scenario... 😞

Is there any way I can issue API calls or commands to script switching the topology from External to Internal, as I can do via the SmartConsole UI?

Thanks

PhoneBoy
Admin

You can edit gateway objects using set simple-gateway (either API or CLI).
https://sc1.checkpoint.com/documents/latest/APIs/index.html#cli/set-simple-gateway~v1.6
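As an untested sketch of what that might look like (note the interfaces list in set simple-gateway replaces the gateway's whole interface list, so every interface has to be re-stated; the gateway name, addresses, and mask lengths below are placeholders):

```shell
# Untested sketch: flip eth0's topology to internal on an existing gateway.
# "interfaces" replaces the full interface list, so both interfaces are
# re-stated here. Name, addresses, and mask lengths are placeholders.
mgmt_cli -r true set simple-gateway name "my-aws-gw" \
    interfaces.1.name eth0 interfaces.1.ipv4-address 10.0.0.140 \
    interfaces.1.ipv4-mask-length 25 interfaces.1.topology internal \
    interfaces.2.name eth1 interfaces.2.ipv4-address 10.0.1.10 \
    interfaces.2.ipv4-mask-length 24 interfaces.2.topology external
mgmt_cli -r true publish
```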
rjpereira
Contributor

Thanks for your continued help.

By chance, in the last couple of hours I came across the single page that Google indexes (https://github.com/CheckPointSW/sddc/blob/master/README.md) that describes:

 

"Optionally, in AWS and in Azure, network interface objects (ENIs/networkInterfaces) can have the following tags (in GCP these tags are specified as part of the instance tags, as x-chkp-TAGNAME-eth0--TAGVALUE):

 

x-chkp-topology: one of "external", "internal" or "specific"

 

Although it refers to SDDC and there is no mention of it in the CME documentation or anywhere else, I tried it out and it just works! I'm pretty sure your solution would work too, but this seems more explicit.
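For anyone else trying this: the tag can be applied to the interface's ENI at deploy time or afterwards with the AWS CLI (a sketch; the ENI ID below is a placeholder for your own):

```shell
# Sketch: mark eth0's ENI as internal so CME provisions it with an
# internal topology. The ENI ID is a placeholder.
aws ec2 create-tags \
    --resources eni-0123456789abcdef0 \
    --tags Key=x-chkp-topology,Value=internal
```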

 

Once again, thanks for the help.

 

PhoneBoy
Admin

This seems to be a far better solution than editing the object after the fact.
rjpereira
Contributor

After a few more struggles with CME, I leave here my experience of auto-provisioning in AWS. It is sad that this is so poorly documented...

So on top of the discussion already had here:

 

a) eth0 needs to be the public interface. The config-vpn.sh script is hard-coded to look up the public address on eth0. It could do a better job by relying on the interface's External topology, but it doesn't.

function show {
    local pub="$(run 'show configuration interface' | \
        awk '/add interface eth0 alias/{print $5}' | cut -d / -f 1)"
    local as="$(run 'show configuration as' | cut -d ' ' -f 3)"
    if echo "$as" | grep -qs \\. ; then
        as="$(expr $(echo "$as" | cut -d . -f 1) \* 65536 + \
            $(echo "$as" | cut -d . -f 2))"
    fi
    echo "$pub@$as"
}

 

Unfortunately the error is not signalled, so if you see errors with "@<asn>" in your logs, it is because the function above failed to get the alias on eth0 and simply passed on "@<asn>", which you'll also see in the __vpn__@<asn> tags on the gateway.
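As a side note, the dotted-ASN handling in that function can be checked in isolation; for a 4-byte AS written in asdot notation as <high>.<low>, it computes high * 65536 + low (the AS value here is made up for illustration):

```shell
# Convert an AS number from asdot (high.low) to asplain notation,
# reproducing the arithmetic in the show function above.
as="64512.100"
if echo "$as" | grep -qs '\.'; then
    as="$(expr $(echo "$as" | cut -d . -f 1) \* 65536 + \
        $(echo "$as" | cut -d . -f 2))"
fi
echo "$as"   # 64512 * 65536 + 100 = 4227858532
```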

b) Even if the topology says eth1 is internal and your ip-address tag on the gateway says private, SIC provisioning will still always try the eth0 address. If you really want management<->gateway traffic to go through the private subnet (where eth1 needs to be), the ip-address tag on the gateway can, in addition to the documented values private and public, also be set to the explicit IP through which you want your provisioning traffic to flow. Setting this to the private IP address of eth1 does the trick. Again, I could only find a reference to this in the SDDC source code referred to in my previous answer.
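In my case that means tagging the gateway instance itself so provisioning targets eth1's private address (a sketch; the instance ID and IP are placeholders, and the tag name is the one from the SDDC README):

```shell
# Sketch: point management->gateway provisioning traffic at eth1's
# private IP instead of the default (eth0). ID and IP are placeholders.
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=x-chkp-ip-address,Value=10.0.0.140
```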

 

I like the auto-provisioning functionality, and it is sad that it is so poorly documented, or even developed. If this were in the public domain, I guess many other contributions could sync VPCs or hosts to network objects too...

 

PhoneBoy
Admin

The logic behind always assuming eth0 is the external interface is pretty simple:

1. One interface always needs to be marked as External for the gateway to work properly.
2. eth0 is the only interface guaranteed to be there.

I agree all of this should probably be better documented.
rjpereira
Contributor

A few more undocumented features for future visitors of this thread:

If you use the ENI tag with "internal", the interface will be defined with a topology-settings of "network defined by address and mask".

If your management server is in the same subnet, everything is fine; but if it is in a different subnet, as can easily happen in a cloud environment with multiple AZs, the initial policy installation will block communication with the management server beyond recovery, as far as auto-provisioning goes.

Luckily, as per the SDDC README, the tag can also be defined as specific:<network-group>, where you define the network group beforehand as anything that covers your management servers.

On your deployment of the management platform you can include, for example:

mgmt_cli -r true add network name Intranet subnet 10.0.0.0 mask-length 8

and then you can use the tag specific:Intranet so your off-subnet management servers are not blocked.
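The AWS side of this is then a single tagging call on the ENI (a sketch; the ENI ID is a placeholder):

```shell
# Sketch: tag eth0's ENI so CME sets its topology to the pre-created
# "Intranet" network group. The ENI ID is a placeholder.
aws ec2 create-tags \
    --resources eni-0123456789abcdef0 \
    --tags Key=x-chkp-topology,Value=specific:Intranet
```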

PhoneBoy
Admin

Part of why the community exists is to raise and, at least unofficially, document issues like this.
I forwarded it to the relevant folks in R&D, who I expect will make the necessary documentation improvements.
Daniel_Goldenst
Employee Alumnus

Hi,

I'm sorry for the confusion, but the root of the problem is that the NIC interfaces were not configured in the default way.

We assume that eth0 will be defined and used as the external interface and eth1 as the internal one; this assumption is based on common practice across all networks.

Throughout our documentation we always refer to this basic NIC configuration.

 

Regards,

Daniel

rjpereira
Contributor

Thanks Daniel. Understood.

I think the point that could be better documented is that eth0 is also the interface assumed to carry the traffic between the management server and the gateway, __unless__ the IP is made explicit in the ip-address tag on the gateway, which I could only find documented in the old SDDC GitHub repository.

With eth0 expected to be on the public subnet, and management traffic expected to go over eth0, management traffic ends up being exchanged on the public subnet, which is not desired.

I believe better documenting the use of the ip-address tag would help to make it clear.

 

Cheers

 

