AWS requires the introduction of secondary IP addresses, with EIPs associated with them, in order to access servers behind the vSEC gateway.
Once the address is created and the EIP assigned, the traffic seems to flow through the vSEC.
My question is this:
Should the secondary IP be defined as an alias for the external interface of the vSEC?
Right.
The Elastic IP is associated with a secondary IP of the vSEC instance.
The NAT for this happens before the vSEC instance sees it.
The secondary IP of the vSEC instance only exists in AWS to get the packet to the vSEC instance (think of it like an ARP).
The vSEC instance will receive the packet (sent to secondary IP) and will process it according to the NAT rulebase.
In general, the only place an Elastic IP address will ever appear is as the IP of the relevant SmartConsole object.
The topology/interface configuration doesn't reference Elastic IPs at all, and neither does the Gaia configuration.
The secondary IP will need to be represented in the Access Policy and NAT rulebase.
It does not need to be configured in Gaia.
You do not need an interface alias in Gaia for the secondary IPs.
The secondary IP is not really for the security gateway, but for a device behind the gateway the firewall is doing NAT for.
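To make the last two points concrete, here is a minimal sketch of representing the secondary IP in the NAT rulebase through the R80.x management API. The management address, credentials, policy package and object names are placeholders for illustration, not anything from this thread:

```python
# Hedged sketch: representing the secondary IP in the Access Policy / NAT rulebase
# via the R80.x management API. Server, credentials, package and object names
# ("vsec-secondary-ip", "web-srv-internal") are hypothetical placeholders.
import requests

MGMT = "https://mgmt.example.com/web_api"

def api_call(command, payload, sid=None):
    headers = {"Content-Type": "application/json"}
    if sid:
        headers["X-chkp-sid"] = sid
    # verify=False only because a lab management server rarely has a trusted cert
    r = requests.post(f"{MGMT}/{command}", json=payload, headers=headers, verify=False)
    r.raise_for_status()
    return r.json()

# Log in and keep the session id for subsequent calls
sid = api_call("login", {"user": "admin", "password": "..."})["sid"]

# Host object for the secondary private IP that the EIP is mapped to in AWS
api_call("add-host", {"name": "vsec-secondary-ip", "ip-address": "10.0.0.50"}, sid)

# Host object for the real server sitting behind the gateway
api_call("add-host", {"name": "web-srv-internal", "ip-address": "10.0.1.10"}, sid)

# Static destination NAT: traffic arriving at the secondary IP is translated to
# the internal server; nothing about this IP is configured in Gaia
api_call("add-nat-rule", {
    "package": "Standard",
    "position": "top",
    "original-destination": "vsec-secondary-ip",
    "translated-destination": "web-srv-internal",
    "method": "static",
}, sid)

api_call("publish", {}, sid)
api_call("logout", {}, sid)
```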
So neither the actual secondary IP nor the assigned EIP should figure anywhere in the Gaia configuration or the topology of the vSEC object?
Right.
Thank you.
I'm just covering all the bases while trying to solve the mystery of the disappearing Logical Server in this thread:
https://community.checkpoint.com/thread/5748-inconsistent-behavior-of-vsec-in-aws
Right, but I'm glad to document the knowledge for others who might have the same question.
A reference architecture SK article would be good, similar to the Azure one. It needs to step through the where, when, and why of EIP use, as none of the guides, not even the AWS getting-started guide, tackles this subject.
I am not sure whether the subject deserves a separate SK or should be incorporated into an updated document describing AWS deployment scenarios.
EIPs are essentially AWS's equivalent of static NAT entries.
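To make that mapping concrete, here is a minimal boto3 sketch of the AWS-side steps discussed above: adding a secondary private IP to the gateway's external ENI and binding an EIP to it. The interface ID and addresses are hypothetical placeholders:

```python
# Hedged sketch of the AWS side: the EIP-to-secondary-private-IP association is
# the "static NAT entry" - none of it is configured in Gaia.
# The ENI id, private IP and region below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

EXTERNAL_ENI = "eni-0123456789abcdef0"   # vSEC external interface
SECONDARY_IP = "10.0.0.50"               # fronts a server behind the gateway

# 1. Add the secondary private IP to the gateway's external interface
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=EXTERNAL_ENI,
    PrivateIpAddresses=[SECONDARY_IP],
)

# 2. Allocate an Elastic IP and associate it with that secondary private IP
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=EXTERNAL_ENI,
    PrivateIpAddress=SECONDARY_IP,
)
```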
Hi Vladimir,
I have a question about a cluster environment. If I attach a secondary IP with an EIP to the active member, how will it be diverted to the other member in case of failover?
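As far as I know, the cluster solution handles this itself by moving the bindings via the AWS API on failover; the clustering SKs cover the supported setup. Purely as an illustration of why that is possible, an EIP association is just an API-level binding that can be re-pointed at the other member's interface. A minimal boto3 sketch with hypothetical IDs:

```python
# Illustration only: an EIP association is an API-level binding, so on failover
# it can be re-pointed at the surviving member's interface. The real cluster
# solution does this for you; the IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ALLOCATION_ID = "eipalloc-0123456789abcdef0"
STANDBY_ENI = "eni-0fedcba9876543210"    # external ENI of the now-active member
SECONDARY_IP = "10.0.0.50"               # same secondary private IP on that ENI

ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    NetworkInterfaceId=STANDBY_ENI,
    PrivateIpAddress=SECONDARY_IP,
    AllowReassociation=True,             # move the EIP away from the failed member
)
```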
We offer a number of CloudFormation templates for AWS that can make the initial deployment simple: AWS CloudFormation Templates
This also links to other SKs for specific configurations (autoscaling, clustering).
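If you go the template route, the stacks can also be launched programmatically. A minimal boto3 sketch, where the template URL and parameter names are hypothetical placeholders rather than the actual Check Point template:

```python
# Hedged sketch: launching a CloudFormation template programmatically.
# The template URL and parameter names are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="vsec-gateway",
    TemplateURL="https://example.com/templates/gateway.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "VPC", "ParameterValue": "vpc-0123456789abcdef0"},
        {"ParameterKey": "ExternalSubnet", "ParameterValue": "subnet-0123456789abcdef0"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # such templates typically create IAM roles
)

# Wait until the stack finishes (or rolls back, as noted in the objections below)
cfn.get_waiter("stack_create_complete").wait(StackName="vsec-gateway")
```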
I actually have a mild objection to using the templates from the get-go:
1. They are finicky and require certain prerequisites not mentioned in the deployment scenarios (the number of available EIPs in the region, the location of the management server).
2. When a stack is used, it may roll back unexpectedly with limited feedback.
3. It deprives us of the ability to step on the rake a few times, a process that is conducive to the dawning of comprehension and the formation of long-term memory :)
Nothing wrong with eventually transitioning to coded deployment, but it pays to go through the manual process a few times.
Just a personal preference.
Totally agree.
The SKs linked in the CloudFormation SK tell you how to do stuff manually if you so desire.
Automation is the only way to do a real deployment, which is why we have CloudFormation templates that make this easy.
Twiddling the nerd knobs on your own (or as you said, stepping on the rake) is definitely important.
This will provide a better understanding of what's going on "behind the scenes" and give you some ability to troubleshoot when things go belly-up (as they sometimes do).
I learned quite a lot about the vSEC Controller when I actually stood up the environment demonstrated here: https://community.checkpoint.com/message/7699-leveraging-the-r8010-api-to-automate-and-streamline-se...
Below are a few examples of deployments in AWS that I am presently running in my lab. The dual-AZ cluster with an external ELB and the ASG are not too far away, provided the client(s) are interested.