Greg_Harewood
Contributor

Does ElasticXL really require 4 interfaces?

It's all in the subject line.  The new R82 training material states a minimum of four interfaces: mgmt, sync, inside, and outside.  Is that really enforced?  Can we do a single-interface on-a-stick deployment?

It's very rare to matter in practice, but I hate teaching something without knowing the real answer.  Does it enforce an expected four interfaces?  What product feature requires that?

Thanks!

Greg

 

12 Replies
Greg_Harewood
Contributor

Clarification - single traffic interface.  So three in total: mgmt, sync, and traffic.

Chris_Atkinson
MVP Platinum CHKP

Single physical / bond carrying multiple VLANs or something different?

Certainly some of the Threat Prevention blades have considerations for an interface being defined as external.

CCSM R77/R80/ELITE
Greg_Harewood
Contributor

No, not a vlan trunk.  Single interface.  That usually happens in two cases:

1. When it's not really passing through traffic, such as when the firewall is used for identity sharing, as a proxy (rare now), or as an SMTP relay

2. In a cloud or similar environment where traffic doesn't need to be routed through the firewall to pass.  An example is Cisco ACI, where you can use the fabric to bounce traffic off a single-interface firewall: each access control entry can say accept, drop, or send to firewall.

Bob_Zimmerman
MVP Gold

On non-VSNext firewalls, you can technically get by with just two interfaces. The physical interface named Mgmt goes into the bond magg1, and the physical interface named Sync gets renamed to eth1-Sync and added to bond 1024, which is then named Sync. Sync is a bit special, but magg1 is a normal bond with a weird name.

On VSNext firewalls, VS 500 is created automatically and named "mgmt-switch", magg1 is assigned to it, and a warp link is created leading from VS 0 to mgmt-switch. You can't add a subinterface to a warp, so you must use at least three interfaces there.
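As a quick sanity check of the layout described above, Gaia clish can show which physical interfaces ended up in which bond. A sketch only - interface names vary by appliance, and the commented output shape is an assumption, not a captured transcript:

```shell
# Gaia clish, on a non-VSNext gateway set up as described (illustrative)
show bonding groups
# Expected shape of the result: magg1 listing Mgmt as its member
# interface, and the Sync bond listing eth1-Sync.
```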

Greg_Harewood
Contributor

I'm going to throw in a second question, because it's also for ElasticXL, R82, and for getting up to speed to teach it.

What's going on in the attached lab book snippet?

My understanding is that when you SSH to a firewall running SP, you land on the SMO master, the lowest-numbered member.  You can move around with m, but there is no difference in the nature of the session, and the gclish prompt tells you exactly which environment you are in - in this case, s01-01 in clish global mode.  But the instructions imply that even though you've started on member 1, the m 1 command gives you a different environment: they ask you to run m 1, add the license, and then exit.  I would understand this if the outer environment gave a whole-cluster view, but I'm not seeing any evidence of that.  Global mode is what defines that view, and it is on in both the inner and outer sessions.

What are they trying to tell us here?

emmap
MVP Gold CHKP

The person who wrote that perhaps meant that if you have a license for each SGM, tied to its sync IP, you would want to apply it locally to the right SGM using the 'member' command for every SGM other than the SMO - but they perhaps only had one gateway in their lab environment to take the screenshot. Either way, no, there's no special shell entered there; they just SSH'd back into the same SGM they were already on. It's not something you would ever actually need to do.
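If each SGM really did have its own locally attached license, the sequence the lab book hints at might look like the following. A sketch under assumptions: the member number is an example, and the cplic arguments are placeholders, not a real key:

```shell
# From the gclish session on the SMO, hop to another SGM (m is the short alias)
member 2
# Now in a session on SGM 2: attach its license locally
# (placeholders - substitute the real values from your license)
cplic put <ip> <expiration-date> <signature> <SKU/features>
exit    # return to the session on the SMO
```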

yukaia
Contributor

You cannot; the sync interface needs to be a dedicated interface, separate from your data uplinks. I have three bonds on my ElasticXL clusters: sync, magg, and data. The sync bond was a bit tricky to figure out: you have to have the dedicated sync interfaces initially cabled up and on the sync VLAN in order to build an ElasticXL cluster, because it looks for LLDP on the sync port to discover the other gateways. After the initial cluster setup, I went and added a pair of SFP+ ports per SGM to the sync bond via gclish and removed the cabling for the dedicated 1GbE sync interfaces.
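That post-setup re-cabling would be roughly the following in gclish. A sketch under assumptions: the bond ID 1024 and the eth2-0x / eth1-sync names are placeholders for whatever your cluster actually uses:

```shell
# Add the faster SFP+ ports to the existing sync bond (placeholder names)
add bonding group 1024 interface eth2-01
add bonding group 1024 interface eth2-02
# Once sync traffic is confirmed on the new members, the original 1GbE
# sync interfaces can be removed from the bond
delete bonding group 1024 interface eth1-sync
```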

 

Edit: I can't remember the specifics as to why you want a dedicated magg interface, but I know it's mostly because the management interface is only ever brought up on the SMO.

emmap
MVP Gold CHKP

The management interface is up on every SGM; it's created as part of the setup, but you can use it as the internal interface if you want to.

yukaia
Contributor

I figured it was the same behavior for the SMO management interface as on Maestro, where the management address is bound to the SMO.

emmap
MVP Gold CHKP

Nope, in EXL all inbound packets land on the SMO, and then half/two-thirds of them are pivoted over to the other active cluster member(s). This is the case on all interfaces; they all work the same way.

In Maestro the management interface is still up on all SGMs, as they will use it for outbound connections (e.g., logs to the log server), but the inbound reply packets will also hit the SMO before being pivoted over to the necessary SGM.

Greg_Harewood
Contributor

Mmm, but if I ran asg stat -i tasks, would the SMO master also necessarily be the pivot?  I suspect the magg separation, as with Maestro, remains because they want an interface with failover rather than load sharing.  You can imagine that without the MHO in front they could have collapsed back to fewer interfaces, but it's based on SP code and, you know... not important enough to change.

emmap
MVP Gold CHKP

The SMO is the pivot; if you run 'cphaprob stat', you can see that it will show ACTIVE(P) while the others show ACTIVE.

Bond mode is local to the SGM; they are not multi-chassis bonds, so bond mode and clustering are not related.
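For anyone reading along, the pivot marker shows up in the state column of that command, along these lines. Hypothetical, abbreviated output: the member IDs, load split, and addresses here are invented for illustration:

```shell
cphaprob stat
# ID   Unique Address   Assigned Load   State
# 1    192.0.2.1        34%             ACTIVE(P)   <- SMO / pivot
# 2    192.0.2.2        33%             ACTIVE
# 3    192.0.2.3        33%             ACTIVE
```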
