CheckMates > Products > Quantum > Management > R80.20 - LACP Interface to VPC
R80.20 - LACP Interface to VPC
Hello everyone,
I am planning a new installation of two 6500 appliances in a ClusterXL deployment. Each appliance will be connected to a vPC domain consisting of two Nexus 9Ks.
I would like to create a bond (LACP) interface on each appliance, where NIC 1 is connected to vPC member 1 and NIC 2 is connected to vPC member 2. The goal of this approach is to increase bandwidth and resilience (I would like to update one vPC member without failing over the firewall cluster).
My question: should I configure the bond interface as HA, or can I use Load Sharing? According to the R80.20 Admin Guide and ClusterXL Guide, both are valid configurations. What I don't understand is that one supports "switch redundancy" and the other does not:
- High Availability (Active/Backup): Gives redundancy when there is an interface or a link failure. This strategy also supports switch redundancy. Bond High Availability works in Active/Backup mode - interface Active/Standby mode. When an Active slave interface is down, the connection automatically fails over to the primary slave interface. If the primary slave interface is not available, the connection fails over to a different slave interface.
- Load Sharing (Active/Active): All slave interfaces in the UP state are used simultaneously. Traffic is distributed among the slave interfaces to maximize throughput. Bond Load Sharing does not support switch redundancy
Unfortunately I can't find any further explanation of this. What is meant by "switch redundancy" in this context? Logically, the vPC domain acts as a single switch anyway...
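For reference, the two modes from the guide map to different bond settings in Gaia clish. A minimal sketch, assuming Gaia clish with hypothetical interface names eth1/eth2 and bond group 1:

```shell
# Hypothetical Gaia clish sketch: create a bond and choose its mode.
# Interface names (eth1, eth2) and the group ID are assumptions.
add bonding group 1
add bonding group 1 interface eth1
add bonding group 1 interface eth2

# Option 1 - High Availability (Active/Backup): one slave active at a time
set bonding group 1 mode active-backup

# Option 2 - Load Sharing (Active/Active) via LACP (802.3ad)
set bonding group 1 mode 8023AD

save config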
Thanks for your help and many greetings from Germany.
Thomas
Hi Thomas,
what you need is the Load Sharing (Active/Active) setup. Check Point's Load Sharing bond works perfectly with a Cisco vPC.
The "switch redundancy" part is indeed a bit confusing. I think what they mean is that you cannot connect a Load Sharing bond to two different (separate) switches, so in that sense it is 'not' switch-redundant, whereas an HA bond can span two independent switches.
The explanation seems to exclude switch stacks or vPC setups, which present themselves as a single logical switch.
If you configure your vPC on the Cisco side and the bond on the Check Point side, just make sure you use the same hashing algorithm (preferably Layer 3+4) and the same LACP rate (preferably fast rate).
If you are working with Nexus, also double-check the configured frame size (MTU).
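A sketch of matching both sides, under the assumptions above (group/port-channel IDs, interface names, and exact NX-OS syntax are placeholders; verify against your software versions):

```shell
# --- Check Point side (Gaia clish), hypothetical bond group 1 ---
set bonding group 1 mode 8023AD               # LACP Load Sharing
set bonding group 1 xmit-hash-policy layer3+4 # match the Nexus hashing
set bonding group 1 lacp-rate fast

# --- Cisco Nexus side (NX-OS), hypothetical port-channel 10 / vPC 10 ---
# configure terminal
#  port-channel load-balance src-dst ip-l4port  # Layer3+4-style hashing
#  interface Ethernet1/1
#    channel-group 10 mode active               # LACP active mode
#    lacp rate fast
#  interface port-channel 10
#    vpc 10
```

The point is simply that hashing policy and LACP rate are configured independently on each side, so a mismatch is easy to introduce and worth checking explicitly.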
Dear team,
We have two Check Point gateways (R80.10) in an Active/Passive (HA) setup.
On the destination side there are Nexus 9K switches (with the latest firmware), so we have configured a bond on the Check Point side and an EtherChannel with vPC configuration on the Nexus side.
On the Check Point side we bonded two 10G interfaces, and on that bond we created multiple VLAN sub-interfaces.
When we bring this setup up, the cluster shows down because communication from the primary to the secondary Check Point is not happening over one specific sub-interface.
Can you please confirm whether vPC should work here, or whether there is a limitation?
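Not a definitive answer, but a few standard checks usually narrow this down. A sketch assuming Gaia/ClusterXL, with bond1 and VLAN 100 as placeholder names; note that by default ClusterXL monitors only the lowest (and, on some versions, the highest) VLAN ID on an interface, which matters when CCP traffic seems to vanish on one specific sub-interface:

```shell
# Hypothetical troubleshooting sketch on a Gaia cluster member;
# bond1 / VLAN 100 are placeholders for your actual names.
cphaprob state                 # overall state of both cluster members
cphaprob -a if                 # which interfaces (incl. VLANs) CCP monitors

# Kernel view of the bond: slave link states, LACP partner details
cat /proc/net/bonding/bond1

# Is Cluster Control Protocol traffic arriving on the sub-interface?
tcpdump -nni bond1.100 -c 20
```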
Hello Vincent,
thanks a lot for your detailed and spot-on answer; this was exactly the information I needed 😉
A question from my side though:
- are you planning a VSX or a standalone managed deployment?
- do you need LACP, or can it really be just a plain bonding approach? I bet Cisco will handle both just fine
- how do you leverage the traffic flow throughout the gateways? What's your plan? Is redundancy really the only aspect you've been thinking about?
Once you answer those, I could share some of my experiences with my no-longer-favorite LSM mode...
Jerry
PS: search our community and see why LSM on R80.20 is no longer the best possible option for most deployments.
Hi Jerry,
I don't plan on using LSM/SmartProvisioning. I will only have one virtualized Mgmt server, a cluster of 6500s (no Maestro), and maybe a few CloudGuard IaaS gateways in the future.
LACP/bonding is only there to increase bandwidth and resilience.
Best regards,
Thomas
Excellent, so now you need to read this thread, mate.
Thanks for the great link!
I think we are talking about different things here though. AFAIK, ClusterXL Load Sharing != LACP interface Load Sharing.
Or have I misunderstood the different technologies/terms here? I don't want to do ClusterXL Load Sharing (I consider that a bad idea in terms of complexity vs. performance), just an LACP interface in Active/Active.
Nope, we don't. See your first line: "I am planning a new installation of two 6500 Appliances in a ClusterXL deployment".
I was just referring to the LSM A/A and your approach to the aggregation. Think about it.
Ah, gotcha! OK, thanks.
Hi Thomas,
Could you please tell me what the solution to your configuration was? We have exactly the same setup with VSX and are experiencing multiple issues.
If I understood correctly, you can't have a bond in an Active/Active setup configured on a cluster in HA mode?
Thanks,
Ivan
