Aidan_Luby
Collaborator

1Gb Interface Bonding or 10Gb?

Hello there,

 

I've been trying to research this on these forums as well as on general networking forums, and I'm having a hard time finding any good answers.

 

We're looking into replacing our aging 4600 cluster with something like a 6500/6600, partly because the 4600 is now inadequate for our needs, but also because of the stability issues we've had. We've had many cases of inexplicable failures that result in the entire cluster going down and failover not saving us. It seems to be due to either the 1Gb Ethernet interfaces being overloaded or the CPU not handling the traffic from the NICs fast enough, causing many TX timeouts, hardware unit hangs, and interface resets.

 

One of the problems is that we currently have a mishmash of interfaces, where one interface might carry a single flat network and others have multiple VLANs set up. Our WAN interface, for example, has 4 VLANs and fails often; our sync interface is a direct connection between the firewalls and fails often; and one of our LAN interfaces has no VLANs but also fails often.

 

I was hoping to avoid these issues on the new firewalls (even if new hardware is all we actually need) by either bonding all of the internal interfaces, bonding several external interfaces, and hopefully even bonding the sync interface - OR by using 2x10Gb interfaces per firewall and still using dedicated regular or bonded interfaces for sync. I can't seem to confirm the best way to do this.

 

For example, on a 6500 I'd probably look at one of these scenarios per unit in the cluster (see the rough config sketch after the list):

  1. 4x1Gb Ethernet Bond for Internal VLANs/4x1Gb Ethernet Bond for External VLANs/Single 1Gb Sync interface (Since sync seems to be a specific port on these units. Maybe I could also steal the Mgmt port to bond sync)
  2. 1x10Gb Fibre for Internal VLANs/1x10Gb Fibre for External VLANs/2x1Gb Ethernet Bond for Sync
  3. 2x10Gb Fibre Bond with both Internal and External VLANs/2x1Gb Ethernet Bond for Sync
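
For reference, here's roughly what I'd expect the bonding piece of option 1 to look like in Gaia Clish - the interface names (eth1-01 through eth1-04), bond group number, VLAN ID, and addressing are all just placeholders, not our real config:

    # Hypothetical sketch: 4x1Gb LACP bond for the internal VLANs (option 1).
    # Member interfaces must have no IP address assigned before joining the bond.
    add bonding group 1
    add bonding group 1 interface eth1-01
    add bonding group 1 interface eth1-02
    add bonding group 1 interface eth1-03
    add bonding group 1 interface eth1-04
    set bonding group 1 mode 8023AD            # LACP; needs a matching port-channel on the switch
    set bonding group 1 lacp-rate slow
    set bonding group 1 xmit-hash-policy layer3+4
    # VLANs then ride on top of the bond (VLAN 100 is a placeholder)
    add interface bond1 vlan 100
    set interface bond1.100 ipv4-address 10.0.100.1 mask-length 24
    set interface bond1.100 state on
    save config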

 

Ideally, four 10Gb ports per firewall would be nice for separating internal/external VLANs as well as for redundancy, but then we'd need new fibre modules on our core switches, and it frankly seems like overkill. I'm leaning towards option 2, partly so I don't have to deal with any potential issues that come with bonding protocols or Check Point's compatibility with them, but also because it gives a lot of bandwidth. The one thing I've wondered about the 10Gb options is how much buffer 10Gb interfaces get compared to multiple 1Gb Ethernet interfaces. I just want to know of any pitfalls with the 10Gb approach over 1Gb, other than the obvious cost of buying expansion modules and dealing with SFP+ modules and fibre patch cables.

 

I know a lot of this might not matter with the appliances we're looking at; I was just hoping someone could help confirm my rationale and let me know of any details I might need to consider.

Daniel_Taney
Advisor

Of those three options, I would personally go with option 2. The only downside is that if you have a single link failure on either 10Gb fibre connection, your cluster would fail over.

Obviously, this issue is potentially resolved by option 3, but then you end up combining your internal and external traffic on the same physical link. As a "best practice," some people would be fundamentally against mixing this traffic, but some of that may depend on your own philosophies or security policies.

Since there is never "true" load balancing across bonded interfaces - the bond hashes each flow onto a single member link - you could still run the risk of maxing out one of the 1Gb links while others remain underutilized. Since that sounds like it could already be a problem for you, I think it would be best to move away from that design and get true 10Gb throughput without relying on 1Gb bonded interfaces.
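
To illustrate what I mean (a hypothetical flow, simplified): with the layer3+4 transmit hash policy, the bond picks one member link per flow from a hash of the IP addresses and ports, so a single flow can never exceed one member's 1Gb no matter how many links the bond has:

    # Illustration only: how 802.3ad member selection caps a single flow.
    # With xmit-hash-policy layer3+4, the kernel effectively computes:
    #   member = hash(src_ip, dst_ip, src_port, dst_port) % member_count
    # One backup job between two hosts = one flow = at most 1Gb,
    # even on a 4x1Gb bond.
    set bonding group 1 xmit-hash-policy layer3+4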

As far as the Sync and Mgmt interfaces on the appliances go, you can use them for whatever you want. They don't have to carry Management or Sync traffic; that is mostly just how they are labeled in the Gaia OS. So, yes, you could bond the Mgmt and Sync ports together for Sync traffic.
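
If you did go that route, a minimal Gaia Clish sketch would be something like the below (the group number and addressing are placeholders, and both ports must have no IP configured before joining the bond):

    # Hypothetical sketch: Mgmt + Sync ports bonded for sync traffic.
    add bonding group 2
    add bonding group 2 interface Mgmt
    add bonding group 2 interface Sync
    set bonding group 2 mode active-backup     # simple redundancy for a two-port sync bond
    set interface bond2 ipv4-address 192.168.99.1 mask-length 24
    set interface bond2 state on
    save config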

R80 CCSA / CCSE
Kaspars_Zibarts
Employee

I would go with option 2, with only a single 1Gbps interface for Sync - I've looked at our firewalls that run approx. 10Gbps of throughput, and sync runs well below 100Mbps on those.

As for a bond for redundancy - if you run your cluster as HA, then a single 10Gbps interface should suffice, as an interface failure will simply cause a cluster failover without any dramas.
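
You can watch that behaviour from the CLI on a cluster member - these are standard ClusterXL commands:

    cphaprob stat        # state of each cluster member (Active / Standby)
    cphaprob -a if       # monitored interfaces; a failed interface here
                         # is what triggers the failover I mentioned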

 

Julian_Sanchez
Collaborator

Hello, 

Sorry, I have a question: how do you know, or how can you know, the utilization of the sync interface in VSX?

Is there any command? I tried to use SmartMonitor but I couldn't find it.

Regards, 

Julian S. 

Kaspars_Zibarts
Employee

We use an external SNMP poller and graphs, but you can get an idea from CPView: go to Network > Interfaces > Traffic.
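
From the CLI you can also pull sync-specific numbers - availability varies a bit by version, so check yours:

    cpview               # interactive; navigate to Network > Interfaces > Traffic
    fw ctl pstat         # includes a Sync section with sync packet counters
    cphaprob syncstat    # detailed sync statistics (R80.20 and later)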

 

[screenshot: CPView Traffic view]

 

Jerry
Mentor

1. You don't need the Sync interface on anything other than 1G - unless Sync goes the other way around, via L2 towards a different DC over a trunk/VXLAN?
2. Bonds should be either L2 or L3; then you have a choice of protocol/hashing - a matter of design, though also of the performance impact on the SGs (HA SG?).
3. 10Gb interfaces do have different latency indeed, but with bonds it all depends on your topology and how everything is tied together network-wise - difficult to speculate without seeing an LLD here 🙂
4. A bond for Sync isn't supported afaik? 😛
5. Bonding gives redundancy and resiliency, but... trunking seems essential here - isn't that the case though?

Cheers!
Jerry
