Quentin_Antrim
Participant

Check Point 6500 channel-group to Cisco Nexus 9K vPC


I have a pair of Check Point 6500 appliances running Gaia R80.10. Each appliance is connected to a pair of Cisco Nexus 9K switches using a vPC port-channel, so each firewall has a bond of two slave interfaces, with the IP address configured on the bond interface. I am using 802.3ad (LACP) and jumbo frames (MTU 9216).
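For reference, the bond was built with the standard Gaia clish bonding commands, roughly as below (the IP address shown is a placeholder, not my real one):

add bonding group 41
add bonding group 41 interface eth1-01
add bonding group 41 interface eth1-02
set bonding group 41 mode 8023AD
set bonding group 41 lacp-rate slow
set bonding group 41 xmit-hash-policy layer3+4
set interface bond41 mtu 9216
set interface bond41 ipv4-address 192.0.2.1 mask-length 24
save config

Note that the member interfaces must have no IP address configured before they can be added to the bond.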

Essentially, the bond interface is DOWN on the appliances, but UP on the Cisco N9ks.

Cisco:

chw_srvrm_dcswt1# sh vp brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 1
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 4
Peer Gateway : Enabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Disabled
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)
Operational Layer3 Peer-router : Enabled

vPC status
----------------------------------------------------------------------------
Id   Port         Status Consistency Reason  Active vlans
--   ------------ ------ ----------- ------  ------------
41   Po41         up     success     success 503


chw_srvrm_dcswt1# sh port-channel summ
Flags:  D - Down        P - Up in port-channel (members)
        S - Switched    R - Routed
        U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
41    Po41(SU)    Eth      LACP      Eth1/17(P)

chw_srvrm_dcswt1# sh run int e1/17

interface Ethernet1/17
description OOB FW
switchport access vlan 503
mtu 9216
channel-group 41 mode active

chw_srvrm_dcswt1# sh run int port-channel 41

interface port-channel41
description OOB Port-Channel 41
switchport access vlan 503
mtu 9216
vpc 41

This is the same on both Cisco Nexus 9k switches.
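For anyone following along, the LACP negotiation can also be checked from the switch side with the standard NX-OS show commands below; the neighbor output should list the firewall's system MAC and matching actor/partner keys:

chw_srvrm_dcswt1# show vpc consistency-parameters interface port-channel 41
chw_srvrm_dcswt1# show lacp neighbor interface port-channel 41
chw_srvrm_dcswt1# show lacp counters interface port-channel 41
chw_srvrm_dcswt1# show interface ethernet 1/17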

Check Point:

[Expert@chw_pbx_bbfw1:0]# cat /proc/net/bonding/bond41
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 17
Partner Key: 32809
Partner Mac Address: 00:23:04:ee:be:01

Slave Interface: eth1-01
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:1c:7f:67:2e:5c
Aggregator ID: 2

Slave Interface: eth1-02
MII Status: up
Link Failure Count: 5
Permanent HW addr: 00:1c:7f:67:2e:5d
Aggregator ID: 2

[Expert@chw_pbx_bbfw1:0]# cphaconf show_bond bond41

Bond name: bond41
Bond mode: Load Sharing
Bond status: DOWN
Balancing mode: 802.3ad Layer3+4 Load Balancing
Configured slave interfaces: 2
In use slave interfaces: 2
Required slave interfaces: 1

Slave name      | Status          | Link
----------------+-----------------+-------
eth1-01         | Active          | Yes
eth1-02         | Active          | Yes

As you can see, the Cisco port-channel is UP, but the Check Point bond interface is DOWN. I have tried both the Layer 2 and Layer 3+4 transmit hash policies, an MTU of 1500, and both the fast and slow LACP rates, all with no difference.

This should be a very simple setup, and there is no obvious reason why the bond interface should be down, but I'm looking for any suggestions on what the problem could be and what I'm missing.
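In case it is useful, this is what I have been running from expert mode to confirm LACPDUs are actually being exchanged on each slave (standard Linux tools on Gaia, plus the ClusterXL status commands):

# LACPDUs use the slow-protocols ethertype 0x8809
[Expert@chw_pbx_bbfw1:0]# tcpdump -nnei eth1-01 ether proto 0x8809
[Expert@chw_pbx_bbfw1:0]# tcpdump -nnei eth1-02 ether proto 0x8809

# Bond state as the kernel bonding driver sees it
[Expert@chw_pbx_bbfw1:0]# cat /proc/net/bonding/bond41

# Interface and member state as ClusterXL sees it
[Expert@chw_pbx_bbfw1:0]# cphaprob -a if
[Expert@chw_pbx_bbfw1:0]# cphaprob state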

By the way, I do have a ticket open with Check Point tech support, but with no solution so far.

Thanks.

Quentin
5 Replies
Peter_Lyndley
Advisor

I'd be curious if you ever got a reply to this. We are seeing something similar in another environment.
Matt_Killeen
Contributor

We had a similar issue after upgrading a two-member Check Point 15600 cluster from R77.30 to R80.20.

In our case, the bond interface was flapping because CCP (Cluster Control Protocol) packets were not being received on the bond interfaces.

We were able to resolve the issue by changing the cluster's CCP mode from multicast to unicast.

This was not an option for our three-member clusters, where multicast was required, so with no resolution to the issue we elected to break the bond and use a single interface.
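For the two-member clusters where unicast was an option, the change itself was just the standard ClusterXL CCP command, run on each member. Note that the supported CCP modes vary by release, so check the admin guide for your version:

[Expert@member:0]# cphaconf set_ccp unicast
[Expert@member:0]# cphaprob -a if

The cphaprob -a if output then shows the CCP mode in use.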

 

Daniel_Taney
Advisor

Did you see this sk? It seems there may be a known Cisco bug.

The workaround described there is to change the cluster mode to broadcast.
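If anyone wants to test it, switching CCP to broadcast is the same one-liner on each member (again, confirm the supported modes for your release):

[Expert@member:0]# cphaconf set_ccp broadcast
[Expert@member:0]# cphaprob -a if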

R80 CCSA / CCSE
Matt_Killeen
Contributor

@Daniel_Taney - yes, thanks, but I've seen that one. Unfortunately, we're running Cisco OTV across geographical sites, so broadcast mode can't be implemented in our infrastructure.
Martin_Raska
Advisor

Try upgrading to R80.20 and setting the cluster mode to unicast.
