
No LACP on 2x10G bond with Cisco 3850

Question asked by Vladimir Yakovlev on Jan 21, 2018

Having an issue with R80.10 gateways (ClusterXL members) on 13500 appliances (non-VSX):


LACP issue with a Cisco 3850: a bond consisting of two 10G interfaces (same card on the 13500s) shows its Load Sharing state as "DOWN" (bond1):


[Expert@CICNYCP1:0]# cphaconf show_bond -a

          |             |      |Slaves     |Slaves |Slaves
Bond name |Mode         |State |configured |in use |required
bond1     |Load Sharing |DOWN  |2          |2      |1
bond2     |Load Sharing |UP    |2          |2      |1


[Expert@CICNYCP1:0]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 1
Actor Key: 33
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eth2-02
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1c:7f:62:4c:8f
Aggregator ID: 1

Slave Interface: eth2-01
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1c:7f:62:4c:8e
Aggregator ID: 2
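One detail worth flagging in the output above: the active aggregator (ID 2) contains only one port, eth2-02 sits alone in aggregator 1, and the Partner Mac Address is all zeros, which usually means that link is not receiving LACPDUs from the switch. A small script along these lines (just a sketch that parses the /proc/net/bonding format shown above; here it is fed a copy of the sample data rather than the live file) flags any slave that failed to join the active aggregator:

```shell
#!/bin/sh
# Sketch: detect a slave stuck outside the bond's active aggregator.
# In production, point check_bond at /proc/net/bonding/bond1 instead of
# the sample file below (interface names copied from the question).

check_bond() {
    awk '
        /^Slave Interface:/ { slave = $3 }
        /^Aggregator ID:/ {
            if (slave == "") active = $3       # bond-level (active) aggregator
            else             agg[slave] = $3   # per-slave aggregator
        }
        END {
            bad = 0
            for (s in agg)
                if (agg[s] != active) {
                    printf "%s detached (agg %s != %s)\n", s, agg[s], active
                    bad = 1
                }
            exit bad
        }' "$1"
}

# Reproduce the question's bond1 state in a temp file:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
Active Aggregator Info:
Aggregator ID: 2

Slave Interface: eth2-02
Aggregator ID: 1

Slave Interface: eth2-01
Aggregator ID: 2
EOF

check_bond "$tmp" || echo "bond is degraded: only one link is aggregating"
rm -f "$tmp"
```

Against the sample data this prints that eth2-02 is detached from aggregator 2, matching the "Number of ports: 1" line above.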


While the Cisco side claims the channel is up:


# show etherchannel summary

Number of channel-groups in use: 1
Number of aggregators: 1


Group  Port-channel  Protocol  Ports
2      Po2(SU)       LACP      Te1/0/1(P) Te2/0/1(P)



bond2, with the same configuration on a different card's 2x10G interfaces but connected to a Nexus switch, works fine.
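For reference, the intended configuration on both sides looks roughly like the following. This is a sketch, not the exact configuration from the systems: the interface names and channel-group number are taken from the question, and clish syntax can vary between Gaia versions.

```
# Gaia clish side (sketch):
add bonding group 1
add bonding group 1 interface eth2-01
add bonding group 1 interface eth2-02
set bonding group 1 mode 8023AD
set bonding group 1 lacp-rate slow

! Cisco 3850 side (sketch) - both ports must negotiate LACP actively:
interface range TenGigabitEthernet1/0/1, TenGigabitEthernet2/0/1
 channel-group 2 mode active
```

If either switch port is set to `mode on` (static EtherChannel) instead of `mode active`/`passive`, no LACPDUs are sent, which would match the all-zero Partner Mac Address seen on the gateway.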


FYI: the data shown above is a doctored sample; I do not presently have access to the systems, but I am fairly sure I am reproducing it accurately. I will update this question with live data next week.


If any of you have bumped into this one before, please share the solution or troubleshooting methods.