MladenAntesevic
Collaborator

5600 SG R80.40 Interface issues?

Has anyone had issues with interfaces not coming up as 1000M/Full on a 5600 running R80.40? I have two 5600s in a cluster with identical topology, both connected to identically configured Cisco Nexus switches. All the ports on one gateway link up normally as 1000M/Full, but several ports on the other only come up as 100M/Half Duplex. If I try to force 1000M/Full, the interfaces go down. I have this issue on 5 ports connected to different devices, and only two interfaces on the same SG5600 work normally as 1000M/Full. I have checked the cables and the corresponding interfaces on the connected devices and everything is fine, but 5 interfaces on my 5600 will only run at 100M/Half:

CP2> show version all
Product version Check Point Gaia R80.40
OS build 294
OS kernel version 3.10.0-957.21.3cpx86_64
OS edition 64-bit
CP2> show interface eth5
state on
mac-addr xx:xx:xx:xx:xx:xx
type ethernet
link-state link down
mtu 1500
auto-negotiation on
speed N/A
ipv6-autoconfig Not configured
duplex N/A
monitor-mode Not configured
link-speed Not configured
comments
ipv4-address Not Configured
ipv6-address Not Configured
ipv6-local-link-address Not Configured
Statistics:
TX bytes:0 packets:0 errors:0 dropped:0 overruns:0 carrier:0
RX bytes:0 packets:0 errors:0 dropped:0 overruns:0 frame:0
[Expert@CP2:0]# ethtool -i eth5
driver: igb
version: 5.3.5.18
firmware-version:  0. 6-2
expansion-rom-version:
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
 
For example, eth5 is connected to a switch that does not allow 100M, so the interface is down. The other interfaces (eth1, eth2, eth3, eth4, eth6, eth7) are connected to devices that allow both 100M and 1000M, and there I get mixed results:
 
Interface eth1
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link up
    mtu 1500
    auto-negotiation on
    speed 100M
    ipv6-autoconfig Not configured
    duplex half
    monitor-mode Not configured
    link-speed Not configured
    comments outside
    ipv4-address xx.xx.xx.xx/xx
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:35667905 packets:405184 errors:1 dropped:0 overruns:0 carrier:0
    RX bytes:563116538 packets:457466 errors:0 dropped:0 overruns:0 frame:0
Interface eth2
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link down
    mtu 1500
    auto-negotiation Not configured
    speed N/A
    ipv6-autoconfig Not configured
    duplex N/A
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:0 packets:0 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:0 packets:0 errors:0 dropped:0 overruns:0 frame:0
Interface eth3
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link up
    mtu 1500
    auto-negotiation Not configured
    speed 1000M
    ipv6-autoconfig Not configured
    duplex full
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:26408494 packets:97135 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:30682970 packets:104525 errors:0 dropped:0 overruns:0 frame:0
Interface eth4
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link up
    mtu 1500
    auto-negotiation Not configured
    speed 1000M
    ipv6-autoconfig Not configured
    duplex full
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:39517670 packets:938740 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:190616272 packets:3403862 errors:0 dropped:0 overruns:0 frame:0
Interface eth5
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link down
    mtu 1500
    auto-negotiation on
    speed N/A
    ipv6-autoconfig Not configured
    duplex N/A
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:0 packets:0 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:0 packets:0 errors:0 dropped:0 overruns:0 frame:0

Interface eth6
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link up
    mtu 1500
    auto-negotiation Not configured
    speed 100M
    ipv6-autoconfig Not configured
    duplex half
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:728 packets:9 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:3832 packets:42 errors:0 dropped:0 overruns:0 frame:0

Interface eth7
    state on
    mac-addr xx:xx:xx:xx:xx:xx
    type ethernet
    link-state link up
    mtu 1500
    auto-negotiation Not configured
    speed 1000M
    ipv6-autoconfig Not configured
    duplex full
    monitor-mode Not configured
    link-speed Not configured
    comments
    ipv4-address Not Configured
    ipv6-address Not Configured
    ipv6-local-link-address Not Configured
Statistics:
    TX bytes:661037290 packets:1624806 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:131111361 packets:1571111 errors:0 dropped:0 overruns:0 frame:0
9 Replies
PhoneBoy
Admin

Did these same appliances work previously on a different release with these switches?
If so, it may be a side effect of upgrading kernel and drivers in R80.40.
In which case, a TAC case may be in order.
MladenAntesevic
Collaborator

No, these are two brand new boxes and they have not been in production yet. I have not tried any other release on these boxes, I just performed upgrade from R80.30 (which was installed by default) to R80.40 using ISOMorphic tool.

PhoneBoy
Admin

May also be a hardware related issue.
Worth engaging the TAC in any case.
Timothy_Hall
Champion

This may sound silly, but make sure you are using cat5e or cat6 cables, as cat5 and lower will only link up at 100Mbps. The fact that it is linking at 100/half suggests some kind of autonegotiation issue, as 100/half is the default mode when no autonegotiation is received.

Your issue could be related to LLDP, although this is normally only a problem with the Mellanox NICs. Try these commands on the Nexus interface side:

service unsupported-transceiver
no lldp transmit
no lldp receive

This is supposed to be fixed in R80.40 but is worth a try.

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
MladenAntesevic
Collaborator

Actually, checking the cables was my first idea; I was pretty confident this was a "cable" problem. After a lot of trying, swapping cables and testing them on different devices, I eventually came to the conclusion that all the cables are fine and something else is the root cause of the interfaces coming up at 100M/Half only. Furthermore, I tried disabling autonegotiation on both sides (Cisco switch and Check Point) and fixing 1000M/Full on both sides, but unfortunately without any success; the affected interfaces remain down. The next step was to disable LLDP (which was actually disabled by default) and also CDP on the Cisco side, but the original problem remains: the same interfaces still only work at 100M/Half.
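For reference, the commands I tried looked roughly like this (eth2 is just an example interface; syntax as I understand it from the Gaia clish reference, so double-check on your version):

```shell
# Gaia clish - disable autonegotiation and pin the interface to 1000M/Full
set interface eth2 auto-negotiation off
set interface eth2 link-speed 1000M/full
save config

# Equivalent per-interface commands on the Cisco side were:
#   speed 1000
#   duplex full
```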

Just for clarification, I have two 5600s in a cluster; one cluster member is working OK, so I believe the problem is limited to the second 5600. This second 5600 has 4 interfaces working at 100M/Half, and these connections are distributed: one towards my ISP (connected to an ISP Cisco Catalyst 3750), two towards the Cisco Nexus on the inside and DMZ side, and even the interface connected to the primary 5600 is 100M/Half. Additionally, this device has 3 interfaces working at 1000M/Full (two connected to the Cisco Nexus and one used for cluster sync).

I am using bonding, so if a bond has one 1000M/Full member and one 100M/Half member, the 100M/Half one gets disabled. That was my second idea: remove the bonds and use plain physical interfaces. But the problem remains; I get the same behavior even without any bonding on the Check Point and Cisco side.
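To confirm that the bond really was dropping the 100M/Half member, I checked roughly like this (bond1 is just an example name):

```shell
# Gaia clish - list configured bonds and their members
show bonding groups

# Expert mode - the kernel's view of each slave's speed, duplex and MII status
cat /proc/net/bonding/bond1
```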

On the other hand, the problem is limited to my second cluster member only; my first 5600 has 6 interfaces at 1000M/Full and just one at 100M/Half, because that one is the cluster sync and the problematic device is obviously "forcing" it down to 100M/Half.

Thank you Timothy_Hall for a good hint; I did not know that an interface falls back to 100M/Half when no negotiation is received, so that gives me some new ideas. I am thinking of doing some deeper debugging with mii-diag or ethtool.
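In case it helps anyone else, the ethtool checks I have in mind are roughly these (eth1 as an example, run from expert mode):

```shell
# Supported/advertised modes plus what the link partner advertised back;
# an empty "Link partner advertised link modes" suggests no autoneg was received
ethtool eth1

# Low-level NIC counters - CRC/symbol errors can point to a bad PHY or cable
ethtool -S eth1 | egrep -i 'err|crc'
```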

 
 
Timothy_Hall
Champion

Hmm, sounds like you may have some kind of hardware issue with your secondary 5600. Try swapping the set of switchports, including the same cables, between the two members: unplug the secondary from all its switchports, plug the primary into those just-vacated switchports with its own cables, then plug the secondary into the primary's original ports with its own cables. If the problem stays with the secondary, that's a pretty good indication it has a hardware problem. Other than perhaps running ethtool -i against the various interfaces on both the primary and secondary to ensure they are using the same controller hardware and driver version, it is probably time for a TAC case and likely an RMA.
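A quick way to compare controller and driver across all ports on both members is a loop like this (interface list per your output; run from expert mode on each member and diff the results):

```shell
# Print driver, firmware and PCI location for each NIC so the two
# cluster members can be compared side by side
for i in eth1 eth2 eth3 eth4 eth5 eth6 eth7; do
    echo "== $i =="
    ethtool -i "$i" | egrep 'driver|firmware|bus-info'
done
```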

MladenAntesevic
Collaborator

Yes, good idea to swap the cables between the 5600s; I will do it on Monday.

Actually, I had already swapped cables within the same "problematic" device, for example:

eth2 - 100M/Half

eth3 - 1000M/Full, working OK

eth2 & eth3 are in the same bond connected to the Cisco Nexus (a port-channel is configured on the Cisco side).

I just switched the cables between eth2 and eth3 and the problem remains on eth2, so I still have the same situation: eth3 working OK and eth2 coming up at 100M/Half.

MladenAntesevic
Collaborator

I swapped the switchports between the two 5600s, including the cables, so I took the validated cables and switchports from the working 5600 and connected them to my problematic 5600, and the same problem remains: the problematic 5600 still has 4 ports working at 100M/Half, and the working 5600 again has all ports at 1000M/Full (except the sync interface between the two 5600s).

Timothy_Hall
Champion

Sounding more and more like hardware, although it could still be something in Gaia.  Try this on the problematic 5600:

1) Take a snapshot from the Gaia web interface, and export/save a copy to your desktop

2) Also take a backup of the box, and export/save a copy to your desktop

3) Scratch load the box from IsoMorphic/USB with R80.40

4) After it boots back up, configure the interfaces with a minimal configuration from clish and check interface speeds. 

My guess is you will still have the speed problem, and it is time for an RMA. Once you receive the new box, revert the saved snapshot to it.

If the speed problem disappears after the fresh load, something is messed up with the NIC driver in Gaia for the interfaces in your original image, so now restore the backup (not snapshot) to your freshly-loaded 5600 (make sure to run through the post-installation process and then reload any hotfixes/Jumbo HFAs first).  I doubt the speed problem is caused by something in your configuration that the restore will bring back.  If the speed problem returns after the restore (unlikely) something is seriously wrong in your Gaia config (/config/active file) and you need to look there.
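For steps 1 and 2, the clish equivalents are roughly the following (snapshot name and description are just examples; verify the exact syntax on your build):

```shell
# Gaia clish - create a snapshot and a local backup before the scratch load
add snapshot pre_reload desc "before R80.40 clean install"
add backup local

# Verify, then export copies via the WebUI or scp before wiping the box
show snapshots
show backups
```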

