Matlu
Advisor

Events on Interfaces.

Hello,

I would like to know in which file the events related to the physical interfaces of a firewall are stored.

For example, if I have an eth1 interface and I "restart" it through the CLI with the commands set interface eth1 state off and set interface eth1 state on:

Where are the events related to this action stored?
How can I read them?

Thanks for your comments.

Bob_Zimmerman
Authority

clish commands typically end up in /var/log/messages. They'll also be sent to syslog servers you define at the OS level. The full command should be in a line like this:

Jan  5 19:10:06 2024 <hostname> clish[<PID>]: cmd by <user>: Processing : show snapshot AutoSnap20472 all (cmd md5: <hash>)
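For example (a quick sketch, assuming the default log location), you could pull just the interface commands out of the current and rotated logs with:

grep clish /var/log/messages* | grep "set interface"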
Matlu
Advisor

Hello,

I was asking about the logs where I could see interface events because I currently have a problem with an interface on the standby member of my ClusterXL, and I am seeing some strange behaviour.

My ClusterXL is broken, and what the logs tell me is that the eth3 port has been disconnected.
The strange thing is that, according to the client, the eth3 port is connected and shows a link.

Has anyone seen this behaviour on a particular interface?

[Expert@can2-2:0]# less /var/log/messages.* | grep eth3
Jan 2 12:57:50 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 2 12:57:54 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 08:55:24 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 08:55:29 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 09:17:29 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 09:17:34 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 11:07:25 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 11:07:30 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 11:10:08 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 11:10:12 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 11:16:37 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 11:16:42 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 3 16:34:12 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 3 16:34:16 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 4 01:50:29 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 4 01:50:34 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 4 08:40:58 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 4 08:41:02 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 4 09:38:52 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 4 09:38:57 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3
Jan 5 08:36:28 2024 can2-2 fwk: CLUS-110200-2: State change: STANDBY -> DOWN | Reason: Interface eth3 is down (disconnected / link down)
Jan 5 08:36:33 2024 can2-2 fwk: CLUS-120207-2: Local probing has started on interface: eth3



Bob_Zimmerman
Authority

If you're looking for hardware events rather than logging of the clish commands entered by administrators, check dmesg (reads from a ring buffer in RAM, so old messages just disappear) and /var/log/messages. I just unplugged eth1 and plugged it back in on one of my personal boxes:

[Expert@DallasSA]# dmesg | grep eth1
[Fri Jan  5 21:38:48 2024] e1000e: eth1 NIC Link is Down
[Fri Jan  5 21:38:51 2024] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Expert@DallasSA]# grep eth1 /var/log/messages
Jan  5 21:38:48 2024 DallasSA kernel: e1000e: eth1 NIC Link is Down
Jan  5 21:38:52 2024 DallasSA kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Expert@DallasSA]# 
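Since the dmesg ring buffer eventually wraps, it can also help to search the rotated copies of the log (a sketch, assuming the usual messages.N rotation):

grep eth1 /var/log/messages /var/log/messages.*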
Matlu
Advisor

I found these records with the suggested command.

[Expert@can2-2:0]# dmesg | grep eth3
[Fri Oct 27 22:05:18 2023] igb 0000:03:00.3: added PHC on eth3
[Fri Oct 27 22:05:18 2023] igb 0000:03:00.3: eth3: (PCIe:5.0GT/s:Width x4)
[Fri Oct 27 22:05:18 2023] igb 0000:03:00.3 eth3: MAC: 00:1c:7f:bc:4e:c0
[Fri Oct 27 22:05:18 2023] igb 0000:03:00.3: eth3: PBA No: 106300-000
[Fri Oct 27 22:05:29 2023] 8021q: adding VLAN 0 to HW filter on device eth3
[Fri Oct 27 22:05:33 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Fri Oct 27 22:06:15 2023] device eth3 entered promiscuous mode
[Fri Oct 27 22:07:02 2023] device eth3 left promiscuous mode
[Fri Oct 27 22:07:02 2023] device eth3 entered promiscuous mode
[Fri Oct 27 22:07:26 2023] igb: eth3: igb_set_rss_hash_opt: enabling UDP RSS: fragmented packets may arrive out of order to the stack above
[Wed Dec 6 09:33:18 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Down
[Wed Dec 6 09:33:21 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Mon Dec 11 11:20:57 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Down
[Tue Dec 12 10:46:32 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Tue Dec 12 10:46:44 2023] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Down
[Fri Jan 5 12:28:20 2024] 8021q: adding VLAN 0 to HW filter on device eth3

The last line seems strange to me.

Could it be an indicator of a problem in my GW? Or is it more an indicator of a negotiation problem on that port?





Timothy_Hall
Champion (Accepted Solution)

You have a physical problem with the cable used with the eth3 interface, or there is a problem with the switch eth3 is attached to (switch is crashing, negotiation flap, etc).  Could be failing eth3 NIC hardware but that is less likely.  Replace the cable and look at logs on the switch to see if it is having problems.  You can track how many times eth3 has experienced a carrier transition since system boot by running the ifconfig eth3 command and looking at the "carrier" counter.  Ideally this should be zero.
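As an illustration (the counter value below is hypothetical), checking it looks like this:

ifconfig eth3 | grep carrier
          TX packets:1234567 errors:0 dropped:0 overruns:0 carrier:14

A nonzero carrier count like that would point at a flapping link.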

That final message about VLAN 0 is not relevant to your problem; I assume it means that eth3 is using 802.1q tagging/trunking, and that filtering log line indicates that untagged (VLAN 0) traffic will be ignored.  This is expected behavior, because mixing tagged and untagged traffic on the same physical firewall interface is not supported.
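If you want to double-check which VLAN tags are actually configured on eth3, the standard 8021q proc interface shows them (a quick sketch):

cat /proc/net/vlan/config | grep eth3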

If you have replaced the cable and verified the eth3-attached switch is stable, please provide the output of the following commands run from the firewall in expert mode for further analysis:

netstat -ni | grep eth3

ethtool -S eth3
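In that ethtool -S output, a rough first pass (illustrative only, since counter names vary by driver) is to look at the error and drop counters:

ethtool -S eth3 | grep -iE "err|drop|crc"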

 

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Matlu
Advisor

Hello,

Currently we have not yet tried replacing the patch cord on eth3 (we are scheduling a window for that activity).

For now, the results of "netstat -ni | grep eth3" and "ethtool -S eth3" show the following (I attach it in the post).

In these command results, are there any indicators that are important to take into account in this scenario?

What seems strange to me is the following:

The ARP table of the cluster member that is currently active shows several entries (I get the impression it is connected to a switch).

But on the "damaged" member, the ARP table shows no entries at all, and according to my client its eth3 interface is connected to an Exinda device.

Could the simple fact that it is connected to a device that is not a switch be a problem?

Greetings.

the_rock
Legend

Hey bro,

I don't see any CRC errors there, so it's probably not a HW issue...

Andy

Matlu
Advisor

If the functioning member has its eth3 interface connected to a switch, does that mean the other member of the cluster should also have its eth3 interface connected to the same switch?

On the member that works, the ARP table shows quite a few entries related to the eth3 interface.

However, the member that is not working shows no entries in its ARP table.

My customer told me that the eth3 interface of the damaged equipment is connected to an Exinda, so that could already be a problem, right?

Should the eth3 interfaces of both machines be connected to the same devices, or not necessarily?

---------------------------------------------------------------

[Expert@can2-2:0]# arp -a | grep eth3
[Expert@can2-2:0]#
[Expert@can2-2:0]#

---------------------------------------------------------------

[Expert@can2-1:0]# arp -a | grep eth3
? (192.168.200.54) at 00:50:56:8c:2c:7e [ether] on eth3
? (192.168.200.32) at 00:50:56:8c:ed:84 [ether] on eth3
? (192.168.200.30) at 00:0c:29:b9:d1:c3 [ether] on eth3
? (192.168.200.6) at 00:50:56:9b:29:f9 [ether] on eth3
? (192.168.200.110) at <incomplete> on eth3
? (192.168.200.1) at <incomplete> on eth3
? (192.168.200.103) at <incomplete> on eth3
? (192.168.200.58) at 00:50:56:84:3b:d0 [ether] on eth3
? (192.168.200.104) at <incomplete> on eth3
? (192.168.200.102) at <incomplete> on eth3
? (192.168.200.80) at <incomplete> on eth3
? (192.168.200.51) at 00:50:56:8c:90:52 [ether] on eth3
? (192.168.200.29) at 00:0c:29:b9:d1:c3 [ether] on eth3
? (192.168.200.5) at b8:11:4b:ad:4e:ca [ether] on eth3
? (192.168.200.187) at 00:50:56:9f:00:b8 [ether] on eth3
? (192.168.200.52) at <incomplete> on eth3
? (192.168.200.4) at <incomplete> on eth3
? (192.168.200.70) at <incomplete> on eth3
? (192.168.200.43) at <incomplete> on eth3

Greetings.

the_rock
Legend

They definitely should match; otherwise, if there is a failover, it won't work.
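A quick way to compare what each member is monitoring is the standard ClusterXL interface check, run on both members in expert mode:

cphaprob -a if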

the_rock
Legend

It's possibly a cable issue, as Tim mentioned, so if you can send the output of those commands he gave, that would confirm it for sure.

Andy

JozkoMrkvicka
Mentor

What are the HW and SW versions on your end?

And what are the HW and SW versions of the device the cable from eth3 is connected to?

There might be an issue with the SFP on the other side.

I assume that eth3 on your side is native RJ45 copper without any SFP inserted - please confirm.
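As a quick check of the media type on your side (a sketch; the exact fields depend on the NIC), ethtool reports whether the port is copper or fibre:

ethtool eth3 | grep -i port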

Kind regards,
Jozko Mrkvicka
the_rock
Legend

You can try /var/log/audit/audit.log as well
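For example (assuming clish writes its audit entries there on your version), something like:

grep eth3 /var/log/audit/audit.log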

