Michal_W_old
Participant

700 appliance dropping ARP through br0

Hi!

Out of curiosity, I was playing around with a 790 and wanted to compare throughput through the integrated LAN switch versus through bridge-attached interfaces on the LAN side - but I was not expecting this bug/behaviour.

Has anyone else seen something similar? I've looked through a lot of articles and posts here, but haven't found anything that matches (sk143232 sounded promising, but the relevant interfaces do show RUNNING, and there are other discrepancies). I've also gone through the resolved issues and known limitations lists for releases since R77.20.80.

Setup A:
790 GW with LAN3 and LAN4 configured in bridge br0.
SW1: external L2 switch
host 1: 192.168.1.54 / 34:64:a9:cf:74:50 connected to LAN4 using cable C
host 2: 192.168.1.8 / 28:92:4a:38:32:74 connected to SW1 using cable A
SW1 connected to LAN3 using cable B

Since traffic goes through the firewall and not just the kernel br0, the firewall is configured with br0 in 192.168.1.0/24, anti-spoofing off, and for good measure an any->any:all allow rule in the rulebase.
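For reference, this is roughly how the bridge can be sanity-checked from Expert mode afterwards (just standard tools; the actual configuration was done through the normal SMB management UI):

# bridge members as seen by the kernel
brctl show br0
# IP address assigned to the bridge interface
ifconfig br0
# interfaces the firewall kernel module knows about (br0 should be listed)
fw ctl iflist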

Setup B:
host 1 connected to SW1 using cable B (connected directly, with the same cable that LAN4 was connected with)
host 2 connected to SW1 using cable A (no change)

Setup B works; it's just there to verify that the cables etc. are fine.

Setup C:
790 GW with LAN1 and LAN2 configured in LAN1_Switch.
SW1: external L2 switch
host 1 connected to LAN2 using cable C
host 2 connected to SW1 using cable A
SW1 connected to LAN1 using cable B

Setup C also works.

For setup A though, this is what happens.

When host 1 tries to ARP for host 2, the ARP does get forwarded,

 

tcpdump -nvvX -i br0
...
17:03:20.756186 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.8 tell 192.168.1.54, length 46
        0x0000:  0001 0800 0604 0001 3464 a9cf 7450 c0a8  ........4d..tP..
        0x0010:  0136 0000 0000 0000 c0a8 0108 0000 0000  .6..............
        0x0020:  0000 0000 0000 0000 0000 0000 0000       ..............

 

and is seen on host 2, which replies:

 

17:03:20.758348 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.8 tell 192.168.1.54, length 46
	0x0000:  0001 0800 0604 0001 3464 a9cf 7450 c0a8  ........4d..tP..
	0x0010:  0136 0000 0000 0000 c0a8 0108 0000 0000  .6..............
	0x0020:  0000 0000 0000 0000 0000 0000 0000       ..............
17:03:20.758395 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.8 is-at 28:92:4a:38:32:74, length 28
	0x0000:  0001 0800 0604 0002 2892 4a38 3274 c0a8  ........(.J82t..
	0x0010:  0108 3464 a9cf 7450 c0a8 0136            ..4d..tP...6

 

 but the reply never reaches the 790.

As already said, if I connect host 1 directly to SW1 using the same cable that was used between SW1 and the 790, it works without problems.

I tried looking at different places where the packet could get dropped on the 790, but found nothing with the following (a couple of additional generic counters to check are sketched after the list):

  • tcpdump -i br0 does not see the ARP reply
  • tcpdump -i LAN4 doesn't see it either (I even tried -i LAN3 for kicks, but that of course showed nothing as well)
  • fw ctl zdebug + drop didn't report any ARP drops with either fwaccel off or on (though I'm actually not sure it reports ARP drops under normal circumstances)
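For completeness, a couple of more generic places to look for silently dropped frames on the bridge members (assuming the usual Linux tooling is present on the embedded image; ethtool in particular may or may not be shipped there):

# per-interface drop/error counters kept by the kernel
cat /sys/class/net/LAN3/statistics/rx_dropped /sys/class/net/LAN3/statistics/rx_errors
cat /sys/class/net/LAN4/statistics/rx_dropped /sys/class/net/LAN4/statistics/rx_errors
# the same counters as part of the usual interface summary
ifconfig LAN3
ifconfig LAN4
# NIC/driver-level counters, if ethtool is available on the box
ethtool -S LAN3 | grep -iE 'drop|err|miss'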

For locally sourced traffic from the 790 to host 2, everything works fine. In fact, the br0 interface doesn't learn the host 2 MAC until locally sourced traffic is sent.

 

[Expert@fw01]# brctl showmacs br0 | egrep '(32:74|74:50)'
  2     34:64:a9:cf:74:50       no                 0.44
[Expert@fw01]# ping 192.168.1.8                          
PING 192.168.1.8 (192.168.1.8): 56 data bytes
64 bytes from 192.168.1.8: seq=0 ttl=64 time=0.763 ms

--- 192.168.1.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.763/0.763/0.763 ms
[Expert@fw01]# brctl showmacs br0 | egrep '(32:74|74:50)'
  1     28:92:4a:38:32:74       no                 2.02
  2     34:64:a9:cf:74:50       no                 0.10

 

 I also made sure that the bridge was actually not blocking (even though STP is off it might have done something funny?)

 

[Expert@fw01]# brctl showstp br0 | grep -B 1 state
LAN3 (1)
 port id                8001                    state                forwarding
--
LAN4 (2)
 port id                8002                    state                forwarding

 

 

Right now the 790 is running R77.20.80 build 392 (yes, I know it's old), so I still have some testing to do...

  • Try through br0 but without SW1 (though judging from the pcaps, packets don't look mangled between the 790 and host 2; locally sourced traffic from the 790 works, and setup C works too)
  • Try with a more recent release (R77.20.87 build 3004 maybe? sounds like there are problems with newer... or maybe even try older first, build 755 -> 855 then 3004?)

Any pointers for how to debug dropped packets apart from the methods I've already tried are welcome 🙂

I'm thinking it might be a kernel issue in the bridge module, but I don't even want to speculate how heavily modified the archaic 3.10.20 in R77.20.80 is.

Michal_W_old
Participant

I've looked some more into this issue.

First off, the issue is reproducible on

  • R77.20.80 (990172392)
  • R77.20.85 (990172755)
  • R77.20.86 (990172855)
  • R77.20.87 (990173004)

Second, I found out more specifically how to trigger the behaviour. I noticed that the 790 was actually passing traffic in certain situations.

 

Reproducing drop scenario

The following findings are reproduced on build 392. Some of the steps look slightly different on other builds (for example, I didn't test the entire boot up sequence on other builds), but the general behaviour with packets getting dropped is reproducible on all four tested builds.

  1. Host 1 and host 2 are connected to SW1
  2. Host 1 is moved to LAN4 of the 790, which is powered off
  3. When the 790 is booted with the cables inserted in the bridged setup (A), br0 passes traffic. There is a "hole" in forwarding during boot-up, which I assume is because the 790 defaults to having the interfaces in the integrated switch, and once the CP processes start, the two specified interfaces get disabled and then moved to br0. Anyway - things work fine! It keeps running like this for the entire duration of this step - I haven't tested it extensively though, as I was interested in the next step.
  4. LAN3 is disconnected from SW1 and reconnected (both immediately after and with a delay of 15 min), and br0 still forwards traffic.
  5. With LAN3 connected to SW1, host 1 is disconnected from LAN4 and connected to another port on SW1. Then host 1 is moved back to LAN4. From now on, the forwarding behaviour of br0 is as observed in the original post: br0 forwards traffic from LAN4 (host 1) to LAN3 (SW1/host 2), but traffic from host 2 is not seen any more (with some exceptions, see further below). A small loop for watching the bridge FDB while doing these cable moves is sketched after the list.
  6. This behaviour can be replicated on further LAN ports joined to br0: after joining LAN5 and LAN6 to br0 and moving SW1 from LAN3 to LAN5, return traffic is received on LAN5. Move host 1 to SW1 and back to LAN3, and return traffic to LAN5 is killed off once more.
  7. In some rare cases it's possible to get br0 to receive and forward 1-5 return packets(!) before it drops traffic again, by unplugging LAN3 from SW1, waiting for the port to go down, and then reconnecting it. It's reproducible but hard to do, as whatever packets are received first are the ones that get through, and the ARP reply and ICMP replies are then dropped. But it *is* reproducible. I have only been successful doing so on build 392 though.
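The loop mentioned in step 5 is nothing fancy; it just prints which bridge port the two host MACs are learned on once a second while the cables are being moved:

# keep an eye on the FDB entries for the two host MACs
while true; do
    date
    brctl showmacs br0 | egrep '(32:74|74:50)'
    sleep 1
done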

I set up a SPAN port on SW1, and confirmed that the ARP and ICMP replies were sent out from SW1 to the 790.

 

Temporary restoration of forwarding

Here is an example of the last step above

 

tcpdump -n -i br0 arp or icmp
...
21:35:29.108554 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:30.132596 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:31.156585 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:32.180577 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:33.204571 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:34.228514 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:34.228789 ARP, Reply 192.168.1.8 is-at 28:92:4a:38:32:74, length 46
21:35:34.229616 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 46, length 64
21:35:34.229676 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 47, length 64
21:35:34.229946 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 46, length 64
21:35:34.229993 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 47, length 64
21:35:35.230736 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 48, length 64
21:35:35.231071 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 48, length 64
21:35:36.244910 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 49, length 64
21:35:37.268842 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 50, length 64
21:35:38.292854 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 51, length 64
21:35:39.316810 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 52, length 64
21:35:40.340907 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 53, length 64
21:35:41.364814 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 54, length 64

 

And with fw monitor

 

[vs_0][fw_0] LAN4:i[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN4:I[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:o[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:O[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:i[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN3:I[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN4:o[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN4:O[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
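(For anyone wanting to reproduce the capture: the fw monitor output above can be limited to the two hosts with the standard -e filter syntax, something along the lines of the following.)

# capture host 2's traffic at all four inspection points (i, I, o, O)
fw monitor -e 'accept src=192.168.1.8 or dst=192.168.1.8;'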

 

I thought that since I now had ICMP going through too (host 1 now had an ARP entry for host 2), I could maybe catch the drop reason - but unfortunately fw ctl zdebug + drop did not report any drops.
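For reference, the invocation I mean is essentially just this, run with fwaccel both on and off (the grep is only there to cut down the noise):

fw ctl zdebug + drop 2>&1 | grep -E '192\.168\.1\.(8|54)'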

 

Looking closer at the drops

I thought that maybe some kind of cache/filter was put into place, so I started looking at how the bridge module was configured.

 

[Expert@fw01]# pwd
/proc/sys/net
[Expert@fw01]# tail bridge/*
==> bridge/bridge-nf-call-arptables <==
1

==> bridge/bridge-nf-call-ip6tables <==
1

==> bridge/bridge-nf-call-iptables <==
1

==> bridge/bridge-nf-filter-pppoe-tagged <==
0

==> bridge/bridge-nf-filter-vlan-tagged <==
0

==> bridge/bridge-nf-pass-vlan-input-dev <==
0

==> bridge/forwarding <==
1

 

It looked like br0 should be sending packets to netfilter's arptables and iptables hooks (and by default also ebtables). Disabling these with

 

for f in /proc/sys/net/bridge/bridge-nf-*; do echo 0 > $f; done

 

didn't have any effect though.

So I thought that maybe some rules were being programmed in ebtables or iptables, if that is even still active on a Check Point kernel... I had my doubts. But the kernel config said something might be enabled? Not sure if that's enough though...

 

[Expert@fw01]# zcat /proc/config.gz | grep -i 'bridge_.*y$'  
CONFIG_BRIDGE_NETFILTER=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
[Expert@fw01]# zcat /proc/config.gz | grep -i 'netfilter.*y$'
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y

 

I successfully placed some ebtables and iptables binaries and libs on the 790, but all of them reported kernel modules missing.

 

[Expert@fw01]# ./sbin/ebtables -t broute -L
modprobe: module ebtables not found
modprobe: failed to load module ebtables
The kernel doesn't support the ebtables 'broute' table.
[Expert@fw01]# ./sbin/ebtables -L          
modprobe: module ebtables not found
modprobe: failed to load module ebtables
The kernel doesn't support the ebtables 'filter' table.
[Expert@fw01]# ./sbin/iptables -L -n
modprobe: module ip_tables not found
modprobe: failed to load module ip_tables
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.

 

I'm thinking maybe the CP kernel fw module is doing something similar to ebtables? But I have no idea how to inspect this.
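The only generic inspection I can think of is checking which kernel modules are loaded and which packet handlers are registered; none of this shows what the CP code actually hooks, but for completeness (assuming these files/tools exist on the embedded image):

# loadable modules currently in the kernel (CP modules, if built as modules, show up here)
lsmod
# registered packet_type handlers (ARP, IP, etc.), if this proc file is present
cat /proc/net/ptype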

The closest thing I found online to this was this post on serverfault (Linux bridge (brctl) is dropping packets), but since normal ebtables isn't available, I'm out of luck inspecting it that way.

Is there a diagram similar to this one (packet flow in netfilter) for the SMB devices running embedded Gaia?

PhoneBoy
Admin

Sounds like you should involve the TAC here.

Michal_W_old
Participant

Yeah, I was thinking that too. Initially I wrote here just to see if someone else had experienced something similar, but then I persisted out of curiosity. Spending way too much spare time on it 🤔🤣

Based on how few have responded, I guess not many people set up their LAN ports in a bridge instead of using the embedded switch, and move around MACs at the same time.

If someone else had the issue I would be willing to reach out to TAC, but for now I'll just use the embedded switch.

On a side note, since the bridge did work with devices connected at boot-up, I managed to do the speed test I originally wanted to do. The result was quite decent: around 680-930 Mbps for a single TCP session, with kiss_kthread slurping CPU. So it looks like the AV/IPS/etc. blades are not hit (or use an optimized path) when packets just get switched rather than routed.
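For reference, the kind of single-session test I'm talking about is just a plain iperf run between host 1 and host 2 across br0 (assuming iperf3 on both hosts; any similar tool would do):

# on host 2 (192.168.1.8): start a server
iperf3 -s
# on host 1 (192.168.1.54): run a single TCP stream for 30 seconds
iperf3 -c 192.168.1.8 -t 30 -P 1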

G_W_Albrecht
Legend

It would be a pity if, after spending so much time on it, you did not report this issue to TAC!

CCSE CCTE CCSM SMB Specialist
Michal_W_old
Participant

That's true. I'll throw my SE/regional contact a mail and see what TAC/R&D are willing to do. The thing is, the device is no longer under support (still fully licensed till November though), so TAC might not want to spend time on it.

