Hi!
I was playing around with a 790 and, out of curiosity, wanted to compare throughput through the integrated LAN switch versus bridge-attached interfaces on the LAN side - but I was not expecting this bug/behaviour.
Has anyone else seen something similar? I've looked through a lot of articles and posts here but haven't found a match (sk143232 sounded promising, but the relevant interfaces do show RUNNING, and there are other discrepancies). I've also gone through the resolved issues and known limitations lists for every release since R77.20.80.
Setup A:
790 GW with LAN3 and LAN4 configured in bridge br0.
SW1: external L2 switch
host 1: 192.168.1.54 / 34:64:a9:cf:74:50 connected to LAN4 using cable C
host 2: 192.168.1.8 / 28:92:4a:38:32:74 connected to SW1 using cable A
SW1 connected to LAN3 using cable B
Since traffic goes through the fw and not just the kernel br0, the firewall is configured with br0 in 192.168.1.0/24, anti-spoofing off, and for good measure an any->any:all allow rule in the rulebase.
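For completeness, the bridge membership itself can be sanity-checked from Expert mode (a trivial sketch using the standard brctl/ifconfig tools):
# Confirm LAN3 and LAN4 are members of br0 and the bridge is up
brctl show br0
ifconfig br0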
Setup B:
host 1 connected to SW1 using cable B (connected directly, with the same cable that LAN4 was connected with)
host 2 connected to SW1 using cable A (no change)
Setup B works; it's just there to verify that the cables etc. are fine.
Setup C:
790 GW with LAN1 and LAN2 configured in LAN1_Switch.
SW1: external L2 switch
host 1 connected to LAN2 using cable C
host 2 connected to SW1 using cable A
SW1 connected to LAN1 using cable B
Setup C also works.
For setup A though, this is what happens.
When host 1 tries to ARP for host 2, the ARP does get forwarded,
tcpdump -nvvX -i br0
...
17:03:20.756186 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.8 tell 192.168.1.54, length 46
0x0000: 0001 0800 0604 0001 3464 a9cf 7450 c0a8 ........4d..tP..
0x0010: 0136 0000 0000 0000 c0a8 0108 0000 0000 .6..............
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
and it is seen on host 2, which replies:
17:03:20.758348 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.8 tell 192.168.1.54, length 46
0x0000: 0001 0800 0604 0001 3464 a9cf 7450 c0a8 ........4d..tP..
0x0010: 0136 0000 0000 0000 c0a8 0108 0000 0000 .6..............
0x0020: 0000 0000 0000 0000 0000 0000 0000 ..............
17:03:20.758395 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.8 is-at 28:92:4a:38:32:74, length 28
0x0000: 0001 0800 0604 0002 2892 4a38 3274 c0a8 ........(.J82t..
0x0010: 0108 3464 a9cf 7450 c0a8 0136 ..4d..tP...6
but the reply never reaches the 790.
As already said, if I connect host 1 directly to SW1 using the same cable that is used between SW1 and LAN4, it works without problems.
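One more way to narrow this down would be to capture on the physical member port instead of br0 (a sketch with standard tcpdump options; if the reply shows up on LAN3 but not on br0, the frame at least makes it to the port):
# Capture ARP directly on the bridge member port, with link-layer headers
tcpdump -n -e -i LAN3 arp
# Compare with what the bridge interface itself sees
tcpdump -n -e -i br0 arp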
I tried looking at different places the packet could get dropped on the 790, and found nothing with the following:
For locally sourced traffic from the 790 to host 2, everything works fine. In fact, the br0 interface doesn't learn the host 2 MAC until locally sourced traffic is sent.
[Expert@fw01]# brctl showmacs br0 | egrep '(32:74|74:50)'
2 34:64:a9:cf:74:50 no 0.44
[Expert@fw01]# ping 192.168.1.8
PING 192.168.1.8 (192.168.1.8): 56 data bytes
64 bytes from 192.168.1.8: seq=0 ttl=64 time=0.763 ms
--- 192.168.1.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.763/0.763/0.763 ms
[Expert@fw01]# brctl showmacs br0 | egrep '(32:74|74:50)'
1 28:92:4a:38:32:74 no 2.02
2 34:64:a9:cf:74:50 no 0.10
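To see how long the learned entry survives, the FDB can be watched while the ageing timer runs (a small sketch, assuming watch is available in this busybox):
# Refresh the bridge forwarding table every second
watch -n 1 brctl showmacs br0
# The ageing timeout itself
brctl showstp br0 | grep -i ageing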
I also made sure that the bridge ports were not in a blocking state (even though STP is off, it might have been doing something funny):
[Expert@fw01]# brctl showstp br0 | grep -B 1 state
LAN3 (1)
port id 8001 state forwarding
--
LAN4 (2)
port id 8002 state forwarding
Right now the 790 is running R77.20.80 build 392 (yes, I know it's old), so I still have some testing to do...
Any pointers for how to debug dropped packets apart from the methods I've already tried are welcome 🙂
I'm thinking it might be a kernel issue in the bridge module, but I don't even want to speculate how heavily modified the archaic 3.10.20 in R77.20.80 is.
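One more obvious place to look would be the plain per-port drop counters (a sketch; assumes the standard sysfs statistics that a 3.10 kernel normally exposes):
# Per-port drop/error counters; tail prints a header line per file
for i in LAN3 LAN4 br0; do
  tail /sys/class/net/$i/statistics/*dropped /sys/class/net/$i/statistics/*errors
done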
I've looked some more at this issue.
First off, the issue is reproducible on all four builds I tested.
Second, I found out more specifically how to trigger the behaviour. I noticed that the 790 was actually passing traffic in certain situations.
Reproducing drop scenario
The following findings are reproduced on build 392. Some of the steps look slightly different on other builds (for example, I didn't test the entire boot up sequence on other builds), but the general behaviour with packets getting dropped is reproducible on all four tested builds.
I set up a SPAN port on SW1, and confirmed that the ARP and ICMP replies were sent out from SW1 to the 790.
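For reference, the capture on the SPAN side can be as simple as this (a sketch; eth0 is a placeholder for whatever interface the monitoring host uses):
# On the host attached to the SPAN port, watch ARP and ICMP with MAC addresses
tcpdump -n -e -i eth0 'arp or icmp'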
Temporary restoration of forwarding
Here is an example of the last step above:
tcpdump -n -i br0 arp or icmp
...
21:35:29.108554 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:30.132596 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:31.156585 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:32.180577 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:33.204571 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:34.228514 ARP, Request who-has 192.168.1.8 tell 192.168.1.54, length 46
21:35:34.228789 ARP, Reply 192.168.1.8 is-at 28:92:4a:38:32:74, length 46
21:35:34.229616 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 46, length 64
21:35:34.229676 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 47, length 64
21:35:34.229946 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 46, length 64
21:35:34.229993 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 47, length 64
21:35:35.230736 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 48, length 64
21:35:35.231071 IP 192.168.1.8 > 192.168.1.54: ICMP echo reply, id 2512, seq 48, length 64
21:35:36.244910 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 49, length 64
21:35:37.268842 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 50, length 64
21:35:38.292854 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 51, length 64
21:35:39.316810 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 52, length 64
21:35:40.340907 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 53, length 64
21:35:41.364814 IP 192.168.1.54 > 192.168.1.8: ICMP echo request, id 2512, seq 54, length 64
And with fw monitor
[vs_0][fw_0] LAN4:i[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN4:I[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:o[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:O[84]: 192.168.1.54 -> 192.168.1.8 (ICMP) len=84 id=9355
ICMP: type=8 code=0 echo request id=3661 seq=22
[vs_0][fw_0] LAN3:i[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN3:I[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN4:o[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
[vs_0][fw_0] LAN4:O[84]: 192.168.1.8 -> 192.168.1.54 (ICMP) len=84 id=5489
ICMP: type=0 code=0 echo reply id=3661 seq=22
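For reference, the fw monitor filter was along these lines (a sketch; the exact expression I used may have differed slightly):
# Capture only traffic to/from host 2 at the four inspection points
fw monitor -e "accept src=192.168.1.8 or dst=192.168.1.8;"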
I thought that since I now had ICMP getting through too (host 1 now had an ARP entry for host 2), I could maybe see the drop reason - but unfortunately fw ctl zdebug +drop did not report any drops.
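For anyone repeating this, the drop debug was of this form (a sketch; the grep is only there to cut noise):
# Kernel drop debug, filtered to the two test hosts
fw ctl zdebug +drop | grep -E '192\.168\.1\.(8|54)'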
Looking closer at the drops
I thought that maybe some kind of cache/filter was put into place, so I started looking at how the bridge module was configured.
[Expert@fw01]# pwd
/proc/sys/net
[Expert@fw01]# tail bridge/*
==> bridge/bridge-nf-call-arptables <==
1
==> bridge/bridge-nf-call-ip6tables <==
1
==> bridge/bridge-nf-call-iptables <==
1
==> bridge/bridge-nf-filter-pppoe-tagged <==
0
==> bridge/bridge-nf-filter-vlan-tagged <==
0
==> bridge/bridge-nf-pass-vlan-input-dev <==
0
==> bridge/forwarding <==
1
It looked like br0 should be sending packets to netfilter's arptables and iptables hooks (and by default also ebtables). Disabling these hooks with
for f in bridge-nf-*; do echo 0 > $f; done
didn't have any effect though.
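(If anyone repeats this: the values can be verified, and the call hooks restored afterwards, along these lines.)
# Confirm the toggles took effect, then put the call hooks back to their defaults
tail /proc/sys/net/bridge/bridge-nf-*
for f in /proc/sys/net/bridge/bridge-nf-call-*; do echo 1 > $f; done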
So I thought that maybe some rules were being programmed into ebtables or iptables - if those are even still active on a Check Point kernel (I had my doubts). The kernel config suggested something might be enabled, though I wasn't sure that alone is enough:
[Expert@fw01]# zcat /proc/config.gz | grep -i 'bridge_.*y$'
CONFIG_BRIDGE_NETFILTER=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
[Expert@fw01]# zcat /proc/config.gz | grep -i 'netfilter.*y$'
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y
I successfully placed some ebtables and iptables binaries and libs on the 790, but all of them reported missing kernel modules.
[Expert@fw01]# ./sbin/ebtables -t broute -L
modprobe: module ebtables not found
modprobe: failed to load module ebtables
The kernel doesn't support the ebtables 'broute' table.
[Expert@fw01]# ./sbin/ebtables -L
modprobe: module ebtables not found
modprobe: failed to load module ebtables
The kernel doesn't support the ebtables 'filter' table.
[Expert@fw01]# ./sbin/iptables -L -n
modprobe: module ip_tables not found
modprobe: failed to load module ip_tables
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
I'm thinking maybe the CP fw kernel module is doing something similar to ebtables, but I have no idea how to inspect this.
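About the only generic things left to check are which modules are actually loaded and who has registered packet handlers (a sketch; lsmod and /proc/net/ptype are standard Linux):
# Loaded kernel modules - look for anything bridge/netfilter/ebtables related
lsmod | grep -iE 'br|ebt|netfilter'
# Registered packet-type handlers (shows what taps frames at the device level)
cat /proc/net/ptype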
The closest thing I found online was this post on Server Fault (Linux bridge (brctl) is dropping packets), but since normal ebtables isn't available, I'm out of luck trying to inspect it the same way.
Is there a diagram similar to this one (packet flow in netfilter) for the SMB devices running embedded Gaia?
Sounds like you should involve the TAC here.
Yeah, I was thinking that too. Initially I wrote here just to see if someone else had experienced something similar, but then I persisted out of curiosity. Spending way too much spare time on it 🤔🤣
Based on how few have responded, I guess not many people set up their LAN ports in a bridge instead of using the embedded switch, and move around MACs at the same time.
If someone else had the issue I would be willing to reach out to TAC, but for now I'll just use the embedded switch.
On a side note, since the bridge did work with devices connected at boot-up, I managed to do the speed test I was originally after, and it was quite decent: around 680-930 Mbps for a single TCP session, with kiss_kthread slurping CPU. So it looks like the AV/IPS/etc. blades are not hit (or use an optimized path) when packets just get switched rather than routed.
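The single-stream test was of this general shape (a sketch; iperf3 is an assumption here - any single-TCP-stream tool works the same way):
# On host 2 (192.168.1.8): start the server
iperf3 -s
# On host 1: run a 30-second single-stream TCP test through the bridge
iperf3 -c 192.168.1.8 -t 30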
It would be a pity if, after spending so much time on it, you did not report this issue to TAC!
That's true. I'll throw my SE/regional contact a mail and see what TAC/R&D are willing to do. The thing is, the device is no longer under support (still fully licensed till November, though), so TAC might not want to spend time on it.