Hello CheckMates!
I currently have a dilemma: we're attempting to replace a 5200 NGFW with a 5400 model, using the exact same configuration apart from the interface that connects to our Smart-1 appliance. (To visualize: the 5200 has 192.168.0.2, while the 5400 has 192.168.0.3; the rest of the configuration in GAiA, routing, and interfaces is the same.)
The topology is two-tier: a 5400 acts as the external firewall, and we're attempting to replace the existing 5200, which acts as the internal firewall, with a 5400.
When we cut over to the 5400, east-west traffic works as intended, but connectivity from the internal 5400 to the external 5400 is non-existent. Traceroutes terminate at the link between the internal and external firewalls.
Do take note that the necessary changes were made as well, such as routes and policies, to accommodate the IP address used by the 5400.
Could this be because the ARP tables are still holding the MAC address of the previous 5200? Or is it possibly something else? If you have had similar experiences, I'm hoping for your input, as this has me dumbfounded as to why it isn't working.
Hoping for the community's insight on this one.
Maybe a simple diagram would help us further, but I do agree, ARP sounds like a logical reason to me as well.
Andy
You can also run arp -a from expert mode to verify the entries seen.
Andy
Yes, I have the initial arp -a output extracted now and will just wait for the scheduled downtime for the in-line activity. Will get back to this thread in a few hours with an update 🙂
Unless someone's configured a static ARP entry you shouldn't be having an ARP issue - confirm that with 'ip neigh' or 'arp -a' from expert mode. If ARP is correct, start looking at tcpdumps to see if the traffic is leaving the internal gateway the right way and if it's arriving at the external gateway.
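As a concrete starting point for those captures, here is a minimal sketch. The interface name and peer address are assumptions (the thread doesn't give them), and the commands are composed and printed rather than executed, since tcpdump needs root and a live gateway:

```shell
#!/bin/bash
# Hedged sketch: TRANSIT_IF and EXT_FW_IP are placeholders, not values
# from the thread's actual configuration.
TRANSIT_IF="eth2"          # hypothetical interface facing the other firewall
EXT_FW_IP="192.168.0.1"    # hypothetical transit address of the external firewall

# Compose the capture command once; run it on each gateway separately.
CAPTURE_CMD="tcpdump -nni ${TRANSIT_IF} host ${EXT_FW_IP}"

echo "On the internal gateway (is traffic leaving via the right interface?):"
echo "  ${CAPTURE_CMD}"
echo "On the external gateway (is the same traffic arriving at all?):"
echo "  ${CAPTURE_CMD}"
```

If the packets leave the internal gateway but never show up on the external one, the problem is in between (ARP, switching, routing); if they arrive but get no reply, start looking at drops on the external gateway.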
We have ARP entries for port forwarding on the external firewall, but they're not related to the connectivity between the internal and external firewalls. I will still check the ARP tables after the cutover and see if they are updated; if not, I can just clear the ARP cache in clish, correct?
I just realized as well that you can change the MAC address in GAiA; would it also be plausible to copy the MAC address of the 5200 onto the 5400's equivalent interface?
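On the MAC question: GAiA clish does allow setting an interface's MAC address, so copying the 5200's MAC is technically possible. A hedged sketch with a placeholder interface name and address (the commands are printed rather than executed, since clish only exists on a GAiA system):

```shell
#!/bin/bash
# Placeholder interface name and MAC address -- substitute the 5200's
# real values. Printed, not executed, since clish only exists on GAiA.
IFACE="eth1"
OLD_MAC="00:11:22:33:44:55"

echo "clish -c \"set interface ${IFACE} mac-addr ${OLD_MAC}\""
echo "clish -c \"save config\""
```

That said, flushing the stale ARP entries on the neighboring devices is usually a cleaner fix than carrying an old appliance's MAC address forward.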
From my lab:
[Expert@CP-STANDALONE:0]#
[Expert@CP-STANDALONE:0]# ip neigh
172.16.10.128 dev eth0 lladdr 00:0c:29:2e:c1:7a STALE
172.16.10.111 dev eth0 lladdr 50:06:00:05:00:00 STALE
172.16.10.1 dev eth0 lladdr e8:1c:ba:4e:89:87 DELAY
172.16.10.199 dev eth0 lladdr 50:06:00:0e:00:00 STALE
172.16.10.126 dev eth0 lladdr 00:0c:29:27:56:d6 STALE
[Expert@CP-STANDALONE:0]# arp -a
? (172.16.10.128) at 00:0c:29:2e:c1:7a [ether] on eth0
? (172.16.10.111) at 50:06:00:05:00:00 [ether] on eth0
? (172.16.10.1) at e8:1c:ba:4e:89:87 [ether] on eth0
? (172.16.10.199) at 50:06:00:0e:00:00 [ether] on eth0
? (172.16.10.126) at 00:0c:29:27:56:d6 [ether] on eth0
[Expert@CP-STANDALONE:0]# ip -s -s neigh flush all
172.16.10.128 dev eth0 lladdr 00:0c:29:2e:c1:7a used 51318/51322/51317 probes 0 STALE
172.16.10.111 dev eth0 lladdr 50:06:00:05:00:00 used 43622/43621/43596 probes 1 STALE
172.16.10.1 dev eth0 lladdr e8:1c:ba:4e:89:87 ref 1 used 0/0/0 probes 1 DELAY
172.16.10.199 dev eth0 lladdr 50:06:00:0e:00:00 used 54871/43189/43165 probes 4 STALE
172.16.10.126 dev eth0 lladdr 00:0c:29:27:56:d6 used 13042/13037/13012 probes 1 STALE
*** Round 1, deleting 5 entries ***
*** Flush is complete after 1 round ***
[Expert@CP-STANDALONE:0]# arp -a
? (172.16.10.1) at e8:1c:ba:4e:89:87 [ether] on eth0
https://linux-audit.com/how-to-clear-the-arp-cache-on-linux/
It's weird: the ARP tables are updated correctly, but now the logs show IP spoofing as the drop cause.
@;994912320;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: sending single drop notification, conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912321;[cpu_0];[SIM-207388730];do_packet_finish: SIMPKT_IN_DROP vsid=0, conn:<20.20.0.4,23046,20.20.0.3,0,1>;
@;994912321;[cpu_0];[SIM-207388730];pkt_handle_no_match: packet dropped (spoofed address), conn: <20.20.0.4,65535,20.20.0.3,0,1>;
@;994912321;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: (0,0) received drop, reason: Anti-Spoofing, conn: <20.20.0.4,65535,20.20.0.3,0,1>;
@;994912321;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: sending packet dropped notification drop mode: 0 debug mode: 1 send as is: 0 track_lvl: -1, conn: <20.20.0.4,65535,20.20.0.3,0,1>;
@;994912321;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: sending single drop notification, conn: <20.20.0.4,65535,20.20.0.3,0,1>;
@;994912322;[cpu_0];[SIM-207388730];do_packet_finish: SIMPKT_IN_DROP vsid=0, conn:<20.20.0.4,65535,20.20.0.3,0,1>;
@;994912343;[cpu_0];[SIM-207388730];pkt_handle_no_match: packet dropped (spoofed address), conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912343;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: (0,0) received drop, reason: Anti-Spoofing, conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912343;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: sending packet dropped notification drop mode: 0 debug mode: 1 send as is: 0 track_lvl: -1, conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912343;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notification: sending single drop notification, conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912344;[cpu_0];[SIM-207388730];do_packet_finish: SIMPKT_IN_DROP vsid=0, conn:<20.20.0.4,23046,20.20.0.3,0,1>;
@;994912403;[cpu_0];[SIM-207388730];pkt_handle_no_match: packet dropped (spoofed address), conn: <20.20.0.4,23046,20.20.0.3,0,1>;
@;994912403;[cpu_0];[SIM-207388730];sim_pkt_send_drop_notificaPKT_IN_DROP vsid=0, conn:<20.20.0.4,23046,20.20.0.3,0,1>;
Take note that we didn't experience this on the previous setup with the 5200. We're currently adding the 20.20.0.0/24 segment to the topology.
In such a scenario, you can try making an exception for anti-spoofing, or simply set it to detect (or disable it), install policy, and test.
Andy
We performed this, but the spoofing drops still persist; troubleshooting at the moment 🙂
Are you allowed to do a remote session?
Andy
Unfortunately I can't. I don't have the spoofing issue now, but the connection from the internal to the external firewall is finicky, as all traffic is getting dropped by the cleanup rule. I'll update with my findings as soon as I have them.
That's fair. Hey, any way you can send a basic diagram? Just scribble something and point out EXACTLY what's failing.
Andy
We're really stumped right now. Here's the requested diagram:
We're encountering these types of logs when we cut over to our new setup:
Whereas when we go back to the working old topology, we see this:
We're still checking where we might have missed a configuration.
I totally see what @Timothy_Hall is saying.
Network defined by routes - The gateway dynamically calculates the topology behind this interface. If the network changes, there is no need to click "Get Interfaces" and install a policy.
One more thing that just came to mind: since 20.20.0.x is an Azure public range, MAKE SURE nothing on that end may have changed.
Andy
Do you have "network defined by routes" set on all your internal interfaces for anti-spoofing topology? If you do and are encountering anti-spoofing issues your routing table on the new gateway is wrong or missing something, full stop.
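One quick way to probe that on the new gateway: with "network defined by routes", anti-spoofing legitimacy follows the routing table, so you can ask the kernel which interface it would use for a host in the affected segment. A sketch (20.20.0.4 is taken from the drop logs above; the runnable line uses loopback so it works on any Linux host):

```shell
#!/bin/bash
# On the gateway, in expert mode, check where a host from the affected
# subnet would be routed:
#
#   ip route get 20.20.0.4
#
# If this resolves via the default route and the external interface rather
# than the internal one, the 20.20.0.0/24 route is missing and inbound
# packets from that subnet will be dropped as spoofed.

# Runnable stand-in on any Linux host: loopback always resolves locally.
ip route get 127.0.0.1
```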
Also please post a redacted log card of the anti-spoofing drop, and look closely at the "interface" part of the drop log. Next to the interface name where the anti-spoofing drop occurred, is the arrow pointing down (meaning inbound) or up (meaning outbound)? While anti-spoofing drops normally occur inbound, it is not commonly known that they can also occur outbound (up arrow) which will stymie your troubleshooting if you aren't aware of it.
For testing purposes you can disable all anti-spoofing enforcement "on the fly" as mentioned in my article below but if your routing is wrong, doing so will not make a difference and things will still not work:
Hello @Timothy_Hall,
We have resolved the spoofing issue by disabling anti-spoofing on the interfaces where the internal and external firewalls connect. Please see screenshots below:
Internal Firewall:
External firewall:
And here's what happens when we do the cutover:
It's so unusual: when we cut over to the 5400 (INTERNAL-FW1), the logs show traffic originating from public IP addresses, while normal traffic shows as expected (Internal_FWA).
I'm really lost right now. The only difference is that one IP address; everything else, even the routes, is the same.
We cannot upgrade FWA right now, as it's the only working internal firewall, so aside from the IP address the other difference is the version, but I don't think that would be the cause.
Not that we don't believe your config is the same, BUT... to be 100% positive, here is what I would personally do. Run the command below on BOTH firewalls and compare the outputs in Notepad++ once they are off the firewalls (you can give the file any name and send it to any directory; I usually use the hostname and current date).
from expert:
clish -c "show configuration" > /var/log/old_fw_config_June13_2024.txt
new fw:
clish -c "show configuration" > /var/log/new_fw_config_June13_2024.txt
Compare and see what you get.
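To make the comparison concrete, here is a runnable sketch using two tiny mock exports; the `set` lines are hypothetical examples, not the thread's real config. In practice you would diff the two files produced by the clish commands above.

```shell
#!/bin/bash
# Mock "show configuration" exports -- the contents are hypothetical.
cat > /tmp/old_fw_config.txt <<'EOF'
set interface eth1 ipv4-address 192.168.0.2 mask-length 24
set static-route 20.20.0.0/24 nexthop gateway address 10.0.0.1 on
EOF

cat > /tmp/new_fw_config.txt <<'EOF'
set interface eth1 ipv4-address 192.168.0.3 mask-length 24
EOF

# diff exits 1 when the files differ, so mask the exit code for scripting.
diff -u /tmp/old_fw_config.txt /tmp/new_fw_config.txt || true
```

In this mock, the diff immediately flags a missing static route for 20.20.0.0/24, which is exactly the kind of gap that trips anti-spoofing when the topology is defined by routes.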
Andy
Glad to hear that it works, but having anti-spoofing disabled long-term is not where you want to be. Having to do that to get things working would indicate that traffic is not routing the way you think it is, and disabling anti-spoofing is now possibly allowing ICMP redirects to "correct" the situation for you. Relying on this redirect mechanism to keep things running is notoriously unstable so watch out, here are the pages covering this from my last book (no I am not a skilled graphic artist, this is why I work in IT):
Hello @Timothy_Hall, I do agree with this. We'll resolve the spoofing issue properly once we're able to correctly link the internal and external firewalls. For now, we were able to successfully allow traffic from the internal firewall to the external firewall; some routes were adjusted, and that was the fix. But now we're encountering identity issues: for some reason we can't authenticate to our identity servers anymore, and AD Query shows bad credentials. We checked some KB articles, which pointed to DCOM hardening as the fix, but when we checked the AD server that had already been applied, and for some reason there's no other documentation related to it.
We will push through, though, and we will get this set up successfully; we just need to step back and review it, I think. We reverted to the previous working setup in the meantime so that identity-based blocking keeps working.
No worries. I would still run the clish -c commands I sent; comparing the two files would give you a good idea.
For IA, I would 100% go with the Identity Collector, see the post below.
Andy
https://community.checkpoint.com/t5/Security-Gateways/New-IA-Implementation/m-p/185851#M34184