Jerry
Mentor

"UDP checksum is incorrect" - 100s of IPv6 DROPs in fw.log - TP/IPS responsible?

hi folks

 

quick one

 

One of my customers just upgraded to R82 last night and found hundreds of drops in fw.log due to "UDP checksum is incorrect".

Knowing how UDP works, I presume that TP/IPS is to blame, but which protection is responsible for that?

any clues?
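
(a quick way to see the drop reason live, if anyone wants to dig in - a rough sketch, the grep patterns are just examples:)

fw ctl zdebug + drop | grep -i checksum
# or narrow it down to one noisy peer:
fw ctl zdebug + drop | grep <peer-ip>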

Jerry
89 Replies
DanCannon
Participant

Probably a customised version based on 4.8? R82 has been in the pipeline for a while, which could account for why this is not "up to date".

Jerry
Mentor

Wrong! Nothing in my setup is "customised" or non-standard, I'm afraid. Plus the NICs are completely standard for 4x10G extension cards, so...

Jerry
DanCannon
Participant

I meant customised by Check Point, not by you...

 

Jerry
Mentor

Why would that be the case, Dan? Sorry, but I was under the impression that unless you run a very sophisticated VSX or vSEC deployment on AWS or Azure etc., you would not count on CP to provide a custom Gaia build with private hotfixes. I've been working with Check Point software, toys and tools, since 1999, and frankly I don't believe CP would be willing to provide a "custom build" for a lab device running a totally standard setup with nothing unusual in it, except that not everyone can have a 60G bond, just as not everyone has a 10G bearer to the Internet.

Nonetheless, please note that despite the NIC errors my setup works perfectly fine: the layer 3 VLAN interfaces give me all the access I need and Gaia R82 runs just fine. What's confusing is that RX checksum errors appear all the time and ifconfig looks awful when you watch those counters climb...
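
(something like this makes the counter increase easy to spot - the interface name is just an example:)

watch -n 5 -d "ifconfig eth1-01 | grep -E 'errors|dropped'"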

 

Cheers!

Jerry
Jerry
Mentor

CURRENT state of play folks:

Screenshot 2024-10-24 101216.png

Jerry
Jerry
Mentor

@shais @PhoneBoy @Chris_Atkinson @Hen_Hertz @HeikoAnkenbrand 

guys, would you be able to look into this and recommend some steps?

we'd highly appreciate that.

Cheers

Jerry
the_rock
Legend

@Jerry Just an idea... I had done this before for a lab appliance, but it's possible they just made a one-time exception. What if you contact your local SE and ask them to open a TAC case about it? I mean, it's a GA version now and you do have a legitimate issue, so I hope they would be able to do this.

Andy

Chris_Atkinson
Employee

@Jerry Please share "show asset network" output for the affected systems.

I would also do as @the_rock suggests and work with your SE to raise the SR; this is the prerequisite for TAC raising a task, as requested by @shais.

CCSM R77/R80/ELITE
the_rock
Legend

The only reason I mentioned that in the first place is that I've learned from past experience with any firewall vendor's SEs: if you do as much work as possible in the lab and show you've tested so many things, it makes it way easier for them to go to TAC and raise the issue, so they don't get in trouble either.

Again, just my personal experience, but I'm sure others would say the same.

Andy

Jerry
Mentor

Thanks a lot Andy. What really confuses me here is that my layer 3 interfaces, which are in fact VLAN interfaces, have no errors whatsoever; only the physical ones and the bond constantly show the errors...

 

bond1.xx    Link encap:Ethernet  HWaddr 00:1C:7F:69:35:….
            inet addr:  Bcast:  Mask:255.255.255.248
            inet6 addr: fe80::21c:7fff:fe69:35bc/64 Scope:Link
            inet6 addr: ::1/64 Scope:Global
            UP BROADCAST RUNNING MULTICAST  MTU:9216  Metric:1
            RX packets:45129377 errors:0 dropped:0 overruns:0 frame:0
            TX packets:158694752 errors:0 dropped:1 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:12692876100 (11.8 GiB)  TX bytes:165629777672 (154.2 GiB)

bond1.xxx   Link encap:Ethernet  HWaddr 00:1C:7F:69:35:…….
            inet addr:…….. Bcast:.  Mask:255.255.255.248
            inet6 addr: fe80::21c:7fff:fe69:35b.......c/64 Scope:Link
            inet6 addr: :2/64 Scope:Global
            UP BROADCAST RUNNING MULTICAST  MTU:9216  Metric:1
            RX packets:179493414 errors:0 dropped:0 overruns:0 frame:0
            TX packets:65661620 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:169597084828 (157.9 GiB)  TX bytes:15481615109 (14.4 GiB)

 

Jerry
the_rock
Legend

Wait...what you sent shows zero errors on bond interfaces. Is this from today?

Andy

Jerry
Mentor

It is from the time of my post, mate!

Only the LACP bond shows errors, plus the physical line card eth1 & eth2 sub-interfaces show them too.

Jerry
the_rock
Legend

K, got it now. Let's see if someone from Israel responds tomorrow, hopefully. Though I know their working week is Sunday to Thursday.

Andy

Jerry
Mentor

Thanks Chris, here is what you've asked for:

Number of line cards: 2
Line card 1 model: CPAC-4-10F-B
Line card 1 type: 4 ports 10GbE SFP+ Rev 2.0
Line card 2 model: CPAC-2-10F-B
Line card 2 type: 2 ports 1/10GbE SFP+ Rev 2.0

Re. contacting my SE... that's pretty funny, as usually I'm the local SE here in London. Plus, I think I can manage raising the SR with TAC and providing the details accordingly. However, many times R&D has troubleshot things with me just based on CheckMates; I've had sessions with Max and other guys from R&D, and with Dameon himself, just because I found something odd in the code or hit issues I'd discovered myself in my lab.

I do appreciate everything you, Chris and Andy, have written here to me over the past day (PS: I don't work on Sundays, hence my late response), but I will do my best to raise the SR this coming week and see what I can do. Meanwhile, I'd really appreciate it if the remote session Shais mentioned could be arranged, as I believe I'm not the only one using these line cards and still (48h after the kernel changes) having output like this:

eth1-01 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:28025672 errors:1026 dropped:0 overruns:0 frame:0
TX packets:50260057 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:21604026930 (20.1 GiB) TX bytes:27115248171 (25.2 GiB)

eth1-02 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:58247965 errors:1056 dropped:0 overruns:0 frame:0
TX packets:29294958 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:40554860916 (37.7 GiB) TX bytes:24577395863 (22.8 GiB)

eth1-03 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:41292747 errors:985 dropped:0 overruns:0 frame:0
TX packets:32571941 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:38834661276 (36.1 GiB) TX bytes:24399133125 (22.7 GiB)

eth1-04 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:31904176 errors:1127 dropped:0 overruns:0 frame:0
TX packets:40098582 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25592005559 (23.8 GiB) TX bytes:38347740796 (35.7 GiB)

eth2-01 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:34940552 errors:960 dropped:0 overruns:0 frame:0
TX packets:24319101 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30260935093 (28.1 GiB) TX bytes:19319642529 (17.9 GiB)

eth2-02 Link encap:Ethernet HWaddr 00:1C:7F:69:35:BC
UP BROADCAST RUNNING SLAVE MULTICAST MTU:9216 Metric:1
RX packets:32174494 errors:789 dropped:0 overruns:0 frame:0
TX packets:49503751 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:28595431480 (26.6 GiB) TX bytes:47316927330 (44.0 GiB)
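
(in case it helps whoever picks this up, the driver-level counters behind those RX errors can be pulled per interface with something like this - a sketch, exact counter names depend on the driver:)

ethtool -S eth1-01 | grep -i -E "err|csum|crc"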

Jerry
the_rock
Legend

I hope someone from R&D can do a remote session with you.

Andy

Jerry
Mentor

Thanks buddy, I bet someone will; the only question is - when 🙂

Jerry
Jerry
Mentor

Hi mate, so I did some digging over the past week and found the driver that is supposed to support that line card:

Line card 1 model: CPAC-4-10F-B
Line card 1 type: 4 ports 10GbE SFP+ Rev 2.0

driver: ixgbe
version: 5.15.2 (V1.0.1_ckp)
firmware-version: 0x800000cb
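
(for reference, the driver info above comes from ethtool; the current offload settings can be checked the same way - the interface name is just an example:)

ethtool -i eth1-01
ethtool -k eth1-01 | grep -i checksum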

 

Apparently Gaia R82 has problems with this particular driver, which runs this NIC/line card.

I do have on the same appliance another LINE CARD:

Line card 2 model: CPAC-2-10F-B
Line card 2 type: 2 ports 1/10GbE SFP+ Rev 2.0

 

This one works perfectly fine and DOES NOT SHOW any signs of an issue between the kernel and the driver (at the driver level, no RX errors).

 

Conclusion: is there ANYONE who can try, or at least HELP, to take care of this issue for the benefit of ALL around the world who use the CPAC-4-10F-B and MAY or WILL run into the same story sooner or later?

 

I'd appreciate ANY effort, as it seems it isn't a big deal for anyone at CP... 😞

Jerry
the_rock
Legend

Man, I really hope someone in CP takes interest in this, as I agree 100%, it can help a LOT of people. I will keep researching on my end to see if there is anything I can find for you.

Always a pleasure m8.

Andy

the_rock
Legend

Hey bro,

What is the exact model you are testing this with? I contacted one of my colleagues, he's a Linux genius; let's see if we can figure something out...

Andy

Timothy_Hall
Legend

Here are the lengthy code diffs between the ixgbe driver used for R81.20 vs. R82:

https://github.com/intel/ethernet-linux-ixgbe/compare/v5.3.7...v5.15.2

After doing some keyword searches and reading through the change summaries I don't see a smoking gun.  Other than adding support for new kernels and "various bug fixes" the only remarks I saw that could be even tangentially related to this issue are:

  • Changed report aggregate of all receive errors using the existing receive error counter
  • Fix ethtool stats reporting

May want to have your Linux wizard look this over @the_rock.  Still think this is probably a cosmetic issue for "garbage" traffic that the firewall can't/won't process anyway.
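
One way to sanity-check the "cosmetic counter" theory - a sketch, and the counter names are ixgbe-specific so they may differ on your build:

# aggregate errors as reported to the stack
ip -s link show eth1-01
# detailed driver counters: if only checksum-related counters move while
# CRC/length counters stay flat, the frames themselves are probably fine
ethtool -S eth1-01 | grep -i -E "csum|crc|length|error"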

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
the_rock
Legend

Thanks @Timothy_Hall, will do, let's see what he says.

Andy

Jerry
Mentor

Cheers Tim. I was told just a few hours ago that my case went off to R&D via the SR and will be dealt with shortly. Hope it helps, as I do believe this "driver" needs to be fixed anyway. Layer 3 traffic works just fine (as I type this), but those RX ERRORS are at the lowest level, which I have no access to, hence I'm doing all in my power to influence the R&D process. Thanks to my good friend from CP (yes Ron, this is about you buddy!), who used to work with me and still does a great job in PreSales, this went off to R&D just today. Fingers crossed chaps, let's see what the outcome will be; hopefully sooner rather than later.

 

Cheers and thanks to everyone who cares!

Jerry
Jerry
Mentor

cp> show asset system
Platform: PH-30-00
Model: Check Point 15600
Serial Number:
CPU Model: Intel(R) Xeon(R) CPU E5-2630 v3
CPU Frequency: 2399.974 Mhz
Number of Cores: 32
CPU Hyperthreading: Enabled

Jerry
Jerry
Mentor

One more thing @Chris_Atkinson, here it comes:

Line cards
Model: CPAC-2-10F-B Type: 2 ports 1/10GbE SFP+ Rev 2.0
Model: CPAC-4-10F-B Type: 4 ports 10GbE SFP+ Rev 2.0

eth1-01
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CP
Vendor SN : MX827RH

SKU auto-detect failed. Manually retrieve SKU with this command:
grep SKU /etc/hcp/tests/system/hardware/transceiver/hcp_optic_info.json|grep -v "TBD"

eth1-02
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CP
Vendor SN : MX8274H

SKU auto-detect failed. Manually retrieve SKU with this command:
grep SKU /etc/hcp/tests/system/hardware/transceiver/hcp_optic_info.json|grep -v "TBD"

eth1-03
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CP
Vendor SN : MX901QG

SKU auto-detect failed. Manually retrieve SKU with this command:
grep SKU /etc/hcp/tests/system/hardware/transceiver/hcp_optic_info.json|grep -v "TBD"

eth1-04
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CP
Vendor SN : MX823RX

SKU auto-detect failed. Manually retrieve SKU with this command:
grep SKU /etc/hcp/tests/system/hardware/transceiver/hcp_optic_info.json|grep -v "TBD"

eth2-01
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CK
Vendor SN : AS91FSR

SKU auto-detect failed. Manually retrieve SKU with this command:
grep SKU /etc/hcp/tests/system/hardware/transceiver/hcp_optic_info.json|grep -v "TBD"

eth2-02
Identifier : 0x03 (SFP)
Transceiver type : 10G Ethernet: 10G Base-SR
Length (50um) : 80m
Length (62.5um) : 30m
Length (OM3) : 300m
Vendor name : FINISAR CORP.
Vendor PN : FTLX8571D3BCV-CK
Vendor SN : AS91FGJ

Jerry
Jerry
Mentor

After 48h of no errors... here we go again 😞

Lovely, isn't it folks?

Jerry
Jerry
Mentor

Folks, here is the picture after reboot (seems Tim's command did the trick, at least for now; I'll keep watching the counters):

Screenshot 2024-10-23 161012-1.png

Jerry
Jerry
Mentor

I wonder why Gaia R82 ships with ethtool version 4.8, whilst on the market that very same tool is currently at least at 5.16...

Jerry
Jerry
Mentor

Hi Shais, any updates from R&D, do you happen to know? Or can you check with them please?

I was told 2 weeks ago that R&D is now taking care of that issue at the "driver level".

Haven't heard anything back since then. Would be awesome to know if there is a viable solution...

Cheers

Jerry
PhoneBoy
Admin

Since it would be an update to drivers, I assume it would have to come via a hotfix (Jumbo or otherwise).
Haven't seen jumbos for R82 yet.
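
(once an R82 Jumbo is out, the installed take can be confirmed on the gateway with e.g.:)

cpinfo -y all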

the_rock
Legend

I heard hopefully by mid-January, so let's hope that's indeed true 🙂

Andy

