Hi Everybody,
I'm looking for input on something I ran across recently. Our organization runs a lot of VPNs, both with external agencies and internally. Because all of our internal communication runs over VPNs, we have had TCP MSS clamping configured for a few years now to combat fragmentation and slowdowns. From the documentation we read, you do it in GuiDBEdit by setting "fw_clamp_tcp_mss_control" to true on the gateway object and then setting "mss_value" on each individual interface of the gateway that carries traffic into or out of a VPN tunnel. We calculated what we thought was an appropriate MSS value, put it on the internal interfaces in GuiDBEdit, and it seems to have worked; we haven't had many fragmentation issues since.
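For reference, this is roughly the back-of-the-envelope math we used to pick the value. The overhead numbers are only illustrative; the real ESP overhead depends on the encryption/integrity algorithms in use and on whether NAT-T adds a UDP header:

  1500   interface MTU
  -  20   outer IP header (tunnel mode)
  - ~60   ESP header, IV, padding, and ICV (algorithm dependent)
  -  20   inner IP header
  -  20   inner TCP header
  -----
  ~1380   maximum TCP payload, which we then rounded down a bit further for safety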
Jump forward to last week. We've had trouble getting high throughput on a particular VPN we use for SAN replication, so I've been playing around with different settings in the VPN community and on the two gateways. One thing I did was turn off IP compression in the VPN community. That roughly doubled the VPN throughput and pretty much fixed the problem by itself. We've always had IP compression turned on for our internal VPNs and never really thought about how it might be slowing them down.

The other change I made was to reset the mss_value on the SAN interfaces of the source and destination gateways back to 0 in GuiDBEdit. I did that because we had turned the MTU on the SANs themselves down to 1280, so I figured there shouldn't be any fragmentation in the VPN if the SANs were negotiating a lower MSS on their own. I then ran "tcpdump -i [interface] -vvv | grep mss" from the CLI of the two gateways and found that the SANs were not starting with a lower MSS at all; they were still advertising 1460 (with a 1280 MTU I would have expected something closer to 1240). Over the VPN tunnel, though, one or both of the gateways turned the MSS down to 1387. We've never set the MSS value on any interface to 1387, so I'm not sure where that value came from, but it looks like an appropriate MSS to account for all of the headers that get added to VPN traffic. In other words, it looked like one or both of the gateways were doing automatic TCP MSS adjustment on the traffic. You can see the change in the initial MSS value for the new connection in the captures below.
Source Gateway:
[sourceIP].64484 > [destinationIP].iscsi-target: Flags [S], cksum 0x754f (correct), seq 2506933430, win 65535, options [mss 1460,nop,wscale 2,nop,nop,TS val 0 ecr 0], length 0
[destinationIP].iscsi-target > [sourceIP].64484: Flags [S.], cksum 0x7fb9 (correct), seq 946781532, ack 2506933431, win 65535, options [mss 1387,nop,wscale 5,nop,nop,TS val 0 ecr 0], length 0
Destination Gateway:
[sourceIP].64484 > [destinationIP].iscsi-target: Flags [S], cksum 0x7598 (correct), seq 2506933430, win 65535, options [mss 1387,nop,wscale 2,nop,nop,TS val 0 ecr 0], length 0
[destinationIP].iscsi-target > [sourceIP].64484: Flags [S.], cksum 0x7fb9 (correct), seq 946781532, ack 2506933431, win 65535, options [mss 1387,nop,wscale 5,nop,nop,TS val 0 ecr 0], length 0
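For anyone who wants to reproduce the capture, here is a slightly tighter version of what I was running. It only grabs packets with the SYN bit set (so SYN and SYN-ACK), which means the grep isn't needed; "eth1" is just a placeholder for whichever interface you're watching, and 3260 is the iSCSI target port:

tcpdump -ni eth1 -vvv 'tcp[tcpflags] & (tcp-syn) != 0 and port 3260'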
Having the TCP MSS adjustment feature work automatically has been a pipe dream of mine for a while: the gateways would detect that a new TCP connection is going into a VPN and set the MSS themselves to account for the VPN headers, and for traffic that isn't going into a VPN they would leave the MSS at whatever the source machine set. (I've put a rough sketch of the behavior I'm imagining after my questions below.) Based on the Check Point documentation we read on setting up TCP MSS adjustment, we thought that leaving mss_value at 0 meant the gateway simply would not do MSS adjustment on that interface, which is why you had to set mss_value to something other than 0. Now I'm questioning whether 0 actually means "set the MSS automatically." So, I have a few questions:
- Am I making this up, or is automatic TCP MSS adjustment an actual thing? If not, Check Point PLEASE make it so.
- Is this a new feature that was introduced in one of the R80 releases? Of the two gateways where I saw this, one is running R80.40 and the other R80.30 with the 3.10 kernel. I usually comb through the release notes, and I've never seen anything about automatic TCP MSS adjustment.
- If it's an actual feature, what are the conditions for how/when it works? Does it work on encrypt (source gateway), decrypt (destination gateway), or both? Does it work with IP compression turned on in the VPN community, or does that need to be turned off for it to work?
- Does it work on the Gaia Embedded SMB (now Quantum Spark?) gateways? We have quite a few 1500 appliances, so if automatic TCP MSS adjustment is a reality, I really need it to work on them as well.
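To be concrete about what I mean by "automatic," this is roughly the decision I'm hoping the gateway makes for every SYN it forwards. It's only an illustration; the function, names, and overhead numbers are mine, not anything from Check Point code or documentation:

# Illustration only: the clamping decision I'm hoping the gateway makes on its own.
IP_HDR = 20    # inner IPv4 header
TCP_HDR = 20   # inner TCP header, no options

def clamp_mss(advertised_mss, egress_mtu, goes_into_vpn, vpn_overhead):
    """Return the MSS the gateway should write into a forwarded SYN."""
    if not goes_into_vpn:
        # Leave non-VPN traffic alone: keep whatever the source machine asked for.
        return advertised_mss
    max_mss = egress_mtu - vpn_overhead - IP_HDR - TCP_HDR
    return min(advertised_mss, max_mss)

# A SYN advertising 1460, entering a tunnel with ~73 bytes of per-packet overhead
# on a 1500-byte MTU interface, would be clamped to 1387 -- the value I saw.
print(clamp_mss(1460, 1500, True, 73))   # 1387
print(clamp_mss(1460, 1500, False, 73))  # 1460, left untouched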
I'm looking forward to hearing everyone's feedback on this. Automatic TCP MSS adjustment would make my life a whole lot easier than figuring out which interfaces need mss_value set and going into GuiDBEdit every time we add a new interface. On top of that, the interface objects change whenever we replace a gateway and reset SIC, so I then have to go back into GuiDBEdit and set mss_value on all of the interfaces all over again.
Thanks everyone!
Wilson