Duane_Toler
Advisor

Remote Access with multiple external interfaces - topology download

Yep, yet another post about Remote Access VPN with multiple external interfaces. No, not ISP redundancy. Yes, I did the Link Selection; yes, I checked the box in the Office Mode section; yes, I read the VPN guide documentation; yes, I did the SK digging, reading, and CheckMates posts, and even put it in a VM test, R80.30 with JHF 236. Yes, it actually works for *IPsec and NAT-T* packets.

However, the problem is port 443 for site update and topology download. That part doesn't work. The client starts to connect, the VPN debug gets all the way to Main Mode packet 4, but it fails at the XAuth step. Yes, I have the user defined and in the right group, and when I set the gateway default route to face back toward my client, it fully connects just fine.

Topology-wise, I have eth0 as my "internal network", eth1 is my regular default gateway out, and eth2 is my "other" external interface (without the default route). I'm trying to connect to the 2nd external interface on eth2. The breakdown is when eth1 is the default gateway: the firewall is writing packets with the source IP of eth2 (my connecting interface, which is the interface I chose in Link Selection's "use this IP from the topology table"). I tried the GUIDBedit hacks, but those didn't work either.

When the client makes a TCP SYN connection to the firewall at eth2 on port 443, the gateway replies out eth1 (with eth2's IP as source) back to the client with a TCP SYN-ACK.

So I did a nasty, stupid thing and put in a Linux VM as a test, made it a router back to my client (yep, it's stupid; yes, I got martians, but I disabled rp_filter on it; yes, it's asymmetric routing, but packets got back to my client, albeit very circuitously; I don't care, it worked). The client now gets the TCP SYN-ACK, replies with a TCP ACK, the topology gets downloaded, and the client authenticates.
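For anyone recreating that throwaway test router, the knobs involved look roughly like this (a sketch, not a recommendation; the interface name matches this lab, and your setup may need different interfaces covered):

```shell
# Sketch of the throwaway Linux test-router setup:
# forward packets between client and firewall, and relax
# reverse-path filtering so the asymmetric replies aren't dropped.
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth2.rp_filter=0   # interface name from this lab
# Optional: log the martians that rp_filter would otherwise drop silently.
sysctl -w net.ipv4.conf.all.log_martians=1
```

With rp_filter left at its strict default (1), the kernel would discard packets arriving on an interface that isn't the one it would use to reach the source, which is exactly the asymmetric case described above.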

From that point onward, connections to the VPN domain are just IPsec NAT-T. THIS part works correctly; the firewall gets the inbound IPsec NAT-T connection on eth2 and responds on eth2 (the fw ctl zdebug is spooky, too: it's writing packets on eth2 to the next-hop MAC address that originally sent the packet, even though the default gateway is eth1! Nice trick!).

For site-to-site IPsec VPN, everything would work normally and correctly from the start; none of the above is a problem.  However, it's a problem for Remote Access VPN.  Without doing ISP redundancy (ick; this config is going to be applied later to an HA cluster), I can't see how this is going to work.  I tried the probing methods, too, and those didn't work. 

Has anyone ever gotten this to work?  The key here is the default gateway is NOT the interface on which connections would terminate.  Again, the issue is with port 443 topo/xauth, not IPsec.

28 Replies
PhoneBoy
Admin

So is there a route back to your client on the gateway through eth2?
Or is it on the same subnet?

Duane_Toler
Advisor

No, not anymore, and it normally wouldn't be (the client reaches the gateway through normal routing). The gateway *could* reach the client back on eth2 (and it does for IPsec and NAT-T packets, by writing a packet with the destination MAC address of the host that sent it). Normally you'd think "well, silly goose, that's your problem" (and normally I'd agree). For a real live gateway with multiple external interfaces (no dynamic routing) and one static default gateway (out eth1), the other external interface won't have a default route (because... default, by definition). This 2nd external interface is where VPN connections will terminate.

But again, it works for IPsec and NAT-T. The new packet on the wire is the MAC of the ingress next-hop that sent the frame (I even see this MAC address tracked in the zdebug with "-m VPN + all"; that's really clever!)

 

Here, the host Hades is *my* local LAN router, not the Check Point firewall (eth2 just happens to be the same).  10.0.3.236 is my client.  10.233.31.80 is the firewall's eth2.    Hades eth2 MAC addr ends in "b0:4b".  Hades is between my client and the firewall. 

[root@hades ~]# ifconfig eth2

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 10.233.31.30  netmask 255.255.255.0  broadcast 10.233.31.255

        ether 00:0c:29:18:b0:4b  txqueuelen 1000  (Ethernet)

[root@hades ~]# tcpdump -nni eth2 host 10.0.3.236 -e

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes

15:01:27.652944 00:0c:29:18:b0:4b > 00:0c:29:7a:76:63, ethertype IPv4 (0x0800), length 118: 10.0.3.236.54508 > 10.233.31.80.4500: UDP-encap: ESP(spi=0x1120e857,seq=0x31), length 76

15:01:27.677150 00:0c:29:7a:76:63 > 00:0c:29:18:b0:4b, ethertype IPv4 (0x0800), length 118: 10.233.31.80.4500 > 10.0.3.236.54508: UDP-encap: ESP(spi=0x59f1983d,seq=0x23), length 76

 

When it came to topology and xauth... crickets (well, as far as "doing the right thing" is concerned): [this was earlier in the day, hence the time difference]

[det@hades ~]$ sudo tcpdump -nni eth2 host 10.0.3.236

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes

10:46:08.183726 IP 10.0.3.236.49674 > 10.233.31.80.443: Flags [S], seq 3336228268, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875756382 ecr 0,sackOK,eol], length 0

10:46:12.186856 IP 10.0.3.236.49674 > 10.233.31.80.443: Flags [S], seq 3336228268, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875760382 ecr 0,sackOK,eol], length 0

10:46:12.957761 IP 10.0.3.236.49676 > 10.233.31.80.443: Flags [S], seq 2415593851, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875761152 ecr 0,sackOK,eol], length 0

>>> here is where i noticed that NAT-T was actually working from the beginning:

10:46:12.960337 IP 10.0.3.236.55873 > 10.233.31.80.4500: NONESP-encap: isakmp:

10:46:12.977206 IP 10.233.31.80.4500 > 10.0.3.236.55873: NONESP-encap: isakmp:

>> but more crickets for XAuth/topology:

10:46:13.031651 IP 10.0.3.236.49676 > 10.233.31.80.443: Flags [S], seq 2415593851, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875761223 ecr 0,sackOK,eol], length 0

10:46:13.066552 IP 10.0.3.236.49676 > 10.233.31.80.443: Flags [S], seq 2415593851, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875761253 ecr 0,sackOK,eol], length 0

10:46:13.099089 IP 10.0.3.236.49676 > 10.233.31.80.443: Flags [S], seq 2415593851, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875761283 ecr 0,sackOK,eol], length 0

10:46:13.128996 IP 10.0.3.236.49676 > 10.233.31.80.443: Flags [S], seq 2415593851, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875761313 ecr 0,sackOK,eol], length 0

10:46:18.587032 IP 10.0.3.236.64964 > 10.233.31.80.4500: NONESP-encap: isakmp: child_sa  #67[IVR]

10:46:18.588131 IP 10.233.31.80.4500 > 10.0.3.236.64964: NONESP-encap: isakmp:

10:46:20.250705 IP 10.0.3.236.64964 > 10.233.31.80.4500: NONESP-encap: isakmp: phase 1 I ident

10:46:20.252367 IP 10.233.31.80.4500 > 10.0.3.236.64964: NONESP-encap: isakmp: phase 1 R ident

10:46:20.252995 IP 10.0.3.236.49674 > 10.233.31.80.443: Flags [S], seq 3336228268, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 875768383 ecr 0,sackOK,eol], length 0

10:46:20.259071 IP 10.0.3.236.64964 > 10.233.31.80.4500: NONESP-encap: isakmp: phase 1 I ident

10:46:20.306897 IP 10.233.31.80.4500 > 10.0.3.236.64964: NONESP-encap: isakmp: phase 1 R ident

(of course ISAKMP couldn't get far because xauth never completed...)

 

Sooooo, das boog?

 

PhoneBoy
Admin

Ok, now I get it. 
Might be a bug, might also be intended behavior.
Probably requires a TAC case to be sure.

Duane_Toler
Advisor

Aww boo 😞

 

I opened a case for it. 😕 

 

I read through the "-m VPN + all" debug again and I see another section where, for NAT-T, it does the force-reroute of the packet out the Link Selection non-default interface (correctly).  However, there's nothing close to that for the port 443 packets.  I'm guessing those packets are handed off to the FW worker instance for authentication, etc.  I ran a kdebug of "-m fw" and see where it's choosing to write packets per the routing table (even though I have that option set to "reply from same interface").
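For reference, the kernel debug sequence behind those "-m VPN + all" and "-m fw" captures typically looks something like this (the topic list after "-m fw +" here is an illustrative choice, and buffer size/options can vary by version):

```shell
# Typical Check Point kernel debug sequence (options may vary by version):
fw ctl debug 0                     # reset any previous debug state
fw ctl debug -buf 32000            # allocate the kernel debug buffer
fw ctl debug -m VPN + all          # VPN module, all topics
fw ctl debug -m fw + drop conn     # fw module topics (illustrative choice)
fw ctl kdebug -T -f > kdebug.txt   # stream the buffer, with timestamps
# ... reproduce the client connection, then stop with:
fw ctl debug 0
```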

 

So..., case open, debug info sent along with it. Now I wait. 🙂

the_rock
Legend

Ok, just an idea... what does it show if you run "ip route get" followed by the IP address you want to check, on the firewall itself?

Duane_Toler
Advisor

I see the addresses in the output (I presume this is what you mean):

 

local 10.233.31.80 dev lo  src 10.233.31.80 

    cache <local>  mtu 16436 advmss 16396 hoplimit 64

 

No, that IP is not on the loopback device; I believe the route entry here just means to accept and process connections to that address (and any other locally-connected ones).
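To make that concrete, here's the full check the suggestion above implies (addresses from this thread; on a stock Linux/Gaia kernel the remote client should resolve via the default route, while the gateway's own address hits the local table):

```shell
# Which route/interface would the kernel pick for the remote client?
ip route get 10.0.3.236
# And for the gateway's own eth2 address? (expect a "local" table entry)
ip route get 10.233.31.80
# The local table itself lists every locally-owned IP against dev lo:
ip route show table local
```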

MartinTzvetanov
Advisor

Since which JHF did you start observing this issue? I'm pretty sure I hit the same issue trying to patch to 236 on clusters with multiple external interfaces, but didn't have enough time to dig into the traffic.

Duane_Toler
Advisor

I haven't tried this configuration on an earlier version. I was on R80.30 JHF 228 and it didn't work; I updated to R80.30 JHF 236 and it still didn't work. I have a TAC case open and I'm going to follow up on that either this afternoon or tomorrow (time permitting).

MartinTzvetanov
Advisor

In my situation it was working with 191, and after patching to 228 or 236 it broke.

Duane_Toler
Advisor

Oh wow, that's "exciting". I only had a "strong belief" from earlier versions (R77.30, I mean) that it worked "correctly", even though I never independently verified and monitored it. I also knew it worked with ISP Redundancy enabled, but admittedly I never closely monitored the exact path. Way back when, in R75.40, I also had it working and tested failover/outages, but that was also with ISP Redundancy. So for me, this specific configuration is wholly untested, I admit. Good to know that you had it working in JHF 191, too.

 

I need to test it with R80.20 this week; it's on my short-list for the week now, so I'll find out in a few days when I fire up a VM.

MartinTzvetanov
Advisor

I don't have ISP redundancy, but have 2 ISPs and BGP routing and total 10 external interfaces.

Duane_Toler
Advisor

Sounds similar to what I have here, except not 10 ext. interfaces. 🙂  Only 2 here, but also no ISP redundancy involved.  I'll update the thread with what I find.  My TAC case is now "assigned" to someone, so I'm waiting on that person to review and write back.

the_rock
Legend

Yes, please let us know what they say; I'm very curious. Honestly, I can't say I've ever seen this issue in R80.40.

Duane_Toler
Advisor

I got an R80.20 VM with JHF 183 [matching a customer's install right now].  This version works as I initially described, which is "it doesn't work without asymmetric routing".  😕   I'm going to step through JHF 187 next, then 190 after that to see if either of those work.

After that, I'll look into updating to R80.30 with the JHF 191 to see if it works there, to match what you said [Martin].

As for my TAC case, today was crickets (but I was also busy elsewhere myself and didn't have a good opportunity to call in). My "need" from TAC right now, though, is for them to review the info and ask internally about what is needed to make this capability work correctly (versus a phone call to re-state everything, which is annoying).

MartinTzvetanov
Advisor

My customer's scenario had been working from R75.x until R80.30 JHF 236. It's obviously related to the topology and something else specific, but I'm not sure whether I'll have the opportunity to dig into it.

Duane_Toler
Advisor

Yeah R75-ish is when I last actively used this myself.  I just had an interesting effect as I was slowly adjusting the knobs.  I turned the gateway into "probing method" for everything (see screenshot).  In CLISH, I had the interfaces in their natural state (main address having the default route, not the Link Selection interface).  I connected the client [yeah still asymmetric routing here] to update topology. After that, the client immediately rolled over to Visitor Mode only (no IPsec, no NAT-T; just TCP 443 only), and the client changed its own configuration to be that of Main Address in the gateway properties! NOT one of the Link Selection external interfaces... whoa... wicked.

I then downed the "Main Address" interface [with the default route on it], but the client did not roll back over to the Link Selection interface.  It's stubbornly only connecting to the Main Address, completely ignoring Link Selection and probing.   So it still doesn't behave as it should.  

*For Me*, this is what I want; to be able to "move" the clients to a new Main Address without pushing out new client configurations.  Regardless, Link Selection still isn't working.

I'll keep poking before updating HFAs.  I want to get an idea of how this is behaving before changing conditions. 🙂 

Duane_Toler
Advisor

Well now... here in R80.20, JHF 187/188 show the same broken behavior as JHF 183. With Jumbo 190, things changed slightly, but it's still not correct.

Now, when it's all probing method, the client does NAT-T to the gateway on the Main Address interface instead of Visitor Mode.  That much is slightly better.  Regardless, the client never rolls over to a Link Selection interface.  I even went through GUIDBedit a bit, as in sk32229, but that didn't change anything.  TAC also came back with that same SK, but it doesn't seem to have any effect in any way.

Other than NAT-T instead of Visitor Mode, the remaining behavior is still the same broken behavior. 😕

I'll try R80.20 JHF 202 (ongoing take) just for fun, but then I'll take it to R80.30 < JHF 191 to see how that goes.

Wolfgang
Authority

Hello folks,

I read this post and maybe I don't understand, but the described behaviour looks normal to me. Let me explain...

If a client connects to your gateway on the IP address of eth2, the answering packet will be routed back via the default route, or via a specific route for the client IP. If the default route goes through eth1, then this is normal behaviour.
The solution for this is ISP redundancy; with this feature enabled, an answer packet is routed back through the same incoming interface.

That's my understanding. Now the VPN part... You can follow Link Selection for VPN and define every available interface as a destination, and the choice of the route back and the source IP needs to be configured. Normally these apply to remote access VPN.

Sometimes you want different settings for site-to-site and remote access VPN. This can be enabled via GUIdbedit under

Link Selection for Remote Clients

Set "apply_resolving_mechanism_to_SR" to false, and with the setting of "ip_resolution_mechanism" you can define all your needed interfaces.

Don't forget to set "Support connectivity enhancement for gateways with multiple external interfaces" in the Office Mode section.

Duane_Toler
Advisor

The problem is that the gateway isn't responding to XAUTH and topology downloads on any interface except either A) the main address (such as when the probing method is used, as I just found), or B) asymmetrically, with a combination of a Link Selection interface and the non-LS interface holding the default route. NAT-T traffic is being emitted correctly on the Link Selection interface regardless of the default route (and this is the expected and desired behavior: link-level re-writing of packets). Instead, the gateway is writing packets out the default-route interface, yet with the source IP of the chosen Link Selection interface. 😕 That's worse; it fools you into thinking "it works" until you down the default-route interface (with the Main Address). Then it doesn't work at all, unless you have a floating static route active.
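For context, a "floating" static route in Gaia clish would look something like the sketch below: two default next-hops with different priorities, so the second only takes over when the first goes away. The next-hop addresses here are invented for illustration, and the exact priority syntax should be checked against your Gaia version:

```shell
# Sketch only -- next-hop addresses are invented for illustration.
# Lower priority value = preferred; the priority-2 entry floats in
# as the default route if the priority-1 next-hop becomes unreachable.
set static-route default nexthop gateway address 203.0.113.1 priority 1 on
set static-route default nexthop gateway address 198.51.100.1 priority 2 on
save config
```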

 

The correct behavior is that the gateway should only respond to the connection in and out of the chosen Link Selection interface and it shouldn't need ISP redundancy to do it (although ISP Redundancy does lean on probing and the cpisp_update script to override the default route).  This worked before R80; both Martin and I had it working. 

That SK appears to have no effect now, either.  I just went through it with several values in GUIDBedit and none of them made the client work as expected.

Wolfgang
Authority

Thanks @Duane_Toler, now I understand. We are using a similar environment: a gateway with two external interfaces, each NATed behind another firewall vendor's device. With the known GUIdbedit settings we configured the real external public IPs for the remote access clients. This works without problems, but we had two default routes with different priorities to overcome the loss of the default route. I never checked the way the packets flow.

Any news from TAC regarding this issue?

Duane_Toler
Advisor

No, nothing useful from TAC yet; just asking for more info and "a screenshot showing it from the wrong interface". I collected that and the usual bundle of cpinfo, kdebug, and VPN debug showing the error. The kdebug for "-m VPN + all" shows the NAT-T packets being "rerouted" to the correct interface [it literally says that]: "forceThroughIf" #2, which is the Link Selection interface. This is the really cool part, although curiously, I don't see messages about tracking the next-hop MAC address like I did earlier. Huh.

 

@;391850;30Jun2021 14:50:33.449947;[cpu_3];[fw4_0];IKE_Handling_Outbound: =======>>> Outgoing packet;

@;391850;30Jun2021 14:50:33.449949;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo: outgoing_single_IP = 10.233.31.80 and peer_IP = 10.0.3.236;

@;391850;30Jun2021 14:50:33.449952;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo: pIKEInfo printout for cookie = <I:[927cc8f6, 914d9f9e], R:[8b9edae6, a3002ee7]>;

@;391850;30Jun2021 14:50:33.449953;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_stateRestored = 1;

@;391850;30Jun2021 14:50:33.449955;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_myAddress = 10.254.0.1;

@;391850;30Jun2021 14:50:33.449956;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_myPort = 4500;

@;391850;30Jun2021 14:50:33.449958;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_peerAddress = 10.0.3.236;

@;391850;30Jun2021 14:50:33.449959;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_peerPort = 60206;

@;391850;30Jun2021 14:50:33.449960;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_throughIf = 3;

@;391850;30Jun2021 14:50:33.449962;[cpu_3];[fw4_0];IKE_Utils_FillIKEInfo:            pIKEInfo->m_NATT_probing = 0;

@;391850;30Jun2021 14:50:33.449964;[cpu_3];[fw4_0];IKE_Handling_Outbound: Restoring state for IKE -> peer = 10.0.3.236, my = 10.254.0.1;

@;391850;30Jun2021 14:50:33.449968;[cpu_3];[fw4_0];IKE_State_RestoreState: First Chance => State restored for IKE packet with cookie = <I:[927cc8f6, 914d9f9e], R[8b9edae6, a3002ee7]>;

@;391850;30Jun2021 14:50:33.449969;[cpu_3];[fw4_0];IKE_Flags_Control_Outbound_SourceAddress: Will use the restored address;

@;391850;30Jun2021 14:50:33.449971;[cpu_3];[fw4_0];IKE_Flags_Control_Outbound_SourceAddress: Will set source address to be 10.233.31.80;

@;391850;30Jun2021 14:50:33.449972;[cpu_3];[fw4_0];IKE_Flags_Control_Outbound_ForcingRoute: Will  reroute the packet (1);

@;391850;30Jun2021 14:50:33.449974;[cpu_3];[fw4_0];PacketRoute_Init: pPacketRoute printout;

@;391850;30Jun2021 14:50:33.449975;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_reroutePacket: Yes;

@;391850;30Jun2021 14:50:33.449977;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_pChain: ffffc200346df878;

@;391850;30Jun2021 14:50:33.449979;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_kRouteOpaque: ba63f000;

@;391850;30Jun2021 14:50:33.449981;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_oldSourceAddress: 10.254.0.1;

@;391850;30Jun2021 14:50:33.449982;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_newSourceAddress: 10.233.31.80;

@;391850;30Jun2021 14:50:33.449984;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_protocol: 17;

@;391850;30Jun2021 14:50:33.449985;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_forceThroughIf: 2;

@;391850;30Jun2021 14:50:33.449987;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_actualThroughIf: 3;

@;391850;30Jun2021 14:50:33.449988;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_nextHopIP: 0.0.0.0;

@;391850;30Jun2021 14:50:33.449990;[cpu_3];[fw4_0];PacketRoute_Init:       pPacketRoute->m_destinationAddress: 10.0.3.236;

 

But for the port 443 stuff, no such thing:

  

Packet comes in on "interface #2" (my Link Selection interface):

@;391804;30Jun2021 14:50:32.375689;[cpu_2];[fw4_1];Before POST VM: <dir 0, 10.0.3.236:49526 -> 10.233.31.80:443 IPP 6> (len=52) TCP flags=0x10 (ACK), seq=2359198778, ack=2905463810, data end=

2359198778 (ifn=2) (first seen) (looked up) ;

  

But the reply leaves on "interface #3" (the Main address, with default route):

@;391804;30Jun2021 14:50:32.375822;[cpu_2];[fw4_1];Before VM: <dir 1, 10.233.31.80:443 -> 10.0.3.236:49526 IPP 6> (len=52) TCP flags=0x10 (ACK), seq=2905463810, ack=2359198778, data end=29054

63810 (ifn=3) (first seen) (looked up) ;

 

We're seeing 2 different processes in action here, right?  VPND and fw kernel?  VPND consumes the packet on port 443, but fwk (with help of SecureXL) handles the NAT-T traffic, right?

 

Duane_Toler
Advisor

Well, my last message appears to not be here; dunno if it got zapped when my browser nearly crashed.  Oh well.

 

I just got an unsolicited message from a TAC manager that my case is being automatically escalated and transferred within the VPN team.  oh boy.  Must've indeed found a bug. 🙂  I uploaded cpinfo, VPN debug, and fw kdebug for them.

Duane_Toler
Advisor

Alrighty, here's the answer, direct from R&D via TAC Tier3:

* It's not _quite_ a bug, but yeah R&D knows about this...  

* XAUTH/topology is intended to work only on the "default route" interface.   All these effects seen so far are just "side effects". XAUTH/topology in whichever process (vpnd, IIRC) will not respond out any other interface, regardless of Link Selection.

* This is strictly a gateway-side problem, not a client-side issue.

* NAT-T does follow Link Selection and works just as we expect/intend

* If enough of us call TAC and request this to be fixed, they will.  They have the hooks already there (as we see in GUIDBedit), but the code isn't doing it.  This almost worked back in the R65 era, but they didn't finish the work.  Many of the options we see in GUIDBedit just aren't "active", no matter how much you configure it. 🙂

 

So, there we have it. It's not a great answer, but that's where it stands. Whatever we had seen working previously had to be something involving asymmetric routing, or the default route being moved when we weren't looking. 😕

Duane_Toler
Advisor

R80.20 JHF 202 was the same as 190.  Just went to R80.30 (no JHF yet).  Funny, it had the client switch to Visitor Mode just like R80.20 JHF <190.  🙂  But that does make some sense, as "JHF 0" is based on an older R80.20 JHF.  I'm going to do R80.30 JHF 155 which was the last GA before 191, to see if that works. 

@MartinTzvetanov did I understand correctly, that you say this worked in R80.30 JHF 191?

Duane_Toler
Advisor

Well, I thought I had something, but nope.  R80.30 JHF 155 still doesn't do the right things.  Even going through GUIDBedit for the RA link selection options still doesn't make it go.  Sheesh.  As an irritant, the probing method options still use Visitor Mode, so that patch still isn't integrated.  JHF 191 appears to have been pulled quietly by Check Point; it's not available for import anymore.  I got JHF 180, the closest they had.  I'll do that next.  This is quite a bug...

More annoying, now, is that I've managed to trip some other stupid bug in the Endpoint VPN client such that it won't upgrade or uninstall itself (to test any possible client-side errors).  "Error 26702"; and before anyone asks, no I'm not on E81.10 or older; I'm on E84.10 which is past that issue.  What a mess. 😕 

MartinTzvetanov
Advisor

Hello,

It worked fine under R75.x, R77.x, and R80.30 JHF 191; after this JHF it broke. There is no ISP redundancy; even the "Support connectivity enhancement for gateways with multiple external interfaces" option is not set, and it worked fine until JHF 191. I'll insist on having a maintenance window so I can dig into the packet flow in JHF 191 and later.

Duane_Toler
Advisor

I got my VPN client updated to E84.50 now, but with JHF 155 there's still no change. I've adjusted every combination of options in GUIDBedit as well, but no luck. Despite my Windows account being in the local Administrators group, I still had to log in as the local Administrator user to uninstall the client; never had to do that before. It's done now, though. I'm installing JHF 180 to see if that fixes anything.

Kryten
Collaborator

Sorry to unearth this, but I've run into the exact same problem and wonder if there is a solution by now.

We're using R80.40 JHF Take 139 and get the same results: Link Selection settings are ignored by Remote Access, and the gateway still sends the responses via the main IP.
Using ISP redundancy is not really an option here, as we have a lot of VPNs (S2S, RA, MA) and we want to move them piece by piece to the newer second external connection.

