Hi CheckMates,
The carrier delivers a single public_IP/26 and will send the whole /26 to one next-hop; they do not want to create a second sub-interface / VLAN.

| | Option 1 (two VLANs) | Option 2 (one VLAN) |
|---|---|---|
| CSR↔VSX hand-off | VLAN 100 → VR (routes the /26 to VS1–VSn); VLAN 101 → VS0 (/32 for admin VPN) | VR owns the whole /26, including the /32 for admin VPN |
| Admin remote-access | Terminates on VS0 over VLAN 101 | Has to terminate on VS0, but through the VR over the same VLAN |
| Status | Works fine | Warp link created between VR and VS0, but traffic never reaches VS0; cannot pick VS0 as next-hop for the /32 route |
The two diagrams are attached for clarity.
Adding a static route for <VS0-public>/32 → VS0 inside the VR fails: Clish (set static-route) complains it is an "invalid next hop". Has anyone solved a similar "single /26 – need admin VPN on VS0" constraint before?
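For reference, the failing attempt looks roughly like this in Gaia Clish; the VS ID, the warp interface name, and 203.0.113.10 (standing in for the real <VS0-public> address) are all placeholders:

```shell
# Inside the Virtual Router's context on the VSX member
set virtual-system 2
# Try to point the admin-VPN /32 at the warp link toward VS0
set static-route 203.0.113.10/32 nexthop gateway logical wrp_to_vs0 on
# Clish rejects this with "invalid next hop" -- the warp toward VS0
# is not accepted as a next-hop in legacy VSX
```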
Appreciate any pointers, even if the answer is “you really do need the second VLAN”.
Thanks!
Not sure this is possible in Legacy VSX.
VSnext (available in R82+) might support this since the routing is configured directly in Gaia OS versus in SmartConsole.
Thanks, PhoneBoy.
That matches what I’m seeing in the docs: in R81.20 “legacy” VSX, only VS0’s interface can talk to the Management Server; management traffic can’t be hair-pinned through a member VS or a VR. The guide is explicit: “Only VS0 can communicate directly with the Management Server,” and non-DMI designs are deprecated.
Because of that restriction, a single /26 delivered to a VR can’t also terminate the Remote-Access VPN on VS0 without some extra L2/L3 breakout (second VLAN, loopback, etc.).
I’ll lab the same design on an R82 “VSnext” image where routing is native in Gaia and report back.
There is no need for network connectivity between VS0 and the member VSs in order to manage them in legacy VSX; all network comms between the VSs and the management servers go to/from the VS0 IP address, which seems to be on Eth0 in your diagram. With that said, what is the problem you are trying to solve here?
The issue isn’t the VS → SMS traffic (that always egresses via VS0, no problem).
The obstacle is the administrator’s path into VS0:
With only one interconnect (VR ↔ CSR) VS0 never sees the VPN’s public /32, so IKE negotiations fail.
If I add a second VLAN dedicated to VS0 the VPN works, but the customer wants to avoid creating that extra segment. I haven’t found a legacy-VSX workaround (no loopback /32, proxy-ARP, or similar trick), hence the question.
NOTE: The following is an incredibly bad idea and should never be done in a real environment!
It's technically possible using dynamic routing. You have VS0 distribute a /32 route for itself, and you have the router instance pick it up. That said, in most topologies where I have seen this requested, it would be an existential threat to the environment. If any change to the router goes sideways, you could lose access to the box, and might not be able to restore it.
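As a sketch only (and again, the poster calls this an incredibly bad idea): VS0 could redistribute its own interface /32 into a dynamic routing protocol, here OSPF, and the VR could learn it over the warp link. The VS IDs and warp interface names below are placeholders:

```shell
# On VS0: export its own interface route (/32) into OSPF toward the VR
set virtual-system 0
set ospf interface wrp1 area backbone on
set route-redistribution to ospf2 from interface wrp1 on

# In the Virtual Router's context: run OSPF on the matching warp so the
# /32 is learned dynamically instead of via the rejected static next-hop
set virtual-system 2
set ospf interface wrp2 area backbone on
save config
```

If any change to the router side breaks this adjacency, the /32 disappears and so does your access, which is exactly the failure mode warned about above.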
You could similarly connect both the router and VS0 to a switch context, and give them the public addresses on the warps. This does turn the public block into a broadcast domain, so you lose the use of some addresses.
Hi Bob,
Thanks for laying out the dynamic-routing/vSwitch options. They’re clever, but I agree the blast-radius is too high for production—losing the /32 route would strand us outside the box.
Fortunately the customer can live with two sub-interfaces on the carrier side, so we’ll stick to the best-practice design with a dedicated DMI VLAN for VS0 and a separate data VLAN for the VR/tenants. Your warning helps me justify that choice when we do the final review.
Appreciate the guidance
Why VPN directly into VS0? They can VPN into some other VS and then route to the management VLAN from there, can't they? You can even add another link to the management VLAN off a VS that isn't VS0, just put it on a different physical interface from VS0's DMI. It's a pretty common setup.
Hi,
From your information, it sounds to me like option 1 is DMI (Dedicated Management Interface) and option 2 is non-DMI.
Non-DMI is not supported anymore.
In Legacy VSX, VSs do not have a management interface; policy installs are done via VS0.
The same goes for SSH access: connect to VS0 and switch to the context of the VS.
Clish: set virtual-system <ID>
Expert: vsenv <ID>
Martijn
Exactly.
The R81.20 guide confirms that non-DMI is “deprecated and not supported” and that management traffic must hit VS0 directly.
I would personally go back to the carrier and say that you want a second internet / IPVPN on a dedicated port / VLAN for VS0 if the goal is to expose it to the internet.
If the carrier doesn't permit it, I would add our own routers in front of the carrier.
They would give you the flexibility needed and allow you to build a more standard VSX design.
I try to avoid VRs and build with link networks or a VSwitch.
Don't overcomplicate things; it tends to bite you later.
Regards,
Magnus
Hi Magnus Holmberg,
Thanks a lot for weighing in.
I follow your NetSec videos on YouTube; many of the walkthroughs there have already saved me hours, so I appreciate the extra guidance here.
My goal was to get an official-ish sanity check that the two-VLAN (data + DMI for VS0) layout is the right and supported path before I push the customer to accept it. Your reply gives me exactly that validation, so I'll go back to the carrier and insist on the extra segment, or insert our own edge routers if they still refuse.
Much appreciated
It would also make a future move to VSnext much easier, helping future-proof the design.
/Magnus
I do something similar for one of my customers today. Consider this alternative instead:
* For MDS, make a specific management domain for VS0 ONLY; this only holds the VSX cluster and Virtual Switches; for the VSX cluster, you can attach your physical/logical interfaces (eth1, bond0, etc.) as VLAN-capable interfaces
* Instead of a VR, use a Virtual Switch (in VS0)
* For each VS (in their own target domains), assign their external interfaces to the virtual switch (wrp256, wrp384, etc.). Configure these interfaces with IPs from the /26 subnet; yes, this puts each virtual system on the same /26, but you seem to already have that in effect.
* Create a separate virtual system for the Remote Access VPN clients (as Emma said, do not use VS0 for this); use this VS like you would any other. Or, you can use whatever is your first VS instead; whatever you want, just not VS0.
* For management of VS0, that can still be a separate physical interface (DMI) you attach to your LAN switches wherever you need it. If you need externally-reachable access to VS0, on VLAN 10, then you can still do that, but only for VS0
* If you need BGP, configure that on the virtual systems on each cluster gateway (set virtual-system 5; set as 65515; ...); you already know VR can't do BGP.
* You already noticed, but you can't configure static routes in CLISH on a VS (even VS0); however, you can run "set vsx off", configure the route, then run "set vsx on" again.
* Please don't use PBR. 🙂 (PhoneBoy schooled me on this years ago when I thought I was being "clever")
[Edit: as others have noted, all of your MDS and domain connections only traverse VS0; they don't connect to each VS]
Hopefully this, or some element of it, can help!
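Two of the CLI items above, sketched in Gaia Clish; the VS ID, ASNs, peer address, and route below are placeholders:

```shell
# Per-VS BGP, configured inside the target VS's context
set virtual-system 5
set as 65515
set bgp external remote-as 65000 on
set bgp external remote-as 65000 peer 203.0.113.1 on

# Static route on a VS: temporarily leave VSX mode, add the route, re-enable
set vsx off
set static-route 198.51.100.0/24 nexthop gateway address 203.0.113.1 on
set vsx on
save config
```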
Thank you, Duane_Toler.
The main issue in my case is that each VS is assigned to a different customer, and each customer has its own Domain that contains only that VS.
The requirement is that each remote customer must be able to manage only their own VS remotely.
My difficulty is the following: how can I do that without terminating the admin VPN on VS0, since VS0 is the only VS that has access to the management network (where the Domains / MDS are reachable)?
The VPN-VS you suggested does not have access to the management network, so even if the remote client connects successfully to that VPN-VS, they still cannot reach their Domain Management Server / SmartConsole target.
So the real question is: in this design, how do you provide remote customer access to their own Domain for management purposes, while keeping VS0 as the only path to the management network? Additionally, the MDS uses private IP addresses, which further complicates direct remote access.
Thanks again for your help.
You don't give customers access to VS0.
What would be the reason to give CLI access to a VS? That sounds like a recipe for disaster that could affect other customers.
You give the customer access to their CMA within the MDS, and they manage the VS from the CMA via VS0.
VSNext changes some of this, but legacy works as above.
The CMA you can NAT behind a public IP as seen from the customer, and you can use the same NAT for all customers.
Just NAT it within the VS that belongs to the customer and change the destination IP.
Regards,
Magnus
Why would your VPN VS not have access to the management VLAN? Why the requirement to only allow access to that via VS0?
As others here have already noted, you don't give access to VS0 in "Legacy VSX". If they have access to the VSX cluster/gateways, they can manage anyone else's VSX instances, too. You can enable per-VS SNMP so they can do some monitoring of their own VS. They can still have SmartConsole access to their own management domain (this is the intent of MDS), but you will want to configure specific administrator profiles to limit certain administrators from being able to manage VSX items; not every one of their people should be able to edit and accidentally break their VS by clicking random things in SmartConsole.
You can NAT their MDS domain with static NAT at whichever VS handles your default route traffic. You'll configure the NAT in that VS yourself. This NAT doesn't matter for your customer's SmartConsole connection. Just get the traffic to the MDS server; it doesn't have to pass through "their" VS.
If you require them to connect with Endpoint VPN instead, you can use access roles tied to their user account (AD, local/internal, SAML for MFA, whatever you use) and configure the access role with a rule to allow that to access the MDS/CMA internal IP.
For the VPN VS, this assumes all of your VPN client users, for all purposes, are using the same VS since Office Mode is configured on the gateway itself. In which case, your internal LAN L3 switches will route traffic back to the VPN VS, unless you do "routing tricks" and advertise it from the VS [I do this for some customers; it works via route-map]. From that VS, your MDS should be reachable via VPN clients; it's just internal routing to get the return packets back to the VPN VS for the Office Mode network.
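The "routing tricks" mentioned above (advertising the Office Mode pool from the VPN VS via a route-map) might look roughly like this in Gaia Clish; the VS ID, routemap name, and the 172.16.10.0/24 pool are placeholders:

```shell
# On the VPN VS: export the Office Mode pool into OSPF via a routemap
set virtual-system 3
set routemap OFFICE-MODE id 10 on
set routemap OFFICE-MODE id 10 match network 172.16.10.0/24 exact
set routemap OFFICE-MODE id 10 allow
set ospf export-routemap OFFICE-MODE preference 1 on
save config
```

With this in place, the internal L3 switches learn the Office Mode network dynamically and return traffic finds its way back to the VPN VS without static routes on every device.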
Your last comment said the MDS is on the same LAN segment as the VS0 management interface. Does your MDS have its default gateway set to the VSX cluster VIP on that DMI network? If so, this may be your primary issue. One of your diagrams shows VS0 having a direct IP on VLAN 10 from the ISP's CSR. You won't really need that, because the VS0 cluster VIP should be on its own LAN segment; the MDS is not required to be on this LAN segment too, but it's OK if it is.
For this LAN segment, put an L3 SVI on the switches and add it to your internal LAN routing VRF, then make that SVI the default gateway for the MDS. You will also make it the gateway for the VSX gateways (VS0) so VS0 has outbound access, but not necessarily inbound. VS0 doesn't need a publicly reachable interface, either. If your customer is pushing for this, they are misunderstanding how VSX works (it really is unique).
With these edits, you can achieve exactly what you want: customers connect from the outside to their CMA (with or without VPN, or both, your choice); VS0 remains privately internal; the MDS sits on VS0's LAN segment; a Virtual Switch carries the /26 and each VS gets its own IP from that virtual L2 segment; and no mainline traffic passes through VS0 (as Check Point states). Each VS can handle its own routing requirements. Via the Endpoint VPN client, VSX administrators can SSH into the VS0 gateways to do work (because the VSX gateways' default route will be the new L3 SVI in that VRF, which can route Office Mode back to the VPN VS).