
VSX: Moving VLAN to another VS within the same VSX-Cluster

Hi community,

In my daily business I have been facing a problem for years now, and I would like to hear whether you have a better solution to it.

I am running several VSX environments with a bunch of virtual systems.
Regularly I have the need to move a VLAN from one virtual system to another within the same VSX cluster.
The VLAN is not connected to a virtual switch, as it would be too expensive to connect all VLANs to a separate virtual switch.
All VLANs are behind the same bond interface, except the external interface.

Example setup:

[Image: example setup]
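For context, the current VS-to-interface layout of a setup like the one above can be inspected on a VSX member from expert mode. A rough sketch (the VS ID is an example; verify the exact commands against your version's CLI reference):

```shell
# List all virtual systems on this VSX member,
# including their names and attached interfaces
fw vsx stat -v

# Switch the shell context to a specific VS (here VS ID 2)
# and show its interfaces, VLANs and addresses
vsenv 2
ip addr show
```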

When moving a VLAN from one virtual system to another within the same VSX cluster I am facing the following problem:
You are not allowed to add the same VLAN to multiple virtual systems using the same bond interface.

In consequence, I know two possibilities to overcome this problem, but neither makes me happy:

1. Deleting the VLAN on VS2, installing policy on VS2, adding the VLAN on VS3, installing policy on VS3.
As VS2 and VS3 are running in the same SmartCenter/Domain, this means a downtime of at least 10 minutes.

2. Adding a new physical link to the same switch, configuring the new VLAN on VS3 with a duplicate IP address using the new physical link, and moving the VLAN on the switch side from the old physical link to the new one.
In this scenario the downtime is acceptable, but you always need two links to the same switch, and you lose flexibility because you need support from the switch guys.
Moreover, in some environments I do not have free interfaces on the firewall side, so I have no possibility to add a second link to the same switch.

Any ideas how to overcome this problem?
The coolest thing would be a nice and smooth solution provided by Check Point.
I started asking Check Point years ago, but haven't gotten a solution yet.

Looking forward to reading your ideas.


13 Replies

@Sven_Glock I think you have answered your own question here. Option one is the way to go. It causes downtime on the particular VLAN, which is, however, an expected event when moving a physical interface from one GW to another.


Having a short downtime is an acceptable thing, but 10+ minutes is out of the acceptable range.

As accelerated policy installation is still far away for my environment, do you have any ideas how to speed up option one?

I am struggling with the policy installation on VS3.
Policy installation is necessary when adding a new interface with a new IP network, because of anti-spoofing objects, etc.

But how about this scenario:

  • I add the new network with a dummy VLAN to VS3. --> Apply --> Install policy.
  • Now spoofing, routing, etc. are fine.
  • Next I delete the VLAN on VS2. --> Apply --> Install policy.
  • Then I just change the dummy VLAN to the final VLAN ID and Apply. Will the new network be up and running without a policy installation?

    It's just guessing, but is there a need to install policy when just changing a VLAN ID?
    If not, this would reduce downtime by 30%. It's still a lot, but less bad.

    More ideas welcome 😎

Is the VLAN you are moving the highest or lowest VLAN ID on the interface for the VS which is currently handling it?

Will it be the highest or lowest VLAN ID on the interface on the new VS you're moving it to?

If the answer to both questions is no, you may be able to get away with removing it from the old VS, provisioning (but not pushing policy), adding it to the new VS, provisioning, pushing policy to the new VS, then pushing policy to the old VS. The outage would last from when you provision the old VS to when you finish pushing policy to the new VS.

If the answer to either question is yes, this may not be safe. By default, ClusterXL monitors the highest and lowest VLAN IDs on each interface. If it's the highest or lowest on the old VS, provisioning the removal there could cause spontaneous failover. If it's the highest or lowest on the new VS, you should be fine, but may see failovers when policy is first installed. Depending on how you do sync, the failovers could trigger active contention which could result in neither firewall taking over the cluster (direct-wired sync is particularly bad for this).
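Before touching anything, it may help to check which interfaces and VLANs ClusterXL is actually monitoring for the VS in question. A sketch of the usual diagnostic commands (the VS ID is an example; verify the output format on your version):

```shell
# Switch to the VS currently holding the VLAN (VS ID 2 is an example)
vsenv 2

# Show overall cluster member state for this VS
cphaprob state

# Show the interfaces ClusterXL monitors for this VS; by default only
# the highest and lowest VLAN ID per physical interface are monitored,
# so check whether the VLAN you plan to move is one of them
cphaprob -a if
```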


I would test this a lot. Like a LOT a lot. ClusterXL should only care about VIP uniqueness within a VS, but I may be misremembering.


I am always monitoring all VLANs - even if it costs some resources - never trust a switch guy 😊
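For readers wondering how "monitoring all VLANs" is switched on: if I recall correctly it is controlled by the fwha_monitor_all_vlan kernel parameter, but please treat the parameter name and file path below as assumptions to verify against the relevant Check Point SK for your version. A sketch:

```shell
# Check the current value of the VLAN-monitoring kernel parameter
# (0 = monitor only highest/lowest VLAN ID, 1 = monitor all VLANs)
fw ctl get int fwha_monitor_all_vlan

# Enable it on the fly (not persistent across reboots)
fw ctl set int fwha_monitor_all_vlan 1

# Make it persistent by adding it to fwkern.conf
# (standard location on Gaia; verify for your setup)
echo 'fwha_monitor_all_vlan=1' >> $FWDIR/boot/modules/fwkern.conf
```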

I will test it in my lab after some vacation and will keep you updated about the outcome.

Okay, then your only option is to provision the removal, push the removal, provision the new interface, push the new interface. There’s not a faster way.


We experienced instability/flapping when only removing and provisioning on the source VS, so be careful with that approach.


Thanks for your advice! 👍


I'm not facing this problem, but I think you can create a dedicated TMP VLAN for migration and always ask the switching guys to map the migrated VLAN to this TMP VLAN.

After that migration should be faster:

  1. Create the TMP VLAN (with all configuration) on the target VS.
  2. Remove the migrated VLAN from the previous VS.
  3. Change the TMP VLAN ID to the migrated one on the target VS.



"You are not allowed to add the same VLAN to multiple virtual systems using the same bond interface."


Hi, I know about this limitation, but it seems it is not mentioned in the VSX Admin Guide. Am I correct?


Thank you


If you want to have the same VLAN on the same bond in multiple VSs, you use a virtual switch within VSX and connect both VSs to the virtual switch.

Thank you very much Magnus; one last question: is the only alternative to create the same VLAN on a different physical interface/bond, is that right?


Virtual switch, virtual router, or different physical interfaces/bond.

A virtual switch would be the most common way to solve it, if it's the same L2 environment and actually the same VLAN.

I do not understand why you need an alternative to a virtual switch. This is a widely used and stable solution for your requirement.
