Hi,
I have a 1 Gb interface (eth5) that I would like to migrate to a VLAN interface on an existing bond of two 10 Gb interfaces (bond101.1).
I would like to know exactly which steps to take. This is a ClusterXL cluster, so it needs to be done on both members.
I have seen some posts about this, but they differ from one another and are old, so I would like to know the best way to do this today.
We are running R81.10; management is R81.20.
Thanks.
Is this a coincidence or extension of this discussion?
Pure coincidence 😂
Thanks!
I would definitely follow the process Bob Zimmerman posted in the link Chris referenced; it works 100%.
Andy
For reference, here's the direct link:
The short explanation is that ClusterXL supports backing a cluster interface with a different logical interface on each member (e.g., member 1 can back the cluster VIP with eth5 while member 2 backs it with bond101.1). This isn't a common configuration, so I wouldn't leave it that way for more than a few hours.
Longer works fine; people just don't know what they're looking at when troubleshooting, and that confusion extends outages.
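For anyone following along, the per-member interface change looks roughly like this in Gaia clish. This is a sketch from memory, not the exact procedure from the linked post: the VLAN ID, addresses, and mask are assumptions you must replace with your own values, and it should only be run on the member you are currently working on.

```shell
# Gaia clish sketch (assumed values: bond101, VLAN 1, member address
# 10.1.1.2/24). Run only on the member you are migrating.

# Create the VLAN interface on the existing bond.
add interface bond101 vlan 1

# Give it the address the old eth5 interface held on this member.
set interface bond101.1 ipv4-address 10.1.1.2 mask-length 24
set interface bond101.1 state on

# Persist the change.
save config
```

After both members are migrated, the cluster topology in SmartConsole still needs to be updated to reference bond101.1 and the policy reinstalled.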
One part that is missing for me is the DHCP relay configuration.
It should probably be configured between steps 2 and 4.
Yeah, step 3 should really be "bring all the config over from the old interface to the new interface". DHCP relay, proxy ARP, interface-local routes (used for off-net VIPs, like how VSX works), and so on.
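One way to make sure nothing tied to the old interface is forgotten is to dump the saved clish configuration and filter for it. This is a minimal expert-mode sketch; `eth5` and `bond101.1` match this thread, but the grep may miss settings (like proxy ARP or cluster topology) that live in SmartConsole rather than in Gaia, so treat the output as a checklist, not the whole job.

```shell
# Expert-mode sketch: list every saved clish setting that references the
# old interface (DHCP relay, static/interface-local routes, etc.).
clish -c "show configuration" | grep -w eth5

# Review the output, then re-enter each relevant line in clish with
# eth5 replaced by bond101.1 before removing it from eth5.
```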
Thanks
One last question: it seems to me there is no downtime when following your method, am I correct?
That's what I gather as well, though I've never personally tried it; maybe @Bob_Zimmerman can say for sure.
There shouldn't be any downtime, but there may be PNOTEs and failovers. After all, you're changing the logical interfaces being monitored. This is why the process includes pinning the member down administratively until you're done with it and ready to fail over.
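The administrative pinning mentioned above is done from expert mode. A minimal sketch (the `-p` flag makes the state persist across reboots; verify behavior on your version before relying on it):

```shell
# Expert-mode sketch: pin this member down before touching its interfaces.
clusterXL_admin down -p

# Confirm the member now reports DOWN and its peer took over.
cphaprob state

# ... perform the interface migration on this member ...

# Release the member when you are ready to fail back over.
clusterXL_admin up -p
```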