Anyone had experience with the 23.5k and 23.8k appliances? If yes, in what configuration?
23800, R80.10 VSX, < 10 VSs, static routing, peaking at 10 Gbps, bonded to the core. Nothing too fancy blade-wise: FW, IPS, IA. So far so good, nothing to complain about. Just ordered another pair to replace a 13800.
Is it single 23800 VSX or cluster?
Kaspar, I assume your configuration is single 238k model?
Sorry, Easter break from work... No, it's a cluster. :)
We have 2 VRRP clusters on 23800 appliances as datacenter core firewalls with a lot of VLANs (150+) on 10 Gb bonds, and actually have issues with VRRP failover (routed seems to hang, case open) as well as some false positives on the power supply monitoring via SNMP. Other than that, those appliances are very fast, but cost a fortune :-)
Why VRRP, not CCP?
I've deployed a number of those with R77.30 VSX in ClusterXL, with a number of VSs on each cluster in a variety of configurations.
Generally, all is well. The only thing of note was incorrect memory reporting by the show asset all command.
We have many 23500/23800 deployments, in several different scenarios (with and without Blades, clusters -usually ClusterXL-, regular Gateways and VSX...)
Are you looking for anything specific? Hardware related issues maybe?
Needed to know memory/CPU concerns/considerations and performance before a purchase commitment. I settled on 2x 23.5k(s). Now the configuration fun: since VS licensing comes as VSLS, does it mean that to fully utilize the functionality, the VSX gateways have to be configured in Load Sharing mode, not HA?
I would look at it differently: when and if you'll get to the third appliance, you'll have an option of taking advantage of VSLS.
With 2 appliances, HA is the better option (personal opinion). Otherwise, you'll have to keep track of utilization in order to avoid overloading the systems.
Valid point; however, theoretically, if a gateway has enough power to run 10 VSs in VSLS mode, it should be fine, since only half of the active VSs will be running on one gateway at a given time (5 active and 5 standby on each). What would be the breaking point to consider a 3rd VSX gateway?
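The arithmetic behind that breaking point can be sketched quickly. This is a back-of-the-envelope illustration only; every number in it (VS count aside, which comes from the question above) is an assumption you'd replace with your own measurements:

```shell
#!/bin/sh
# Illustrative failover math: with N members, losing one means the survivors
# must absorb every VS. All figures below are assumed, not measured.
vs_count=10            # VSs on the cluster (from the question above)
avg_vs_ram_gb=4        # assumed average per-VS memory footprint
box_ram_gb=64          # assumed RAM per appliance
members=2
# After one member fails, the remaining members carry everything:
per_survivor_gb=$(( vs_count * avg_vs_ram_gb / (members - 1) ))
echo "post-failover load per survivor: ${per_survivor_gb} GB of ${box_ram_gb} GB"
```

With these assumed numbers a lone survivor sits at 40 of 64 GB, well past the halfway mark, which is the kind of result that argues for a third member.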
When VSs were 32-bit, it was easier to answer this question, since we knew the maximum allocated vRAM.
With 64-bit VSX, memory consumption is dynamic, so you'll have to monitor that the total active (non-cached) memory consumption of each member of the VSX cluster stays below 50%. Otherwise, when one of the appliances is down, you may end up with underperforming VSs.
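A minimal sketch of that monitoring, reading /proc/meminfo from expert mode on each member. This is my own snippet, not a Check Point tool; MemAvailable needs a reasonably modern kernel, so on older Gaia kernels you'd approximate it as MemFree + Buffers + Cached instead:

```shell
#!/bin/sh
# Sketch: estimate non-cached memory usage from /proc/meminfo and warn above
# the 50% threshold discussed above. Run on each VSX member in expert mode.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
used_pct=$(( (total_kb - avail_kb) * 100 / total_kb ))
echo "non-cached memory usage: ${used_pct}%"
if [ "$used_pct" -ge 50 ]; then
    echo "WARNING: above 50% - a failover may leave VSs underperforming"
fi
```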
Good info, thank you. Anything else to watch for or be prepared for before I disappear into the VSX forest?
Just that you'll lose the WebUI when you designate the units as VSX. Not the end of the world, but for consistency's sake, pre-configure both units the same way and check the diff between the configs. This is primarily applicable to routing, if you are planning to use any advanced features. And, of course, NTP-sync the units before clustering them. If your Check Point infrastructure is not huge, consider configuring static DNS entries for all of its components on each unit, as well as verifying access to the Internet for licensing and CPUSE.
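The "pre-configure both units the same way and diff the configs" step can be sketched like this. The two saved files here are made-up fragments standing in for real output; on the actual boxes you'd generate them with clish -c "show configuration" on each member:

```shell
#!/bin/sh
# Sketch of diffing two members' saved Gaia configs before clustering.
# The files below are hypothetical stand-ins for:
#   clish -c "show configuration" > memberN.cfg   (run on each member)
cat > member1.cfg <<'EOF'
set ntp active on
set ntp server primary 10.1.1.10 version 4
set dns primary 10.1.1.53
EOF
cat > member2.cfg <<'EOF'
set ntp active on
set ntp server primary 10.1.1.10 version 4
set dns primary 10.1.1.54
EOF
# Every line printed here is drift to resolve before building the cluster:
diff member1.cfg member2.cfg || true
```

In this made-up example the only drift is the DNS entry; intentional differences (hostnames, member IPs) will of course always show up, so the point is to confirm that nothing *else* does.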
Best of luck!
Cool stuff, the info I was looking for, thank you. One more Q: I am planning to have 2 VSX gateways in LS mode with VSs in HA mode on them. Theoretically it should work? Crazy, stupid, or potential suicide?
You mean VSX in LS and VSs in HA? I'd suggest asking Kaspars Zibarts, as he works with VSX more than I do, but if my recollection is correct, VSLS enables 3 instances of the same VS: active, standby and backup. Not sure what that does to resource consumption, but I think it'll be on par with HA, with the backup instance being normally suspended.
The only advantages to this approach are the ability to rapidly expand the cluster without changing its mode, and more even stress on the hardware.
From the point of view of redundancy and failover time, I do not believe you'll gain anything.
Try to find the ATRG articles on VSX; those may come in handy.
Vladimir, you are correct regarding the configuration. Do you know where I can find information on why some VSX CLI commands are blocked? There is no explanation; they are simply blocked. The particular command I need is the one to add a bridge.
Could it be that you have chosen one of the preset VSX models instead of "Custom Properties"?
And are those interfaces defined as "Physical Interfaces"?
Yes and yes: only a custom template, and I can see the physical interfaces, but no love from the bridging side.
I know about this one; it creates a VS in bridge mode. I am looking to create a bridge, an option available via the Gaia UI before VSX disables it.
If I understand what you are trying to accomplish correctly, I suspect you will not succeed in it.
You are attempting to create a bridge and present it to the VSX as a fait accompli to VS0.
As far as I understand it, you can create bundles, trunks and subinterfaces and define routing parameters, but you cannot create a bypass bridge.
How would you manage security on such a bridge?
What is the reason for creating it outside of the context of VS residing on the cluster?
Bridging 2 physical interfaces with the goal of connecting each interface to an upstream border router (different ISPs). BTW, creating the bridge in the Gaia UI before creating VSX keeps the configuration in the CLI and allowed me to create a VS and attach it to the bridge interface created before VSX took over. Setting up physical connections to see how it behaves.
Can you post a screenshot of the VSX topology with the bridge present, the VS with the bridge attached to it, as well as a sanitized Gaia config?
If you have more than one VS configured, can you check if this pre-defined interface is present in the context of each VS?
I am curious to see how it looks.
When you are saying that the WebUI-configured bridge is present in the CLI, are you implying that the one configured in SmartConsole is not?
Have you published and installed the policy and then confirmed its absence from config?
I will; trying to ping through the 2 bridged interfaces. This is from the CLI; none of these commands are available to add via CLI after the Gaia UI is disabled:
Pinging through the bridge interfaces eth1/eth2 does not work; looks like a VS is needed to control the bridging:
add bridging group 23
add bridging group 23 interface eth2
add bridging group 23 interface eth3
set interface br23 comments "br23"
set interface br23 state on
set interface eth2 link-speed 1000M/full
set interface eth2 state on
set interface eth2 auto-negotiation on
set interface eth2 mtu 1500
set interface eth3 link-speed 1000M/full
set interface eth3 state on
set interface eth3 auto-negotiation on
set interface eth3 mtu 1500
Have not made it work yet. I wonder why a vBridge option does not exist in VSX, only a VS in bridge mode. With a VS in bridge mode I managed to create an STP loop and had to disable the interfaces. Did not have time to work on this further.
There is a pretty good write-up on the subject in "Check Point VSX Administration Guide R80.10" on page 44.
Highly recommend giving it a once-over.