Danny
Champion

New! R80.30 feature: Management Data Plane Separation (for gateways with 4+ cores)


I really like the all-new R80.30 feature for separating management from data traffic via

  • Routing Separation and
  • Resource Separation

as described in sk138672.

 

Has anyone tested this already?

98 Replies
_Val_
Admin

@Luis_Miguel_Mig Which version are you running? Also, which kernel?

Luis_Miguel_Mig
Advisor

R80.40 Take 91, kernel 3.10

_Val_
Admin

Do you have MDPS enabled?

Luis_Miguel_Mig
Advisor

Routing separation, but not resource separation:

show mdps state

Management Data plane separation:

Routing plane: Enabled
Dedicated resource: Disabled
Management interface: eth3
Sync interface: bond0

Aviad_Hadarian
Employee

There is no CPU allocation when resource separation is disabled.

Luis_Miguel_Mig
Advisor

That is fine, but I was kind of expecting to see where it runs.
Looking at sk138672, I can't tell where it runs: a random core, one of the SND cores, one of the worker cores, more than one core...

Aviad_Hadarian
Employee

Since no firewall/SND cores are allocated to the resource, it can run on any core and it's not tagged.

Luis_Miguel_Mig
Advisor

Understood. Thanks Aviad

Aviad_Hadarian
Employee

@Wajdi Hi, in case the CPUs are logical (Hyper-Threading), the resource will be allocated in multiples of 2 (1 physical core = 2 logical cores). For more info, check sk138672.
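If it helps, the rounding rule as I read it can be sketched as a tiny shell function (my paraphrase of the behavior described in sk138672, not official code):

```shell
# Sketch of the allocation rule described above (my reading, not an
# official formula): with SMT/Hyper-Threading enabled, a requested CPU
# count is rounded up to whole physical cores, i.e. a multiple of 2.
mdps_allocated_cpus() {
    local requested=$1 smt=$2    # smt: 1 = Hyper-Threading on, 0 = off
    if [ "$smt" -eq 1 ]; then
        echo $(( (requested + 1) / 2 * 2 ))
    else
        echo "$requested"
    fi
}

mdps_allocated_cpus 3 1    # with SMT on, 3 logical CPUs round up to 4
```

So with SMT enabled, asking for 1 or 2 CPUs costs one physical core (2 logical), and asking for 3 costs two physical cores.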

Wajdi
Explorer

I didn't activate SMT (Hyper-Threading). My appliance has 4 CPUs, and MDPS is taking 2 physical CPUs (not virtual).
MDPS taking half of the appliance's CPUs is my issue.
I can't find my answer in sk138672.

Aviad_Hadarian
Employee

@Wajdi 

Check the SK for:

  • Controlling the amount of CPUs for Resource Separation
Wajdi
Explorer

I already did that: "set mdps resource cpus 1" (and I rebooted the device). That didn't release one CPU as I wanted (I still have two physical CPUs allocated to MDPS).

Luis_Miguel_Mig
Advisor

R80.40 Take 94:

PRJ-20515, PRHF-14630: In some scenarios, when using routing separation, the connection to the Management Plane via the Data Plane is dropped.

What does this mean?

Aviad_Hadarian
Employee

Hi @Luis_Miguel_Mig, it means that a packet comes in on data-plane NIC1, goes out via NIC2 to a switch/router, and then comes back from the switch/router to NIC3 on the management plane. Depending on the rulebase, packets might be dropped due to an anti-spoofing violation.

Luis_Miguel_Mig
Advisor

The following healthcheck script looks at files under /sys/class/net to check drops and errors:

https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...

However, when you have MDPS enabled, the only files present in that directory correspond to the interfaces configured in the mplane.

Where are the dplane interfaces?
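For reference, the per-interface counters such a script reads live under each interface's statistics directory in sysfs. A minimal sketch (standard Linux sysfs paths, with the root as a parameter so it can point at whichever plane's view you are in, and not the actual SK script):

```shell
#!/bin/bash
# Sketch: read per-interface drop/error counters from sysfs, the same
# kind of files the healthcheck script inspects. The sysfs root is a
# parameter so the function can be pointed at a different directory.
print_iface_counters() {
    local sysfs_net="${1:-/sys/class/net}"
    local dir iface counter
    for dir in "$sysfs_net"/*/statistics; do
        [ -d "$dir" ] || continue      # skip plain files like bonding_masters
        iface=$(basename "$(dirname "$dir")")
        for counter in rx_dropped tx_dropped rx_errors tx_errors; do
            [ -f "$dir/$counter" ] && echo "$iface $counter $(cat "$dir/$counter")"
        done
    done
}

print_iface_counters "$@"
```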





Aviad_Hadarian
Employee

Hi @Luis_Miguel_Mig, when you switch context, /sys/class/net changes to the relevant plane.

In sysfs, only that plane's network interfaces are displayed.

Luis_Miguel_Mig
Advisor

What do you mean by switching context?

[Expert@fw2:0]# cd /sys/class/net
[Expert@fw2:0]# ls -l
total 0
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 bond0 -> ../../devices/virtual/net/bond0
-rw-r--r-- 1 admin root 4096 Mar 26 16:28 bonding_masters
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth0 -> ../../devices/pci0000:00/0000:00:1c.4/0000:02:00.0/net/eth0
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth11 -> ../../devices/pci0000:00/0000:00:02.0/0000:05:00.3/net/eth11
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth3 -> ../../devices/pci0000:00/0000:00:1c.4/0000:02:00.3/net/eth3
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 lo -> ../../devices/virtual/net/lo
[Expert@fw2:0]# mplane
Context set to Management Plane
[Expert@fw2:1]# ls -l
total 0
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 bond0 -> ../../devices/virtual/net/bond0
-rw-r--r-- 1 admin root 4096 Mar 26 16:28 bonding_masters
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth0 -> ../../devices/pci0000:00/0000:00:1c.4/0000:02:00.0/net/eth0
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth11 -> ../../devices/pci0000:00/0000:00:02.0/0000:05:00.3/net/eth11
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 eth3 -> ../../devices/pci0000:00/0000:00:1c.4/0000:02:00.3/net/eth3
lrwxrwxrwx 1 admin root 0 Mar 19 14:13 lo -> ../../devices/virtual/net/lo

M_Ruszkowski
Collaborator

We have run into an issue using this new mplane feature. We run some custom bash scripts executed from the clish cron. When we switched to the mplane, it broke these scripts, because routing is now on the mplane and no longer part of the dplane. Basically, these scripts performed certain backups and other tasks and transferred files from the gateway to a server; those routes are now on the mplane. So we can't seem to find a way to get the custom bash scripts to use the proper routing.

Aviad_Hadarian
Employee

How about adding to your script:

. /etc/profile.d/mdpsenv.sh

and then using dplane / mplane to switch context?

M_Ruszkowski
Collaborator

I will give it a try... thank you!

M_Ruszkowski
Collaborator

At the top of my script, I added the section to source the Check Point script, and this worked fine.

#!/bin/bash

# Source Checkpoint Variables
source /etc/profile.d/mdpsenv.sh
mplane

 

-----------------------------------------------------------------------------

When I execute it, it now works:

[Expert@MYDEVFW03:0]# ./config-dump
Context set to Management Plane
Copying Files...
taring files...
MYDEVFW03-config.txt
fwkern.conf.txt
grub.conf.txt
local.arp.txt
[Expert@MYDEVFW03:0]#

 

You may want to add this info to the SK article for mplane / dplane separation. Since this is new, you will find other customers that may have custom scripts.

Thanks again for the resolution!
Aviad_Hadarian
Employee

You are welcome!

The above is not new, but I will add it.

M_Ruszkowski
Collaborator

Hmmm, not new... then I am confused by this thread. 🙂

"New! R80.30 feature: Management Data Plane Separation (for gateways with 4+ cores)"

All kidding aside, thank you for the help.

M_Ruszkowski
Collaborator

We have tested it, and basically this is still EA; it needs work. You are going to find that SNMP may not work with your enterprise tools once you implement this. If you are using SNMPv3, you will need to use the '-n plane' option to get SNMP data from the dplane. So if you want the routes, interfaces, and interface statistics, this is not going to work with some monitoring tools. For us, it did not work with any of the enterprise monitoring tools that we use. I do not think CP gave it a lot of thought when it was implemented; this should be seamless to our tools. I feel that CP's QA process was: "Look, I can get data back with snmpwalk from my Linux box... call it good." We are having to remove this from over 30 firewalls in our environment. We must have monitoring working!
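For anyone hitting the same wall, the '-n' context-name option mentioned above looks roughly like this in a query (the user, auth settings, and host here are placeholders, not values from any real gateway):

```shell
# Sketch: with MDPS enabled, an SNMPv3 query apparently needs the plane
# passed as the SNMPv3 context name ("-n"), otherwise you only get the
# plane you landed on. All credentials and the address are placeholders.
snmpwalk -v3 -l authNoPriv -u monuser -a SHA -A 'passphrase' \
         -n dplane 192.0.2.1 IF-MIB::ifDescr
```

Most enterprise monitoring tools have no way to set a per-device SNMPv3 context name, which is exactly why this breaks them.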

M_Ruszkowski
Collaborator

Just found another issue with mplane separation if you are using Data Center Objects / vCenter tags in the policy. Because the process that connects to vCenter to pull the objects uses part of IA and PDP, this doesn't work when you enable the mplane. The service "cprid" gets moved to the mplane and doesn't talk to the dplane, so you see all the objects in the policy but the gateway doesn't enforce it. Then you have to move "cprid" back to the dplane.

I really do not think Check Point has truly thought out the "dedicated management" separation. If we are having to move services back to the dplane, then you do not have a real solution. At this point we have some services on the dplane and others on the mplane; how can CP call this "management plane separation"? I do not think CP tested all their blades and features using this in their QA labs. This should go back to EA.

Next, we have had two kernel crashes on different gateways for anti-spoofing with mplane. After three months of finding issues, we feel like we keep peeling back an onion layer and discovering another problem. After this last issue, we are giving up on the "mplane" feature and will be turning it off on all our firewalls.

RossM
Explorer

Hi,

We have enabled this on a standalone deployment but can't get to SmartConsole through the management plane. Is there something else that needs allowing beyond the default ports and services to get this working? This is running on a 3800 appliance on R81.20.

Aviad_Hadarian
Employee

Standalone and management deployments are not supported with MDPS.

Luis_Miguel_Mig
Advisor

I tested this feature a long time ago, and I am almost certain I was able to route traffic through the data plane and terminate it in the mgmt plane, for example SSH to the firewall's mgmt IP.

However, when I try it now (running R80.40 Take 196), the SYN packet doesn't get to the mgmt plane and I get a TCP reset (from the data plane, I guess).

It seems that the behavior has changed, right?

Looking at https://support.checkpoint.com/results/sk/sk138672, it says "When routing separation is enabled, traffic from the management plane cannot be routed to or through the data plane".

Unfortunately it seems to be the expected behavior.



CarlosDias
Contributor

Does this work on Open Servers?

It accepts the commands, but it is still disabled even after a reboot.
Aviad_Hadarian
Employee

It is supported on any gateway platform; note the minimum requirements.

