Pearl

New! R80.30 feature: Management Data Plane Separation (for gateways with 4+ cores)


I really like the all new R80.30 feature for separating management from data traffic via

  • Routing Separation and
  • Resource Separation

as described in sk138672.
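For anyone who wants to try it, my understanding of the enable sequence from sk138672 is roughly the following. The exact `set mdps ...` syntax is from memory and may differ per version/take, so verify against the SK before running it:

```shell
# Gaia clish sketch (command names from my reading of sk138672 -- verify
# against the SK; a reboot is required for the changes to take effect):
set mdps mgmt plane on       # Routing Separation: dedicated routing domain for Mgmt
set mdps mgmt resource on    # Resource Separation: dedicated CPU core / FW instance
save config
reboot

# After the reboot, verify the state:
show mdps state
show mdps tasks
```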

 

Did anyone test this already?

25 Replies
Platinum

It's about time! Finally arrived.
Will test it soon and report back 🙂
Jerry

About time! This is a long overdue feature!

Platinum

"Use of logical interfaces is not suppoted on management interface (Alias, Bridge, VPN Tunnel, 6in4 Tunnel, PPPoE, Bond, VLAN)"

1. It is a pity. Showstopper for us.
2. There is a typo (suppoted -> supported).

Kind regards,
Jozko Mrkvicka

Very interesting information.

I will test it tomorrow in our lab :-)

Thank you!


With Resource Separation the cpu load should not rise when installing the policy. Is that correct?


 


Hi Dameon,

Do I need a license for the management instance, or do I lose a core license?

Regards

Heiko

Admin

I assume this dedicated CPU core is treated like any other core: you need a license for it. A minimum of 8 CPU cores is required to use this feature, which means your Open Server license must cover at least 8 cores. Beyond that, there are no special licensing requirements.
Pearl

So anything below 5900 will not be able to take advantage of it...

Admin

Sounds about right.
Pearl

Danny, you may want to change the heading by adding "for gateways with 8 or more cores".

Otherwise it leads to unwarranted euphoria 🙂

Pearl

Added.


I do have a concern about the best practices from the article:  https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solut...

 

"Connectivity to the LDAP and similar servers from the Gateway should be done via the Data Plane."

 

I've always been told that only the management/control plane of a security gateway should be making or allowing connections to the device. The data plane should not make or accept connections; it should only play the role of traffic cop.

What are everyone's thoughts on this?


FYI, this was changed to a minimum of 4 cores recently.

Pearl

Thanks for the info. I changed the thread title from 8 to 4 CPU cores.


I tested it and it worked successfully (resource & routing both enabled).

It's not possible to use the mgmt IP with the Identity Collector, and I think it should be possible.

Could you please add a service/task for this issue?

 

XXXXXXXXX:TACP-0> show mdps state

Management Data plane separation:

Routing plane: Enabled
Dedicated resource: Enabled (FW-Instance [39,38], CPU [4,28])
Management interface: Mgmt
Sync interface: Sync
Management plane configured routes:
default via X.X.X.X

XXXXXXXXX:TACP-0> show mdps tasks

Management plane tasks:
Service: cpri_d
Service: ntpd
Service: sshd
Service: syslog
Process: cloningd
Process: httpd2
Process: ntpd
Process: snmpd
Process: snmpmonitor
Port;Protocol: 256;tcp
Port;Protocol: 257;tcp
Port;Protocol: 2010;tcp
Port;Protocol: 5432;tcp
Port;Protocol: 18181;tcp
Port;Protocol: 18183;tcp
Port;Protocol: 18184;tcp
Port;Protocol: 18187;tcp
Port;Protocol: 18191;tcp
Port;Protocol: 18192;tcp
Port;Protocol: 18195;tcp
Port;Protocol: 18210;tcp
Port;Protocol: 18211;tcp
Port;Protocol: 18264;tcp
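Side note: since the `show mdps state` output has a fixed format, it's easy to check from a monitoring script. A minimal POSIX shell sketch (my own hypothetical helper, not a Check Point tool; it assumes the exact field names shown in the capture above):

```shell
#!/bin/sh
# Hypothetical helper (not a Check Point tool): given a saved capture of
# 'show mdps state', report whether both separation modes are enabled.
# Field names are assumed to match the output pasted above.
mdps_check() {
  capture="$1"
  if grep -q '^Routing plane: Enabled' "$capture" &&
     grep -q '^Dedicated resource: Enabled' "$capture"; then
    echo "MDPS fully enabled"
  else
    echo "MDPS not fully enabled"
  fi
}
```

On the gateway you could feed it with something like `clish -c 'show mdps state' > /tmp/mdps.txt && mdps_check /tmp/mdps.txt`.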


I tried mdps in the lab. I have two issues so far:

1. Can't make backup traffic go over the Mgmt interface; it attempts SSH connections on one of the data interfaces instead, even if I try to back up the gateway to a management appliance.

2. TACACS traffic on port 49 goes over the Mgmt interface during initial login to the gateway, which is expected, but then it attempts to go over a data interface for some reason during the second authentication, when I try to escalate my privileges from TACP-0 to TACP-15.

These two issues are showstoppers for us in deploying this feature.


We have exactly the same issue with R80.40 latest ongoing JHF:

1. The Identity Collector is planned to communicate with different firewalls in different zones, and the only common network is the mgmt

2. We also can't apply the JHF using the CPUSE GUI, only via CLI

3. Lastly, the default route occasionally disappears after a reboot

4. Only one Sync interface is supported

Any light at the end of the tunnel? I wish it would work more like a routing domain with more flexibility.

Employee

Hi Alex,

Can you share the following please? 

'dbget -rv mdps'

Also, why do you need more than one interface for Sync? You can use link aggregation (bond) instead.


Hi Aviad, 

We wanted to have another VLAN as a 2nd sync; we already have the dedicated sync as a bond.

Here are the outputs you requested. As you can see, we put in both "0.0.0.0" and "default", as the default route used to drop off occasionally.

It would be good to understand whether the Identity Collector can work with the management plane rather than the data plane; this is the biggest thing for us. The rest look like bugs which will get fixed eventually.

 [Expert@BNZ_WDC3-FW1-Access-011:0]# dbget -rv mdps
mdps:interface:management eth0
mdps:interface:sync bond0
mdps:route:0.0.0.0/0:nexthop:10.x.x.x t
mdps:route:default:nexthop:10.x.x.x t

 

Thanks

Employee

Please move clish to mplane environment:

'set mdps environment mplane'

Alternatively, you can move confd to the mplane permanently via 'add mdps task process confd' (requires a reboot)

Employee

Hi Ramazan,

Currently it's a limitation, which is documented in the SK; I would suggest consulting the solution center.

Copper

Hi everyone,

I have this in production on an R80.20 gateway with JHF 156. It works well so far, except that the customer's monitoring tool isn't able to discover data plane interfaces and counters, even when I follow sk138672 and use the "special community string" (..._dplane). Hope this will be gone with the next major upgrade. 😐

Does anyone here have other ideas and/or experience with MDPS and interface monitoring using SNMP? Could it help to disable resource separation?
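For comparison, this is the kind of query I mean (net-snmp `snmpwalk`; the host and base community below are placeholders, and the `_dplane` suffix is the one from sk138672 mentioned above):

```shell
# Walk interface names via the dplane-suffixed community (data plane view).
# <gw-mgmt-ip> and 'public' are placeholders -- substitute your own values.
snmpwalk -v2c -c public_dplane <gw-mgmt-ip> IF-MIB::ifDescr

# Same walk with the regular community (management plane view), to compare
# which interfaces each plane actually exposes:
snmpwalk -v2c -c public <gw-mgmt-ip> IF-MIB::ifDescr
```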

Thanks

Employee

Hi dj0Nz, I strongly recommend using R80.30.

Ivory

Hi,

Just a point of clarification, because it's not 100% clear in the SK: is this supported on R80.30 on both the 2.6 and 3.10 kernels, or only on 3.10? I read the minimum requirements as requiring JHF Take 136 when using kernel 3.10, but because there is no mention of 2.6, it's unclear whether the original R80.30 GA supports this or whether there is no support on 2.6 at all.

Thanks and Regards,

Ben

 

Employee

Hi Ben,

MDPS is supported for R80.30 (2.6 and 3.10).

For 3.10, the requirement is to use JHF Take 136.