Hi,
I am trying to understand the output of the "show-package" API call. We use it in a software product that parses the policy table from Check Point firewalls.
The issue seems to be related to "cluster-members-revision" in the API output: it lists only one of the cluster members in "target-name", not the other. The value changes, usually on policy installation; it shows either cluster member 01 (cpfw-01) or cluster member 02 (cpfw-02) depending on the policy installation revision being made, regardless of which cluster member is currently active. I am attaching the output below for two different VSs on one policy (the issue is marked in red). Our parsing tool then assumes the policy is installed on only one cluster member and reports an incorrect rule check while traversing the path.
I am not sure if this is the expected behavior or if I am missing something.
We are running MDS version R81.10 (Take 87); the gateways are VSX on R80.40 (Take 158 and Take 173).
mgmt_cli show package name "Standard-TEST" -d 10.10.10.10 --format json
{
"uid" : "8ada3f0e-83ad-4632-b7fe-e1a196effc67",
"name" : "Standard-TEST",
"type" : "package",
"domain" : {
"uid" : "16ede1fe-360c-1a40-8aa6-68fc2a03d3d0",
"name" : "cp-mgmt",
"domain-type" : "domain"
},
"installation-targets-revision" : [ {
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-A",
"target-uid" : "793c1d93-4391-455a-ae27-1835237c395a",
"revision" : {
"uid" : "61e3a516-a907-4eb8-97b2-66b509bb7618",
"name" : "test@27.3.2023.",
"type" : "session",
"domain" : {
"uid" : "16ede1fe-360c-1a40-8aa6-68fc2a03d3d0",
"name" : "cp-mgmt",
"domain-type" : "domain"
},
"icon" : "Objects/worksession",
"color" : "black"
}
} ],
"target-name" : "TEST-GW-A",
"target-uid" : "2497f730-ba9d-4765-b48c-270be5965404"
},
{
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-B",
"target-uid" : "1f287688-d7f6-4603-b9d2-9d5472e27267",
"revision" : {
"uid" : "11557247-cdec-4750-a128-c468889286ef",
"name" : "test@17.2.2023.",
"type" : "session",
"domain" : {
"uid" : "16ede1fe-360c-1a40-8aa6-68fc2a03d3d0",
"name" : "cp-mgmt",
"domain-type" : "domain"
},
"icon" : "Objects/worksession",
"color" : "black"
}
} ],
"target-name" : "TEST-GW-B",
"target-uid" : "4efad116-0102-412d-bf2d-d59fbe3559d6"
},
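For context, here is roughly how a parser would walk the output above (a minimal Python sketch; the field names match the JSON shown, and the sample data is abbreviated to just the names):

```python
import json

# Abbreviated sample of the "show package" output shown above:
# each installation target carries a cluster-members-revision list.
raw = """
{
  "installation-targets-revision": [
    {
      "target-name": "TEST-GW-A",
      "cluster-members-revision": [
        {"target-name": "cpfw-01_TEST-GW-A"}
      ]
    },
    {
      "target-name": "TEST-GW-B",
      "cluster-members-revision": [
        {"target-name": "cpfw-02_TEST-GW-B"}
      ]
    }
  ]
}
"""

pkg = json.loads(raw)

# Map each cluster target to the member names reported for it.
members = {
    tgt["target-name"]: [m["target-name"] for m in tgt["cluster-members-revision"]]
    for tgt in pkg["installation-targets-revision"]
}

print(members)
```

Each cluster ends up with only a single member in its list, which is exactly what trips up a parser that expects an entry per member.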
Looks like both gateways are returned in the API call.
What am I missing here?
Maybe I phrased that a bit confusingly; let me rephrase.
There are two gateways (GW-A and GW-B) in a cluster (in fact there are many more in a cluster, but this is just an example), they are all currently active on site cpfw-01:
a1) cpfw-01_TEST-GW-A
a2) cpfw-02_TEST-GW-A
b1) cpfw-01_TEST-GW-B
b2) cpfw-02_TEST-GW-B
The API output is:
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-A",
...
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-B",
After a policy install the API output is:
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-A",
...
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-B",
After another policy install the API output is:
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-A",
...
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-B",
And so on: it seems to randomly return either cpfw-01 or cpfw-02, regardless of which site is active. I would expect it to return both sites, for example:
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-A",
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-A",
...
"cluster-members-revision" : [ {
"target-name" : "cpfw-01_TEST-GW-B",
"cluster-members-revision" : [ {
"target-name" : "cpfw-02_TEST-GW-B",
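Until this is clarified, one way a parsing tool could guard against the single-member output is to compare what the API reports against an expected member list per cluster. This is just a sketch; the names are the hypothetical ones from the example above, and the expected lists would have to be maintained separately:

```python
# Expected members per cluster (hypothetical names from the example above).
expected = {
    "TEST-GW-A": {"cpfw-01_TEST-GW-A", "cpfw-02_TEST-GW-A"},
    "TEST-GW-B": {"cpfw-01_TEST-GW-B", "cpfw-02_TEST-GW-B"},
}

# Member names actually reported by "show package" after one policy install.
reported = {
    "TEST-GW-A": {"cpfw-01_TEST-GW-A"},
    "TEST-GW-B": {"cpfw-01_TEST-GW-B"},
}

# Flag clusters where the API reports fewer members than expected,
# instead of concluding the policy was installed on only one member.
for cluster, want in expected.items():
    missing = want - reported.get(cluster, set())
    if missing:
        print(f"{cluster}: no revision reported for {sorted(missing)}")
```

This at least lets the tool distinguish "member absent from the output" from "policy genuinely not installed on that member".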
Below is the description from the API reference:
Are you installing the same policy to the clusters in question at the same time?
@Omer_Kleinstern can you comment here?
Assuming you are using the install-on-all-cluster-members-or-fail option (the default), all the cluster members will have the pushed policy.
Yes, installing the same policy using the default option.
The cluster members have the same pushed policy, that is correct, but it still seems a bit "incomplete" to get the output from only one site.
I will see if I can test with cluster members that do not have the same pushed policy (I will try shutting down one site and then pushing the policy), and then check the API output.
I don't disagree, since the docs say the API should output information for both cluster members.
It's worth a TAC case: https://help.checkpoint.com
Regardless, I believe it is a largely "cosmetic" issue.