Hello all,
Everything works fine when I create hosts, but when I remove one from the map, Terraform tries to delete the host before removing it from the group. The Check Point API then returns an error that it cannot delete an object that is still in use.
The destroy happens before the update-in-place, and the only way I found to change that ordering is create_before_destroy, but then I run into other issues with publishing/installing policies, because those resources use destroy-and-then-create-replacement.
I also tried adding replace_triggered_by to the group, but it still plans an update-in-place.
Any ideas how to solve this?
Code:
```hcl
locals {
  clients = {
    "client_1" = {
      remote_ip   = "10.100.200.1"
      remote_port = "3001"
    }
    "client_2" = {
      remote_ip   = "10.100.200.2"
      remote_port = "3002"
    }
    "client_3" = {
      remote_ip   = "10.100.200.3"
      remote_port = "3003"
    }
  }
}

resource "checkpoint_management_host" "hosts_lab" {
  for_each        = local.clients
  name            = "host_${each.key}"
  ipv4_address    = each.value["remote_ip"]
  ignore_warnings = true
  nat_settings    = {}
  tags            = []

  lifecycle {
    precondition {
      condition     = can(cidrsubnet("${each.value["remote_ip"]}/32", 0, 0))
      error_message = "Must be a valid IPv4 address."
    }
  }
}

resource "checkpoint_management_group" "groups_lab" {
  name            = "group_terraformed"
  members         = values(checkpoint_management_host.hosts_lab)[*].name
  ignore_warnings = true
  depends_on      = [checkpoint_management_host.hosts_lab]

  lifecycle {
    replace_triggered_by = [checkpoint_management_host.hosts_lab]
  }
}

resource "checkpoint_management_service_tcp" "tcp_service" {
  for_each                    = local.clients
  name                        = "tcp_${each.key}"
  port                        = each.value.remote_port
  session_timeout             = 3600
  match_for_any               = true
  sync_connections_on_cluster = true
  ignore_warnings             = true

  aggressive_aging = {
    enable              = true
    timeout             = 360
    use_default_timeout = false
  }

  keep_connections_open_after_policy_installation = true
  tags                                            = []

  lifecycle {
    precondition {
      condition = (
        each.value["remote_port"] >= 1000 &&
        each.value["remote_port"] <= 65000
      )
      error_message = "Port number must be between 1000 and 65000."
    }
  }
}

resource "checkpoint_management_access_rule" "in-policy-FWL_VS1" {
  for_each    = local.clients
  name        = each.key
  layer       = "FWLVS1_policy Network"
  position    = { top = "top" }
  source      = ["existing_group"]
  destination = ["host_${each.key}"]
  service     = ["tcp_${each.key}"]
  action      = "Accept"

  track = {
    accounting     = true
    type           = "Log"
    per_connection = "true"
  }

  action_settings = {
    enable_identity_captive_portal = false
  }

  content       = []
  custom_fields = {}
  time          = []

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_service_tcp.tcp_service,
  ]
}

resource "checkpoint_management_access_rule" "in-policy-FWL_VS2" {
  for_each    = local.clients
  name        = each.key
  layer       = "FWLVS2_policy Network"
  position    = { top = "top" }
  source      = ["existing_group"]
  destination = ["host_${each.key}"]
  service     = ["tcp_${each.key}"]
  action      = "Accept"

  track = {
    accounting     = true
    type           = "Log"
    per_connection = "true"
  }

  action_settings = {
    enable_identity_captive_portal = false
  }

  content       = []
  custom_fields = {}
  time          = []

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_service_tcp.tcp_service,
  ]
}

resource "checkpoint_management_publish" "unstable_lab" {
  triggers = toset([sha1(jsonencode([
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
  ]))])

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
  ]
}

resource "checkpoint_management_install_policy" "FWL_VS1" {
  policy_package = "FWLVS1_policy"
  targets        = ["FWLVS1"]

  triggers = toset([sha1(jsonencode([
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
  ]))])

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
    checkpoint_management_publish.unstable_lab,
  ]
}

resource "checkpoint_management_install_policy" "FWL_VS2" {
  policy_package = "FWLVS2_policy"
  targets        = ["FWLVS2"]

  triggers = toset([sha1(jsonencode([
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
  ]))])

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
    checkpoint_management_publish.unstable_lab,
    checkpoint_management_install_policy.FWL_VS1,
  ]
}

resource "checkpoint_management_logout" "unstable_lab" {
  triggers = [timestamp()]

  depends_on = [
    checkpoint_management_host.hosts_lab,
    checkpoint_management_access_rule.in-policy-FWL_VS1,
    checkpoint_management_access_rule.in-policy-FWL_VS2,
    checkpoint_management_service_tcp.tcp_service,
    checkpoint_management_publish.unstable_lab,
    checkpoint_management_install_policy.FWL_VS1,
    checkpoint_management_install_policy.FWL_VS2,
  ]
}
```
```
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
  - destroy
-/+ destroy and then create replacement

Terraform will perform the following actions:

  ~ resource "checkpoint_management_group" "groups_lab" {
        id      = "cb645dd5-5221-445a-ab4b-d12d8bab0a61"
      ~ members = [
          - "host_client_3",
            # (2 unchanged elements hidden)
        ]
        name    = "group_terraformed"
        tags    = []
        # (3 unchanged attributes hidden)
    }

-/+ resource "checkpoint_management_install_policy" "FWL_VS1" {
      ~ id       = "install-policy-nrqihmvykd" -> (known after apply)
      ~ task_id  = "1ccf9d30-b246-43f4-8ce8-1c8cc2b5bb49" -> (known after apply)
      ~ triggers = [ # forces replacement
          + "00634fa4a304ad78e0d01badc15de0e3859b95e1",
          - "b9d1295431895bedb0005b1b1a877bed2f451200",
        ]
        # (4 unchanged attributes hidden)
    }

-/+ resource "checkpoint_management_install_policy" "FWL_VS2" {
      ~ id       = "install-policy-br8oh3nykd" -> (known after apply)
      ~ task_id  = "95c7efda-2ac7-48ed-a72f-235fa835e0d8" -> (known after apply)
      ~ triggers = [ # forces replacement
          + "00634fa4a304ad78e0d01badc15de0e3859b95e1",
          - "b9d1295431895bedb0005b1b1a877bed2f451200",
        ]
        # (4 unchanged attributes hidden)
    }

-/+ resource "checkpoint_management_logout" "unstable_lab" {
      ~ id       = "logout-ypdyzdj9kg" -> (known after apply)
      ~ triggers = [
          - "2023-01-03T18:20:43Z",
        ] -> (known after apply) # forces replacement
    }

-/+ resource "checkpoint_management_publish" "unstable_lab" {
      ~ id       = "publish-vgndg6ldby" -> (known after apply)
      ~ task_id  = "01234567-89ab-cdef-8d12-8c39b51d80ed" -> (known after apply)
      ~ triggers = [ # forces replacement
          + "00634fa4a304ad78e0d01badc15de0e3859b95e1",
          - "b9d1295431895bedb0005b1b1a877bed2f451200",
        ]
    }
```
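One interim workaround (a sketch only, not verified against this exact setup; the resource address comes from the config above) is to force the group's update-in-place to run before the host destroy by applying the group on its own first:

```shell
# Step 1: apply only the group, so its membership update runs while the
# host object still exists on the management server. The removed host's
# destroy is not needed to converge this target, so it is skipped.
terraform apply -target=checkpoint_management_group.groups_lab

# Step 2: a normal full apply now deletes the host, which is no longer
# referenced by the group.
terraform apply
```

Note that `-target` is intended for exceptional situations, so this is a stopgap until the provider orders the operations correctly.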
It's a Terraform provider problem, not a management or API problem.
I also posted this on the provider's GitHub page, and a developer acknowledged the bug and promised to fix it in the next release.
https://github.com/CheckPointSW/terraform-provider-checkpoint/issues/135
The version I tried is v2.1.0, so look for the bug fix in the release notes of future versions.
Another note: I also tried ignore_warnings/ignore_errors, but that did not help either; the Check Point API/GUI does not allow you to delete a host that is part of a group.
What is the version/JHF of management?
What does a where-used on the relevant object show?
HOTFIX_R81_10_JUMBO_HF_MAIN Take: 66
The relevant object showed up in the group I created with Terraform. The process:
1. Create host x with Terraform.
2. Create group y with member host x with Terraform.
At this point host x is in use by group y.
3. When removing host x, Terraform plans two actions:
3a. destroy: host x
3b. update-in-place: remove host x from group y
Since Terraform executes 3a before 3b, the API returns an error that the host is still in use.
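Until the fix lands, the ordering can also be worked around in configuration by splitting the removal into two applies. This is only a sketch; the `retiring` local is my own invention and not part of the original config:

```hcl
locals {
  # Step 1 (first apply): instead of deleting "client_3" from
  # local.clients, mark it as retiring. The host resource still exists,
  # but the group stops referencing it, so the group's update-in-place
  # succeeds against the management API.
  retiring = ["client_3"]
}

resource "checkpoint_management_group" "groups_lab" {
  name = "group_terraformed"

  # Exclude retiring hosts from the membership list.
  members = [
    for key, host in checkpoint_management_host.hosts_lab : host.name
    if !contains(local.retiring, key)
  ]

  ignore_warnings = true
}

# Step 2 (second apply): remove "client_3" from local.clients as well;
# the host is no longer a group member, so the delete is allowed.
```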
I once had a hell of a time trying to delete an identity provider object that was referenced by a specific gateway. I must have spent close to 3 hours on the phone with TAC until we finally got it; I had to re-log into GuiDBedit close to 20 times and remove every single reference to it. I hope your case will not be like mine, but GuiDBedit is always a good place to start, because once the object is removed from the database, you will not have any issues in SmartConsole.