<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82. in General Topics</title>
    <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275319#M45976</link>
    <description>&lt;P&gt;Looks like we may finally have found the root cause of the issue: &lt;STRONG&gt;sk163835&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;After running "fw ctl zdebug + conn drop vm link nat xlate xltrc", TAC spotted that the NAT-TRAVERSAL packets from the Azure gateway (line 5, bold) were having their source port NATed.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74218;[vs_0];[tid_0];[fw4_0];fwx_get_xlbuf: SRV xlation buffer found for request: vmside=1, cli-&amp;gt;srv(1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74219;[vs_0];[tid_0];[fw4_0];fwx_get_xldata: got (172.18.254.6,35d9,0.0.0.0,0 : 0) flags = 220, cli-&amp;gt;serv (1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74220;[vs_0];[tid_0];[fw4_0];fw_xlate_packet: connection &amp;lt;dir 1, 172.18.254.6:4500 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt;, OUTBOUND(1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74221;[vs_0];[tid_0];[fw4_0];fw_xlate: changing &amp;lt;dir 1, 172.18.254.6:4500 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt; to &amp;lt;dir 0, 172.18.254.6:13785 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt;;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74222;[vs_0];[tid_0];[fw4_0];After POST VM: &amp;lt;dir 1, &lt;STRONG&gt;172.18.254.6:13785&lt;/STRONG&gt; -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt; (len=204) ;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74223;[vs_0];[tid_0];[fw4_0];POST VM Final action=ACCEPT;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;The Azure VNET is 172.18.0.0/16, and the FrontendSubnet is part of this VNET.&lt;BR /&gt;The last hide-NAT rule in the rulebase was hiding 172.18.0.0/16 behind the gateway.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;This took a long time to track down: the tunnel would work for up to 7 hours at most, and had been running fine for years on R81.x, but after deploying R82 it suddenly started getting messy. So it is definitely a configuration issue, but the sudden intermittent failures on R82 made it hard to find.&lt;/P&gt;&lt;P&gt;Implemented the following NAT rules to test, and it has now been running stable for 9 hours with no ICMP unreachable messages in the logs, so fingers crossed this solves it permanently &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="signal-2026-04-12-062112_004.png" style="width: 999px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/34002iF5017C351E396318/image-size/large?v=v2&amp;amp;px=999" role="button" title="signal-2026-04-12-062112_004.png" alt="signal-2026-04-12-062112_004.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 15 Apr 2026 12:10:10 GMT</pubDate>
    <dc:creator>PetterD</dc:creator>
    <dc:date>2026-04-15T12:10:10Z</dc:date>
    <item>
      <title>Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275264#M45972</link>
      <description>&lt;P&gt;I have a strange VPN issue with Cloudguard R82.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Environment:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Azure: Cloudguard R82 T60 + time fix (also tried T91), single gateway, 2 cores.&lt;BR /&gt;HQ: 6000 Appliance with R81.20 + SmartCenter R82 T91 (includes time fix)&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;STRONG&gt;Setup/Changes&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;The VPN Community between the Azure Cloudguard and the HQ Gateway has been running fine for years.&lt;BR /&gt;Yesterday we upgraded SmartCenter to R82 and deployed a new Cloudguard FW on R82.&lt;/P&gt;&lt;P&gt;Both gateways are managed by the same SmartCenter.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Issue:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;After installing the R82 Cloudguard, establishing SIC + license, and pushing policy,&lt;/P&gt;&lt;P&gt;the IPsec VPN would just not work. The Cloudguard R82 FW was sending "port unreachable" messages back to the HQ FW.&lt;BR /&gt;I did a cpstop;cpstart on the Cloudguard FW; the VPN was then established, but only for some networks.&lt;/P&gt;&lt;P&gt;At HQ we have a list of /24 networks only. On the Cloudguard, we have one /16 network in the encdomain.&lt;BR /&gt;However, the tunnel was established for various /30, /32, /28, /29 networks (supernetting in reverse).&lt;/P&gt;&lt;P&gt;Changed to "One VPN tunnel per gateway" on the Community, which seemed to work fine.&lt;BR /&gt;Then after a few hours the VPN stopped working again: SAs were up, but "vpn tu tlist" showed the tunnel as down.&lt;/P&gt;&lt;P&gt;A bunch of "port unreachable" messages again from the Azure FW. Tried a "vpn tu" to reset the tunnel, with no change.&lt;BR /&gt;Did another cpstop;cpstart and it came up, worked for 7-8 hours, and then it was down again for 45 minutes until it suddenly worked.&lt;BR /&gt;(We do have some reports that in these 7-8 hours there were several periods of 1, 5, 10-30 minutes of packet loss as well.)&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This is the output of "vpn tu tlist" when the issue is present; it looks the same on both sides.&lt;BR /&gt;Then after 1, 5, 10, or 30 minutes it is connected again.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;+-----------------------------------------+----------------------------------+---------------------+&lt;BR /&gt;| Peer: IP-IN-AZURE - FWAzure | MSA: 7fe6e4631258 | i: 0 ref: 15 |&lt;BR /&gt;| Methods: ESP Tunnel AES-GCM-256 | | i: 1 ref: 15 |&lt;BR /&gt;| My TS: 0.0.0.0/0 | | i: 2 ref: 19 |&lt;BR /&gt;| Peer TS: 0.0.0.0/0 | | |&lt;BR /&gt;| MSPI: 1000298 (i: 2, p: 0, d: 0) | No outbound SPI | |&lt;BR /&gt;| Tunnel created: | NAT-T | |&lt;BR /&gt;| Tunnel expiration: | Disconnected | |&lt;BR /&gt;+-----------------------------------------+----------------------------------+---------------------+&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;We already have a TAC case and have had several remote sessions. Currently waiting for the issue to occur again to gather even more debugs. &lt;SPAN&gt;The only change was R82 Management + the R82 Cloudguard. The HQ FW has several other tunnels working just fine.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Has anyone else experienced something like this?&lt;BR /&gt;The suspect here is definitely the R82 Cloudguard.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 11:45:25 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275264#M45972</guid>
      <dc:creator>PetterD</dc:creator>
      <dc:date>2026-04-10T11:45:25Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275267#M45973</link>
      <description>&lt;P&gt;Check if you happen to see anything aligned to&amp;nbsp;&lt;SPAN&gt;sk184507?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 12:42:09 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275267#M45973</guid>
      <dc:creator>Chris_Atkinson</dc:creator>
      <dc:date>2026-04-10T12:42:09Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275269#M45974</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Thanks for the tip!&lt;/P&gt;&lt;P&gt;I did check sk184507, but found no coredumps in /var/log/dump/usermode and no error messages when running ike-debug.&lt;BR /&gt;I also checked the iked process, and it looks like its last restart was yesterday, during the latest cpstop;cpstart &lt;span class="lia-unicode-emoji" title=":confused_face:"&gt;😕&lt;/span&gt;&lt;/P&gt;&lt;P&gt;###########&lt;BR /&gt;[Expert@fwcpr82:0]# ps x|grep iked&lt;BR /&gt;4801 ? SLl 1:15 iked 0&lt;BR /&gt;32486 pts/1 S+ 0:00 grep --color=auto iked&lt;BR /&gt;[Expert@fwcpr82:0]# ps -p 4801 -o lstart=&lt;/P&gt;&lt;P&gt;Thu Apr 9 21:12:15 2026&lt;BR /&gt;[Expert@fwcpr82:0]#&lt;/P&gt;&lt;P&gt;########&lt;BR /&gt;&lt;BR /&gt;These intermittent failures are extremely frustrating; we never know when the tunnel will go down or for how long, so it is pretty hard to investigate &lt;span class="lia-unicode-emoji" title=":confused_face:"&gt;😕&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 12:59:11 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275269#M45974</guid>
      <dc:creator>PetterD</dc:creator>
      <dc:date>2026-04-10T12:59:11Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275290#M45975</link>
      <description>&lt;P&gt;During the occurrences (which happen infrequently, lasting 1, 3, 5, 10, or 30 minutes) we are observing packet drops on the Azure FW on NAT-TRAVERSAL packets from the on-prem FW.&lt;BR /&gt;&lt;BR /&gt;[Expert@AZUREFW:0]# fw ctl zdebug + drop |grep ONPREMISEFWIP&lt;BR /&gt;@;73289377.32700;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;@;73289387.32736;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;@;73289623.33069;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;@;73289640.33114;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;@;73289696.33197;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;@;73289703.33207;[vs_0];[tid_0];[fw4_0];fw_log_drop_ex: Packet proto=17 ONPREMISEFWIP:4500 -&amp;gt;AZUREFWWANIP:4500 dropped by fw_handle_first_packet Reason: fwconn_key_init_links (INBOUND) failed;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I'm seeing this drop in other places related to NAT as well. Since the Azure firewall does not actually have a public IP on the gateway in the topology, the public IP is manually defined under "statically NATed" on the object, which has been working fine for years, until R82.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Apr 2026 17:17:02 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275290#M45975</guid>
      <dc:creator>PetterD</dc:creator>
      <dc:date>2026-04-10T17:17:02Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275319#M45976</link>
      <description>&lt;P&gt;Looks like we may finally have found the root cause of the issue: &lt;STRONG&gt;sk163835&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;After running "fw ctl zdebug + conn drop vm link nat xlate xltrc", TAC spotted that the NAT-TRAVERSAL packets from the Azure gateway (line 5, bold) were having their source port NATed.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74218;[vs_0];[tid_0];[fw4_0];fwx_get_xlbuf: SRV xlation buffer found for request: vmside=1, cli-&amp;gt;srv(1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74219;[vs_0];[tid_0];[fw4_0];fwx_get_xldata: got (172.18.254.6,35d9,0.0.0.0,0 : 0) flags = 220, cli-&amp;gt;serv (1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74220;[vs_0];[tid_0];[fw4_0];fw_xlate_packet: connection &amp;lt;dir 1, 172.18.254.6:4500 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt;, OUTBOUND(1);&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74221;[vs_0];[tid_0];[fw4_0];fw_xlate: changing &amp;lt;dir 1, 172.18.254.6:4500 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt; to &amp;lt;dir 0, 172.18.254.6:13785 -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt;;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74222;[vs_0];[tid_0];[fw4_0];After POST VM: &amp;lt;dir 1, &lt;STRONG&gt;172.18.254.6:13785&lt;/STRONG&gt; -&amp;gt; ONPREMISEGATEWAYIP:4500 IPP 17&amp;gt; (len=204) ;&lt;/EM&gt;&lt;BR /&gt;&lt;EM&gt;@;198991747.74223;[vs_0];[tid_0];[fw4_0];POST VM Final action=ACCEPT;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;The Azure VNET is 172.18.0.0/16, and the FrontendSubnet is part of this VNET.&lt;BR /&gt;The last hide-NAT rule in the rulebase was hiding 172.18.0.0/16 behind the gateway.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;This took a long time to track down: the tunnel would work for up to 7 hours at most, and had been running fine for years on R81.x, but after deploying R82 it suddenly started getting messy. So it is definitely a configuration issue, but the sudden intermittent failures on R82 made it hard to find.&lt;/P&gt;&lt;P&gt;Implemented the following NAT rules to test, and it has now been running stable for 9 hours with no ICMP unreachable messages in the logs, so fingers crossed this solves it permanently &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="signal-2026-04-12-062112_004.png" style="width: 999px;"&gt;&lt;img src="https://community.checkpoint.com/t5/image/serverpage/image-id/34002iF5017C351E396318/image-size/large?v=v2&amp;amp;px=999" role="button" title="signal-2026-04-12-062112_004.png" alt="signal-2026-04-12-062112_004.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 15 Apr 2026 12:10:10 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275319#M45976</guid>
      <dc:creator>PetterD</dc:creator>
      <dc:date>2026-04-15T12:10:10Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275331#M45977</link>
      <description>&lt;P&gt;Please keep us posted - we are running into similar issues with a Cloudguard gateway.&lt;/P&gt;</description>
      <pubDate>Sun, 12 Apr 2026 20:44:04 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275331#M45977</guid>
      <dc:creator>CarlosCP</dc:creator>
      <dc:date>2026-04-12T20:44:04Z</dc:date>
    </item>
    <item>
      <title>Re: Cloudguard R82  Site2Site intermittent failures after upgrade to R82.</title>
      <link>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275371#M45980</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;See my last post; the issue was resolved by creating no-NAT rules for IKE/NAT-T from the gateway's WAN IP in the Frontend subnet!&lt;/P&gt;&lt;P&gt;Having a hide NAT for the whole VNET that includes the Frontend subnet (and therefore also the Cloudguard) was a tripwire that started causing issues after the upgrade. Took some time (and pain) to figure out ;)&lt;/P&gt;</description>
      <pubDate>Mon, 13 Apr 2026 18:35:55 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/General-Topics/Cloudguard-R82-Site2Site-intermittent-failures-after-upgrade-to/m-p/275371#M45980</guid>
      <dc:creator>PetterD</dc:creator>
      <dc:date>2026-04-13T18:35:55Z</dc:date>
    </item>
  </channel>
</rss>

