<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Lakera Bulletin - This Week in AI #51: The Week AI Security Cracked Open in AI Agents Security</title>
    <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/274846#M78</link>
    <description>&lt;P data-end="278" data-start="13"&gt;&lt;SPAN&gt;It’s been a week dominated by one theme: AI security is breaking into the open. From silent data exfiltration to leaked model code and growing concern over autonomous cyber capabilities, the gap between AI power and AI protection is becoming impossible to ignore.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="300" data-start="280"&gt;&lt;SPAN&gt;Let’s get into it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="352" data-start="302"&gt;&lt;SPAN&gt;OpenAI Patches Stealth Data Exfiltration Flaw&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="807" data-start="353"&gt;&lt;SPAN&gt;Check Point Research discovered a new vulnerability that&amp;nbsp;allowed attackers to extract sensitive data from ChatGPT via DNS queries,&amp;nbsp;without users ever noticing. It shows how prompt injection risks can extend beyond the model itself into hidden system channels.&lt;/SPAN&gt;&lt;BR data-end="590" data-start="587" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP7mt9PW5BWSxg6lZ3l6N9h0VxZ-t-dHW7zsyHC114ZL3W4X413Y3ftTGXW6rGX9D8hJdK8N9gbwJ2Xdh--W50W9mc5LNtSMW3B1Jlk6LtzvGW80pJ_01FQpKTW6_1V0q7wxYk_W4GY15T4M291TW7pDmYj7PMJmwW4f9lmn5HSZ8JW35j_Gv2y0BGMW4_dl1J62L9R8W3vjNbN8zH7RSW7qrh8v7C17gSVtzczf3HRxzXW3s0Rnc35c8j7N4Qs6NwcyqcnW461PNX84Hx9hW5q-DNX4JgZ48W7JxV0734HBK9W8hmhJK1K13x1W2MrFKr2XWjF5W4kwnbt7NHvWJW8m32_n3tRnQRW6Mx_kj1MqBBzW6zggnh1G_l1_W93V00M6d4-s4W2xT8PV8QbfDlW5WQ5FT6PxjjYW4Lmpcc93HB0pV31Rg58Rkk69N85jY5H4XSV8W9kFSXQ3B3tvTW9cbrzW6Yhd71W1NvYxb5ww-2BW4Szlc-8Vb652W678Ljx1QVtzHW8MQfxN4zm0_bW3P2hTv7L6vBKW15syD61ZV57xMGN5HccF0h0VdyBr-9fcRnLW3h7dlm1H-jbrVQW8dN8Zz_ZqW7HNgl766mpm9W6Sv6w65H8nrBW7mxY8L5VxPCfW6njLd08Tq-Rjf7BFrWH04" rel="noopener" data-hs-link-id-v2="6JalBVbK" data-hs-link-id="0" data-end="805" data-start="593" target="_blank"&gt;Read the vulnerability breakdown&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="854" data-start="809"&gt;&lt;SPAN&gt;Anthropic Accidentally Leaks Claude Code&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="1198" data-start="855"&gt;&lt;SPAN&gt;Anthropic exposed roughly 500,000 lines of internal code for its AI coding agent due to a packaging mistake. While no user data was compromised, the incident raises concerns about supply-chain security and how easily sensitive AI systems can be exposed.&lt;/SPAN&gt;&lt;BR data-end="1111" data-start="1108" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3mYN24lTnlBsKn7W7vmg_F8--D2zVMGbP93VM_zGW5m3pz36hSRnFW2dXFvr6k8CjkW52My8l7zKn1-W2SGq178RCCZgW95C56M1JCpS2W1Y1k2v1J-t_RW8gFfrL1gD4X_W8mKbSs2W9tZ8W1YFqvV7kB05TW64_TKG8-0_-VW2ryzR3799V2yW1GrMYM12LDqRW5bz_xn306ScZW1krc9Z3nW4J4W6MgLmw3SmCXfW8_RQdv7hn8JjW5mX02X5CZjQCW8VK1VW1yFsh0W3gpdGP2PBS79W8K8PM51rFVgDW4Rm9Hg24-FyPW7BKWWL745SqjW94jncB8CjzlpW3R51QT6bgjNJW1wSJqp6vQFMkW3RC70q2R6rBmW5l3qFX1-HLqVW35C1Xv6HSWwsW6Ky_bT1h8Wt1Mc3v834BNtQW7VJpH688vxDlf3svmjH04" rel="noopener" data-hs-link-id-v2="oIyt3VFo" data-hs-link-id="0" data-end="1196" data-start="1114" target="_blank"&gt;Read the full report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="1261" data-start="1200"&gt;&lt;SPAN&gt;AI Model Capable of Autonomous Cyberattacks Raises Alarm&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="1609" data-start="1262"&gt;&lt;SPAN&gt;Reports suggest Anthropic has developed a model capable of independently carrying out sophisticated cyberattacks, prompting briefings with government officials. The development signals a shift toward AI systems that can act as fully autonomous offensive actors.&lt;/SPAN&gt;&lt;BR data-end="1526" data-start="1523" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3p1W4tBYQm3V5ZC_W8qWbb67l7yrhW5bsbR55SHh0hW97Fzs_29xGnXW3vYMNT9kHSdFW267trc4ZKjV0W6STTXW8JzsV5W5cb-7B2-B7lBW5j_2n53CHpnTMhP4Ys2dLwgW1t_SGf8q6cwRW3R_4S0526jNKN7T68JTWhRBtW8mbF7H5TWpqjW21t57F6TdjYNW5Ys6vJ7dQ-6tW1NNr2g7504QTW3Bt-Tf85KKBMW5VDlxR4Q_z34W8TRydw5G5lrbN3tLZJ09xTdHW6S54qd1HgxqgW96jJWH5bnGYdW89rBfK4M5HM_W8y4MgS3xfJT0W76GCBL6rNMcfW4HJWpZ3C90v5VN00PZ2j2ZX_W7-SDsN864wlrW3pVrGY3JRb-hW12zSD63t9mb9W1yjy561YK6CKW7cM3hz816zmHN8zSlmfd1Vjmf9lPR1P04" rel="noopener" data-hs-link-id-v2="QvocyzyE" data-hs-link-id="0" data-end="1607" data-start="1529" target="_blank"&gt;Read the Axios report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="1679" data-start="1611"&gt;&lt;SPAN&gt;Apple Rolls Out Emergency Protections Against DarkSword Exploit&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="2021" data-start="1680"&gt;&lt;SPAN&gt;Apple released security updates to protect devices against the DarkSword exploit kit, reportedly used by spyware vendors and state actors. The move highlights how quickly real-world threats are evolving alongside AI-assisted attack techniques.&lt;/SPAN&gt;&lt;BR data-end="1926" data-start="1923" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3pBW1Mfv1C66T5ytW6Bc7JL7mkGBCW1zRYt_6Sw12ZW6vrYG09cbw21VRxwjV1gvDMmW1rxBZd3q2qGwW4pGg9S582N9MW8y5Wfm18fmxsW7hVDlX8QBqlhVGZlhw4BLZlvW6V07cW4By77MW1YW7BG2L_QdHVwHwpx2L-FM4N4nWJzxdpQy4W48NNbC2ZvnZqMmCJyvtTWXJW5bfB1K8d7QjYN4q5w-3ZNky0N8xTGb-HrnpSW51742l8Pq8xpW3M2hwY1rpPvRN49JHBVRZjY3W8J5wmC308gWdW65-YZv2jMSxMW3z4nGL1YwmwtW7hM68S8_mVxKW40dZGh5J5LMRW5pwmbW2VH8DfW377Sg786XWpXW2Wcs8c4nv371W23s_9N6StS4BVfGVzq5SLK8gW66PbJw45XyJxW55sC_t5_9Q4df7P6ntv04" rel="noopener" data-hs-link-id-v2="1s7IDOK2" data-hs-link-id="0" data-end="2019" data-start="1929" target="_blank"&gt;Read the security update details&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="2072" data-start="2023"&gt;&lt;SPAN&gt;Open-Source AI Is Creating New Security Debt&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="2403" data-start="2073"&gt;&lt;SPAN&gt;Experts warn that rapid adoption of open-source AI components is introducing hidden vulnerabilities across enterprise systems. Without proper oversight, organizations risk accumulating “security debt” that becomes increasingly difficult to manage.&lt;/SPAN&gt;&lt;BR data-end="2323" data-start="2320" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7Wbrxv5nXHCW50kH_H6lZ3mMW6cc3tW8gYRNYVlkcvb26slgxW3BXqlL7z-GBdW3lpM0y27xG69N2YCXy73BtZjW3sQpmh6FbGfKW8v00pH1BvWhGW4NrkbJ2G7jlgW4m8TDk7bQm0nMHRWrf46qCnW6pjXmP4yqCj2W6xFGw_7wWX23N8B-BZP8KS7rW2tJl-G6XGmYlW1kzfD97d78qJVrL-jK7bMw5sW2BLL3V23Sy7PTPT-j5MH41RW4fNpTH5G_DtBW46VWkq5Y1nwdW90C-bb5nMjpmW5mMfmq7ymvhBW4HhhSz6DwkqKN75110NcMQzcW76yJqq5qb_7sW81V7yL1KlFklW7Xbwnk8GK18TW3skLLz7JQWwdW7hDS3y68dDCVW4FwXsY4TsJyGMN1m4P-xStWW30372R8tXdv5f77Cwzj04" rel="noopener" data-hs-link-id-v2="m6odF1ll" data-hs-link-id="0" target="_blank"&gt;Explore the analysis&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="2561" data-start="2405"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-end="2561" data-start="2405"&gt;&lt;SPAN&gt;From silent leaks to autonomous attack capabilities, the trajectory is clear: AI systems are becoming more powerful, and more exposed,&amp;nbsp;at the same time.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="3659" data-start="3639"&gt;&lt;SPAN&gt;See you next week!&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Sat, 04 Apr 2026 08:42:13 GMT</pubDate>
    <dc:creator>_Val_</dc:creator>
    <dc:date>2026-04-04T08:42:13Z</dc:date>
    <item>
      <title>Lakera Bulletin - This Week in AI #51: The Week AI Security Cracked Open</title>
      <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/274846#M78</link>
      <description>&lt;P data-end="278" data-start="13"&gt;&lt;SPAN&gt;It’s been a week dominated by one theme: AI security is breaking into the open. From silent data exfiltration to leaked model code and growing concern over autonomous cyber capabilities, the gap between AI power and AI protection is becoming impossible to ignore.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="300" data-start="280"&gt;&lt;SPAN&gt;Let’s get into it.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="352" data-start="302"&gt;&lt;SPAN&gt;OpenAI Patches Stealth Data Exfiltration Flaw&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="807" data-start="353"&gt;&lt;SPAN&gt;Check Point Research discovered a new vulnerability that&amp;nbsp;allowed attackers to extract sensitive data from ChatGPT via DNS queries,&amp;nbsp;without users ever noticing. It shows how prompt injection risks can extend beyond the model itself into hidden system channels.&lt;/SPAN&gt;&lt;BR data-end="590" data-start="587" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP7mt9PW5BWSxg6lZ3l6N9h0VxZ-t-dHW7zsyHC114ZL3W4X413Y3ftTGXW6rGX9D8hJdK8N9gbwJ2Xdh--W50W9mc5LNtSMW3B1Jlk6LtzvGW80pJ_01FQpKTW6_1V0q7wxYk_W4GY15T4M291TW7pDmYj7PMJmwW4f9lmn5HSZ8JW35j_Gv2y0BGMW4_dl1J62L9R8W3vjNbN8zH7RSW7qrh8v7C17gSVtzczf3HRxzXW3s0Rnc35c8j7N4Qs6NwcyqcnW461PNX84Hx9hW5q-DNX4JgZ48W7JxV0734HBK9W8hmhJK1K13x1W2MrFKr2XWjF5W4kwnbt7NHvWJW8m32_n3tRnQRW6Mx_kj1MqBBzW6zggnh1G_l1_W93V00M6d4-s4W2xT8PV8QbfDlW5WQ5FT6PxjjYW4Lmpcc93HB0pV31Rg58Rkk69N85jY5H4XSV8W9kFSXQ3B3tvTW9cbrzW6Yhd71W1NvYxb5ww-2BW4Szlc-8Vb652W678Ljx1QVtzHW8MQfxN4zm0_bW3P2hTv7L6vBKW15syD61ZV57xMGN5HccF0h0VdyBr-9fcRnLW3h7dlm1H-jbrVQW8dN8Zz_ZqW7HNgl766mpm9W6Sv6w65H8nrBW7mxY8L5VxPCfW6njLd08Tq-Rjf7BFrWH04" rel="noopener" data-hs-link-id-v2="6JalBVbK" data-hs-link-id="0" data-end="805" data-start="593" target="_blank"&gt;Read the vulnerability breakdown&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="854" data-start="809"&gt;&lt;SPAN&gt;Anthropic Accidentally Leaks Claude Code&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="1198" data-start="855"&gt;&lt;SPAN&gt;Anthropic exposed roughly 500,000 lines of internal code for its AI coding agent due to a packaging mistake. While no user data was compromised, the incident raises concerns about supply-chain security and how easily sensitive AI systems can be exposed.&lt;/SPAN&gt;&lt;BR data-end="1111" data-start="1108" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3mYN24lTnlBsKn7W7vmg_F8--D2zVMGbP93VM_zGW5m3pz36hSRnFW2dXFvr6k8CjkW52My8l7zKn1-W2SGq178RCCZgW95C56M1JCpS2W1Y1k2v1J-t_RW8gFfrL1gD4X_W8mKbSs2W9tZ8W1YFqvV7kB05TW64_TKG8-0_-VW2ryzR3799V2yW1GrMYM12LDqRW5bz_xn306ScZW1krc9Z3nW4J4W6MgLmw3SmCXfW8_RQdv7hn8JjW5mX02X5CZjQCW8VK1VW1yFsh0W3gpdGP2PBS79W8K8PM51rFVgDW4Rm9Hg24-FyPW7BKWWL745SqjW94jncB8CjzlpW3R51QT6bgjNJW1wSJqp6vQFMkW3RC70q2R6rBmW5l3qFX1-HLqVW35C1Xv6HSWwsW6Ky_bT1h8Wt1Mc3v834BNtQW7VJpH688vxDlf3svmjH04" rel="noopener" data-hs-link-id-v2="oIyt3VFo" data-hs-link-id="0" data-end="1196" data-start="1114" target="_blank"&gt;Read the full report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="1261" data-start="1200"&gt;&lt;SPAN&gt;AI Model Capable of Autonomous Cyberattacks Raises Alarm&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="1609" data-start="1262"&gt;&lt;SPAN&gt;Reports suggest Anthropic has developed a model capable of independently carrying out sophisticated cyberattacks, prompting briefings with government officials. The development signals a shift toward AI systems that can act as fully autonomous offensive actors.&lt;/SPAN&gt;&lt;BR data-end="1526" data-start="1523" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3p1W4tBYQm3V5ZC_W8qWbb67l7yrhW5bsbR55SHh0hW97Fzs_29xGnXW3vYMNT9kHSdFW267trc4ZKjV0W6STTXW8JzsV5W5cb-7B2-B7lBW5j_2n53CHpnTMhP4Ys2dLwgW1t_SGf8q6cwRW3R_4S0526jNKN7T68JTWhRBtW8mbF7H5TWpqjW21t57F6TdjYNW5Ys6vJ7dQ-6tW1NNr2g7504QTW3Bt-Tf85KKBMW5VDlxR4Q_z34W8TRydw5G5lrbN3tLZJ09xTdHW6S54qd1HgxqgW96jJWH5bnGYdW89rBfK4M5HM_W8y4MgS3xfJT0W76GCBL6rNMcfW4HJWpZ3C90v5VN00PZ2j2ZX_W7-SDsN864wlrW3pVrGY3JRb-hW12zSD63t9mb9W1yjy561YK6CKW7cM3hz816zmHN8zSlmfd1Vjmf9lPR1P04" rel="noopener" data-hs-link-id-v2="QvocyzyE" data-hs-link-id="0" data-end="1607" data-start="1529" target="_blank"&gt;Read the Axios report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="1679" data-start="1611"&gt;&lt;SPAN&gt;Apple Rolls Out Emergency Protections Against DarkSword Exploit&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="2021" data-start="1680"&gt;&lt;SPAN&gt;Apple released security updates to protect devices against the DarkSword exploit kit, reportedly used by spyware vendors and state actors. The move highlights how quickly real-world threats are evolving alongside AI-assisted attack techniques.&lt;/SPAN&gt;&lt;BR data-end="1926" data-start="1923" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7WbrxP5nXHCW5BWr2F6lZ3pBW1Mfv1C66T5ytW6Bc7JL7mkGBCW1zRYt_6Sw12ZW6vrYG09cbw21VRxwjV1gvDMmW1rxBZd3q2qGwW4pGg9S582N9MW8y5Wfm18fmxsW7hVDlX8QBqlhVGZlhw4BLZlvW6V07cW4By77MW1YW7BG2L_QdHVwHwpx2L-FM4N4nWJzxdpQy4W48NNbC2ZvnZqMmCJyvtTWXJW5bfB1K8d7QjYN4q5w-3ZNky0N8xTGb-HrnpSW51742l8Pq8xpW3M2hwY1rpPvRN49JHBVRZjY3W8J5wmC308gWdW65-YZv2jMSxMW3z4nGL1YwmwtW7hM68S8_mVxKW40dZGh5J5LMRW5pwmbW2VH8DfW377Sg786XWpXW2Wcs8c4nv371W23s_9N6StS4BVfGVzq5SLK8gW66PbJw45XyJxW55sC_t5_9Q4df7P6ntv04" rel="noopener" data-hs-link-id-v2="1s7IDOK2" data-hs-link-id="0" data-end="2019" data-start="1929" target="_blank"&gt;Read the security update details&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-end="2072" data-start="2023"&gt;&lt;SPAN&gt;Open-Source AI Is Creating New Security Debt&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-end="2403" data-start="2073"&gt;&lt;SPAN&gt;Experts warn that rapid adoption of open-source AI components is introducing hidden vulnerabilities across enterprise systems. Without proper oversight, organizations risk accumulating “security debt” that becomes increasingly difficult to manage.&lt;/SPAN&gt;&lt;BR data-end="2323" data-start="2320" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VW5Ht-46vWn3W2ww8Wn16P5FjW93kvSZ5Ms0m8N7Wbrxv5nXHCW50kH_H6lZ3mMW6cc3tW8gYRNYVlkcvb26slgxW3BXqlL7z-GBdW3lpM0y27xG69N2YCXy73BtZjW3sQpmh6FbGfKW8v00pH1BvWhGW4NrkbJ2G7jlgW4m8TDk7bQm0nMHRWrf46qCnW6pjXmP4yqCj2W6xFGw_7wWX23N8B-BZP8KS7rW2tJl-G6XGmYlW1kzfD97d78qJVrL-jK7bMw5sW2BLL3V23Sy7PTPT-j5MH41RW4fNpTH5G_DtBW46VWkq5Y1nwdW90C-bb5nMjpmW5mMfmq7ymvhBW4HhhSz6DwkqKN75110NcMQzcW76yJqq5qb_7sW81V7yL1KlFklW7Xbwnk8GK18TW3skLLz7JQWwdW7hDS3y68dDCVW4FwXsY4TsJyGMN1m4P-xStWW30372R8tXdv5f77Cwzj04" rel="noopener" data-hs-link-id-v2="m6odF1ll" data-hs-link-id="0" target="_blank"&gt;Explore the analysis&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="2561" data-start="2405"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-end="2561" data-start="2405"&gt;&lt;SPAN&gt;From silent leaks to autonomous attack capabilities, the trajectory is clear: AI systems are becoming more powerful, and more exposed,&amp;nbsp;at the same time.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-end="3659" data-start="3639"&gt;&lt;SPAN&gt;See you next week!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 04 Apr 2026 08:42:13 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/274846#M78</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2026-04-04T08:42:13Z</dc:date>
    </item>
    <item>
      <title>Re: Lakera Bulletin - This Week in AI #51: The Week AI Security Cracked Open</title>
      <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/274925#M79</link>
      <description>&lt;P&gt;Nice&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2026 15:52:25 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/274925#M79</guid>
      <dc:creator>Lars_Roerll</dc:creator>
      <dc:date>2026-04-06T15:52:25Z</dc:date>
    </item>
    <item>
      <title>Re: Lakera Bulletin - This Week in AI #51: The Week AI Security Cracked Open</title>
      <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/275025#M82</link>
      <description>&lt;P&gt;Awesome!!&lt;/P&gt;</description>
      <pubDate>Tue, 07 Apr 2026 20:25:22 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-Bulletin-This-Week-in-AI-51-The-Week-AI-Security-Cracked/m-p/275025#M82</guid>
      <dc:creator>sjni01</dc:creator>
      <dc:date>2026-04-07T20:25:22Z</dc:date>
    </item>
  </channel>
</rss>

