<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Lakera bulletin - This Week in AI #38 - AI Agents Security</title>
    <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-bulletin-This-Week-in-AI-38/m-p/265310#M37</link>
    <description>&lt;DIV dir="ltr"&gt;
&lt;DIV lang="en"&gt;
&lt;P data-start="281" data-end="657"&gt;It’s been a busy week in AI, with new models shipping fast and security questions following close behind. We saw OpenAI raise the alarm on cyber risk at the frontier, fresh vulnerabilities surface in everyday developer tools, and new guidance emerge for securing agentic systems. At the same time, governments are still debating how much oversight makes sense as capabilities continue to scale.&lt;/P&gt;
&lt;P data-start="659" data-end="677"&gt;Let’s get into it.&lt;/P&gt;
&lt;H2 data-start="659" data-end="677"&gt;&lt;SPAN&gt;OpenAI Warns New Models Pose “High” Cybersecurity Risk&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="659" data-end="677"&gt;&lt;SPAN&gt;OpenAI said its upcoming frontier models may significantly increase cybersecurity capabilities, including the ability to identify and exploit software vulnerabilities. The warning reflects growing concern about how quickly offensive capabilities may scale alongside model performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV dir="ltr"&gt;
&lt;DIV lang="en"&gt;
&lt;DIV dir="ltr"&gt;
&lt;DIV class="hse-body-background" lang="en"&gt;
&lt;P data-start="739" data-end="1124"&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy_43qn9qW95jsWP6lZ3p1W61V_Dq938YJXW4V8MJC8P-qwDW43y1fz4P8ZW6W931vxX1TQXgjVlw4yR35T6wCW2VnpdC8Nt7vSW3kH6jm6kdNf4W7jvWHs7JS7V8W7MW92d194Z5tW70FfxH4cGtHFW4tQSZY1SH69JW71-pkw472lkhW7GqQJl1GKZWQW2kK3Sz1HjhrhVG_FhT8v8y0FVdCDR_6QWGDCW2sRtBF3rD4jYW3Vqwc82B4KpwMHrc42pb0bkW3svK8S2RSZywW6XBTR01Xn3BsN9kRJd8CP8G_N2xHtjgG_-5bW1_zdn922xjPjW6XrHXp1z-Cs9W584Ww449tG5xW8xCLRd8GxxXgW86cClD7bFDDpW3-R1DN2jSZClVB9mMm5F6J04f7lP19604" target="_blank" rel="noopener" data-start="1008" data-end="1124" data-hs-link-id="0" data-hs-link-id-v2="SLMLvfuY"&gt;Read the report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="1126" data-end="1180"&gt;&lt;SPAN&gt;OpenAI Releases GPT-5.2 After Internal “Code Red”&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="1181" data-end="1572"&gt;&lt;SPAN&gt;OpenAI released GPT-5.2, its most capable model yet for coding, reasoning, and multimodal work, after speeding up development in response to Google’s Gemini 3. The launch shows how competitive pressure is shaping both release timelines and risk decisions.&lt;/SPAN&gt;&lt;BR data-start="1436" data-end="1439" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNyZC5nXHCW69t95C6lZ3mkW8Ydkz94nxvyCW8lfYqj6ncDJ3W1CNwGG1R5gz4W6Q3wlx2P9gR6W1wqZzF1pcPgVW70VKFb5wzlQzW12PsbM8bZdDzW4f5TYh5sq8zyW1MKPsV5pfpZQW93KkCX7Hlw8cW6QgSRX8tcP4SW30K9qj49cdtpW6BjkYG2Vn1YZW4zRVJz34wHGWW6bJclW2G2xxTW2LvxmL6VP0krN80Jl7Ht3xxZW5qfjP-4twlgJW8tV_xl6NzPBtW4djqg57lJYvYW1f2jTW81T9ymW29JVsM3jCkrqW7VqHsv7chDtJW2qFWbZ5fWFg8W8_CZ5M55zwcgW8P8mJN4yt1cPW7gvf6W4xRw30Mw1jckG9SHxW38NnBG2yCVy8W2hr2-m4PqXwyW6gm0bv3xCYntW6TKgdB4wN0SdW2QG6k28sZsSgW30Swqd1MxkthMh-gk9qRqyFN5DNZG6XSg2pf3kW8sC04" target="_blank" rel="noopener" data-start="1442" data-end="1572" data-hs-link-id="0" data-hs-link-id-v2="ZhTc4XEO"&gt;Read the announcement&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="1574" data-end="1643"&gt;&lt;SPAN&gt;Critical Flaws Found in AI-Powered Developer Tools (“IDEsaster”)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="1644" data-end="2094"&gt;&lt;SPAN&gt;Researchers disclosed dozens of serious vulnerabilities across popular AI-assisted IDEs, enabling data theft and remote code execution through poisoned prompts and extensions. As AI tools become standard in development workflows, these findings highlight a growing attack surface.&lt;/SPAN&gt;&lt;BR data-start="1924" data-end="1927" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy-v5nXHCW7Y9pgv6lZ3lxTB2bt3fDYXyW7pP7QH1ZsCDfN3-TRL34gYfsW9hsnNR5HQZWmN4RvHXq615Q4N8klybJKP5cNW8-fPwV2WXts5W170QgT1vWBQxW42FkQ764R4NwW8rYRlC29vVtmW2GhHZ02mD2SZW4wG6NW1yCd2vW2G7-SP3K87mCW5d3R6p4PJnd7N6lTFKkzL7KwW6k1_Yx2ytN_WVHSdsK2bFVxtVrGVGT3KJsDSW452kcw1_4qhpW88bCT-8qXn2WW2ByXZr4hdrnRW29vg7F2dw5l2W59tg2x3bzzJKV668dS1sm3c5W7mXV-96wQH1nW2vR6F45_Sqx3W62qF-Z4FRMrlN8jpjWcl9J0jW1Qh5Ll1hrmHYW32qW5h6Fz16pW2v_-Mq2YJQkrVpSbKZ6X-7qwW7R84xF6ywlSLW9khsbg62YMwYW3Y2m_57Gk_fsW5NdJTv88j73zW9dpVsx1BGsFMW66fqK55QLWZvN8Kk64pVHf8-VDmr_Z8HVhjsVtbf1b4vkqWVW4pTS7v7TMhKYd7w9GF04" target="_blank" rel="noopener" data-start="1930" data-end="2094" data-hs-link-id="0" data-hs-link-id-v2="6BwyS9yH"&gt;Read the disclosure&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-start="1644" data-end="2094"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-start="2096" data-end="2152"&gt;&lt;SPAN&gt;OWASP Releases Top 10 Risks for Agentic AI Security&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="2153" data-end="2562"&gt;&lt;SPAN&gt;The OWASP GenAI Security Project published its first Top 10 list focused on agentic AI, covering risks such as agent hijacking, unsafe tool use, and excessive autonomy. It offers practical guidance for teams building or deploying autonomous systems today.&lt;/SPAN&gt;&lt;BR data-start="2408" data-end="2411" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy-b5nXHCW7lCGcx6lZ3p3W2ZDB1P5DzcDXW2V-3Zc37kGn9W83Rj9W3d5LM6W6j4JGs8QSHtxW2xMn997HhPxYMLXQH4grZC1V9zkS98QF0yMW6vYpgY7B64xcW4s2tjM5Y07RRW86zdMc7HFKl6W2s7Ncr5XQV4dW6VGxJp5n0Jq-W3l6ZCj1ZKNLGW8HP9y41BPfvHW6tT90p2Yl7tLW2Pr2Ks5RLdLKW15RwBc7X_cTrW6w9WjR2HjXXyW2T2TX76J9xF5W11wpPN3fJfh2W3QyQ2N8mfb2_W6_X0QZ6kkCY3VQHvtS3zZRCjW3pN5vJ1Jg0PpW6477m155Qgq9W2wGmLS16Y8hMW8Pth1T5r6PWvV9GvMR7c4__SW4f9F3t5220c7W5BmfHX7KqvS7W6W8M2g1qq6qCVH9fsL9gdkgGW6dyKBC7TWFM-VDshLJ5yRq-cW1T0sz1899rrhW9gnDLF8dBdk6VdDmf095wd0zVVsdKV89Ng5dW6J9v3V3ftjlTW7fQsm84y55pKf7fvzXl04" target="_blank" rel="noopener" data-start="2414" data-end="2562" data-hs-link-id="0" data-hs-link-id-v2="nmrLewJI"&gt;Explore the Top 10&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="2564" data-end="2625"&gt;&lt;SPAN&gt;Trump Signs Executive Order Blocking State AI Regulation&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="2626" data-end="2972"&gt;&lt;SPAN&gt;A new U.S. executive order prevents states from enforcing their own AI regulations, shifting authority to the federal level. Supporters argue it reduces fragmentation, while critics worry it limits meaningful safety oversight.&lt;/SPAN&gt;&lt;BR data-start="2852" data-end="2855" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNyZC5nXHCW69t95C6lZ3pRW1_22NZ371sZ2W9cKWj66plLWZW7HnPlh38r6m1W1mwVFV6cbgw4W4d2p1D3rDW4sVHc2SN8pDFnhW4dml6514cl7TW7mDbfn4lvD7RW7czWn_95fr2hW6-sxbH20z284W1Zhn5R4jsx5BW1hvVRz5Z0_mvW6L4Jpn6MVJxzN28XZQnXVH0HW7dfsZF63sZRBW2qLy_V8yqBvlVXRBv6643qq-W8wFYZj70c-pkW2tNXW_2J-7znVlk-KT8N9YHVN39nzlR55vLqW7b_Cdr2-vXkWW6k-L2D65TC1KW2q-Tnr4z07r_W3G7dyF5WTk33W6__dn337KPzHW1byy-723p-n7W8VYfW41c0H9pW20dLtt4q-MCfVhVs2189gGDHW4Y8zNg1GB5QJW4v1LxF6yJpk4N1zKmxmT1PY0W5STLYg55c_p8W4zJN9c7MTL06W28Nf118S6k94f9jRms404" target="_blank" rel="noopener" data-start="2858" data-end="2972" data-hs-link-id="0" data-hs-link-id-v2="DBgI5xJy"&gt;Read the coverage&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="2974" data-end="3032"&gt;&lt;SPAN&gt;EU Opens Antitrust Probe Into Google’s AI Content Use&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="3033" data-end="3352"&gt;&lt;SPAN&gt;European regulators opened an antitrust investigation into whether Google unfairly uses publisher content to train and operate its AI systems. The case could influence how AI training data is sourced and compensated across the industry.&lt;/SPAN&gt;&lt;BR data-start="3269" data-end="3272" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy_43qn9qW95jsWP6lZ3kqW8mXSbq7CPWbfW4DfP4_7k7FY4W8JF7vp5xJjkBW3HYJ3R2XXNG0W1wGzpr5MQC69W3cfWpD27kxsHVwXw_44snHSjW4wzBPv4X-Ry_W515mlx1tX_yCW2t46hx1G1-C2W4mJdn82n__GLW8ggrlM8hPbS3W68WGmL3snTkpW81LDpZ4TZZ0pW5SM3RZ28NkXJW3RmsfM76s70CN4w9LNDNpkjmW35q2g12HBM1rVWkdg45MfnYhW44Yvh58Gvr6NW4cWdTV6bMrZdN8cGhYlsQD6ZW36jbNt4qJS3FW954Fly1-QzfQW77Sq-67fMD9YVS3_jl2qqQnKW1DmdVP8wZb9LW95_l1p4XFbDrN4QW-9Yf_mKQW2Pd07P5PLVHpdvGt6j04" target="_blank" rel="noopener" data-start="3275" data-end="3352" data-hs-link-id="0" data-hs-link-id-v2="xAox5Bkt"&gt;Read the story&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-start="3033" data-end="3352"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-start="3033" data-end="3352"&gt;&lt;SPAN&gt;From frontier model warnings to concrete security failures and emerging agentic risks, this week shows how tightly AI progress and real-world exposure are now linked. The pressure to move fast is only increasing, and so is the cost of getting security wrong.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
    <pubDate>Mon, 15 Dec 2025 14:32:34 GMT</pubDate>
    <dc:creator>_Val_</dc:creator>
    <dc:date>2025-12-15T14:32:34Z</dc:date>
    <item>
      <title>Lakera bulletin - This Week in AI #38</title>
      <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-bulletin-This-Week-in-AI-38/m-p/265310#M37</link>
      <description>&lt;DIV dir="ltr"&gt;
&lt;DIV lang="en"&gt;
&lt;P data-start="281" data-end="657"&gt;It’s been a busy week in AI, with new models shipping fast and security questions following close behind. We saw OpenAI raise the alarm on cyber risk at the frontier, fresh vulnerabilities surface in everyday developer tools, and new guidance emerge for securing agentic systems. At the same time, governments are still debating how much oversight makes sense as capabilities continue to scale.&lt;/P&gt;
&lt;P data-start="659" data-end="677"&gt;Let’s get into it.&lt;/P&gt;
&lt;H2 data-start="659" data-end="677"&gt;&lt;SPAN&gt;OpenAI Warns New Models Pose “High” Cybersecurity Risk&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="659" data-end="677"&gt;&lt;SPAN&gt;OpenAI said its upcoming frontier models may significantly increase cybersecurity capabilities, including the ability to identify and exploit software vulnerabilities. The warning reflects growing concern about how quickly offensive capabilities may scale alongside model performance.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV dir="ltr"&gt;
&lt;DIV lang="en"&gt;
&lt;DIV dir="ltr"&gt;
&lt;DIV class="hse-body-background" lang="en"&gt;
&lt;P data-start="739" data-end="1124"&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy_43qn9qW95jsWP6lZ3p1W61V_Dq938YJXW4V8MJC8P-qwDW43y1fz4P8ZW6W931vxX1TQXgjVlw4yR35T6wCW2VnpdC8Nt7vSW3kH6jm6kdNf4W7jvWHs7JS7V8W7MW92d194Z5tW70FfxH4cGtHFW4tQSZY1SH69JW71-pkw472lkhW7GqQJl1GKZWQW2kK3Sz1HjhrhVG_FhT8v8y0FVdCDR_6QWGDCW2sRtBF3rD4jYW3Vqwc82B4KpwMHrc42pb0bkW3svK8S2RSZywW6XBTR01Xn3BsN9kRJd8CP8G_N2xHtjgG_-5bW1_zdn922xjPjW6XrHXp1z-Cs9W584Ww449tG5xW8xCLRd8GxxXgW86cClD7bFDDpW3-R1DN2jSZClVB9mMm5F6J04f7lP19604" target="_blank" rel="noopener" data-start="1008" data-end="1124" data-hs-link-id="0" data-hs-link-id-v2="SLMLvfuY"&gt;Read the report&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="1126" data-end="1180"&gt;&lt;SPAN&gt;OpenAI Releases GPT-5.2 After Internal “Code Red”&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="1181" data-end="1572"&gt;&lt;SPAN&gt;OpenAI released GPT-5.2, its most capable model yet for coding, reasoning, and multimodal work, after speeding up development in response to Google’s Gemini 3. The launch shows how competitive pressure is shaping both release timelines and risk decisions.&lt;/SPAN&gt;&lt;BR data-start="1436" data-end="1439" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNyZC5nXHCW69t95C6lZ3mkW8Ydkz94nxvyCW8lfYqj6ncDJ3W1CNwGG1R5gz4W6Q3wlx2P9gR6W1wqZzF1pcPgVW70VKFb5wzlQzW12PsbM8bZdDzW4f5TYh5sq8zyW1MKPsV5pfpZQW93KkCX7Hlw8cW6QgSRX8tcP4SW30K9qj49cdtpW6BjkYG2Vn1YZW4zRVJz34wHGWW6bJclW2G2xxTW2LvxmL6VP0krN80Jl7Ht3xxZW5qfjP-4twlgJW8tV_xl6NzPBtW4djqg57lJYvYW1f2jTW81T9ymW29JVsM3jCkrqW7VqHsv7chDtJW2qFWbZ5fWFg8W8_CZ5M55zwcgW8P8mJN4yt1cPW7gvf6W4xRw30Mw1jckG9SHxW38NnBG2yCVy8W2hr2-m4PqXwyW6gm0bv3xCYntW6TKgdB4wN0SdW2QG6k28sZsSgW30Swqd1MxkthMh-gk9qRqyFN5DNZG6XSg2pf3kW8sC04" target="_blank" rel="noopener" data-start="1442" data-end="1572" data-hs-link-id="0" data-hs-link-id-v2="ZhTc4XEO"&gt;Read the announcement&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="1574" data-end="1643"&gt;&lt;SPAN&gt;Critical Flaws Found in AI-Powered Developer Tools (“IDEsaster”)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="1644" data-end="2094"&gt;&lt;SPAN&gt;Researchers disclosed dozens of serious vulnerabilities across popular AI-assisted IDEs, enabling data theft and remote code execution through poisoned prompts and extensions. As AI tools become standard in development workflows, these findings highlight a growing attack surface.&lt;/SPAN&gt;&lt;BR data-start="1924" data-end="1927" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy-v5nXHCW7Y9pgv6lZ3lxTB2bt3fDYXyW7pP7QH1ZsCDfN3-TRL34gYfsW9hsnNR5HQZWmN4RvHXq615Q4N8klybJKP5cNW8-fPwV2WXts5W170QgT1vWBQxW42FkQ764R4NwW8rYRlC29vVtmW2GhHZ02mD2SZW4wG6NW1yCd2vW2G7-SP3K87mCW5d3R6p4PJnd7N6lTFKkzL7KwW6k1_Yx2ytN_WVHSdsK2bFVxtVrGVGT3KJsDSW452kcw1_4qhpW88bCT-8qXn2WW2ByXZr4hdrnRW29vg7F2dw5l2W59tg2x3bzzJKV668dS1sm3c5W7mXV-96wQH1nW2vR6F45_Sqx3W62qF-Z4FRMrlN8jpjWcl9J0jW1Qh5Ll1hrmHYW32qW5h6Fz16pW2v_-Mq2YJQkrVpSbKZ6X-7qwW7R84xF6ywlSLW9khsbg62YMwYW3Y2m_57Gk_fsW5NdJTv88j73zW9dpVsx1BGsFMW66fqK55QLWZvN8Kk64pVHf8-VDmr_Z8HVhjsVtbf1b4vkqWVW4pTS7v7TMhKYd7w9GF04" target="_blank" rel="noopener" data-start="1930" data-end="2094" data-hs-link-id="0" data-hs-link-id-v2="6BwyS9yH"&gt;Read the disclosure&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-start="1644" data-end="2094"&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2 data-start="2096" data-end="2152"&gt;&lt;SPAN&gt;OWASP Releases Top 10 Risks for Agentic AI Security&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="2153" data-end="2562"&gt;&lt;SPAN&gt;The OWASP GenAI Security Project published its first Top 10 list focused on agentic AI, covering risks such as agent hijacking, unsafe tool use, and excessive autonomy. It offers practical guidance for teams building or deploying autonomous systems today.&lt;/SPAN&gt;&lt;BR data-start="2408" data-end="2411" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy-b5nXHCW7lCGcx6lZ3p3W2ZDB1P5DzcDXW2V-3Zc37kGn9W83Rj9W3d5LM6W6j4JGs8QSHtxW2xMn997HhPxYMLXQH4grZC1V9zkS98QF0yMW6vYpgY7B64xcW4s2tjM5Y07RRW86zdMc7HFKl6W2s7Ncr5XQV4dW6VGxJp5n0Jq-W3l6ZCj1ZKNLGW8HP9y41BPfvHW6tT90p2Yl7tLW2Pr2Ks5RLdLKW15RwBc7X_cTrW6w9WjR2HjXXyW2T2TX76J9xF5W11wpPN3fJfh2W3QyQ2N8mfb2_W6_X0QZ6kkCY3VQHvtS3zZRCjW3pN5vJ1Jg0PpW6477m155Qgq9W2wGmLS16Y8hMW8Pth1T5r6PWvV9GvMR7c4__SW4f9F3t5220c7W5BmfHX7KqvS7W6W8M2g1qq6qCVH9fsL9gdkgGW6dyKBC7TWFM-VDshLJ5yRq-cW1T0sz1899rrhW9gnDLF8dBdk6VdDmf095wd0zVVsdKV89Ng5dW6J9v3V3ftjlTW7fQsm84y55pKf7fvzXl04" target="_blank" rel="noopener" data-start="2414" data-end="2562" data-hs-link-id="0" data-hs-link-id-v2="nmrLewJI"&gt;Explore the Top 10&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="2564" data-end="2625"&gt;&lt;SPAN&gt;Trump Signs Executive Order Blocking State AI Regulation&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="2626" data-end="2972"&gt;&lt;SPAN&gt;A new U.S. executive order prevents states from enforcing their own AI regulations, shifting authority to the federal level. Supporters argue it reduces fragmentation, while critics worry it limits meaningful safety oversight.&lt;/SPAN&gt;&lt;BR data-start="2852" data-end="2855" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNyZC5nXHCW69t95C6lZ3pRW1_22NZ371sZ2W9cKWj66plLWZW7HnPlh38r6m1W1mwVFV6cbgw4W4d2p1D3rDW4sVHc2SN8pDFnhW4dml6514cl7TW7mDbfn4lvD7RW7czWn_95fr2hW6-sxbH20z284W1Zhn5R4jsx5BW1hvVRz5Z0_mvW6L4Jpn6MVJxzN28XZQnXVH0HW7dfsZF63sZRBW2qLy_V8yqBvlVXRBv6643qq-W8wFYZj70c-pkW2tNXW_2J-7znVlk-KT8N9YHVN39nzlR55vLqW7b_Cdr2-vXkWW6k-L2D65TC1KW2q-Tnr4z07r_W3G7dyF5WTk33W6__dn337KPzHW1byy-723p-n7W8VYfW41c0H9pW20dLtt4q-MCfVhVs2189gGDHW4Y8zNg1GB5QJW4v1LxF6yJpk4N1zKmxmT1PY0W5STLYg55c_p8W4zJN9c7MTL06W28Nf118S6k94f9jRms404" target="_blank" rel="noopener" data-start="2858" data-end="2972" data-hs-link-id="0" data-hs-link-id-v2="DBgI5xJy"&gt;Read the coverage&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 data-start="2974" data-end="3032"&gt;&lt;SPAN&gt;EU Opens Antitrust Probe Into Google’s AI Content Use&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P data-start="3033" data-end="3352"&gt;&lt;SPAN&gt;European regulators opened an antitrust investigation into whether Google unfairly uses publisher content to train and operate its AI systems. The case could influence how AI training data is sourced and compensated across the industry.&lt;/SPAN&gt;&lt;BR data-start="3269" data-end="3272" /&gt;&lt;SPAN&gt;&lt;span class="lia-unicode-emoji" title=":link:"&gt;🔗&lt;/span&gt; &lt;A href="https://d31-0l04.eu1.hubspotlinks.com/Ctc/L0+113/d31-0L04/VVVxJw8jZ6sjW1Kc0sC2gN9qmW5QKV2t5G_SWxN1HNy_43qn9qW95jsWP6lZ3kqW8mXSbq7CPWbfW4DfP4_7k7FY4W8JF7vp5xJjkBW3HYJ3R2XXNG0W1wGzpr5MQC69W3cfWpD27kxsHVwXw_44snHSjW4wzBPv4X-Ry_W515mlx1tX_yCW2t46hx1G1-C2W4mJdn82n__GLW8ggrlM8hPbS3W68WGmL3snTkpW81LDpZ4TZZ0pW5SM3RZ28NkXJW3RmsfM76s70CN4w9LNDNpkjmW35q2g12HBM1rVWkdg45MfnYhW44Yvh58Gvr6NW4cWdTV6bMrZdN8cGhYlsQD6ZW36jbNt4qJS3FW954Fly1-QzfQW77Sq-67fMD9YVS3_jl2qqQnKW1DmdVP8wZb9LW95_l1p4XFbDrN4QW-9Yf_mKQW2Pd07P5PLVHpdvGt6j04" target="_blank" rel="noopener" data-start="3275" data-end="3352" data-hs-link-id="0" data-hs-link-id-v2="xAox5Bkt"&gt;Read the story&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P data-start="3033" data-end="3352"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P data-start="3033" data-end="3352"&gt;&lt;SPAN&gt;From frontier model warnings to concrete security failures and emerging agentic risks, this week shows how tightly AI progress and real-world exposure are now linked. The pressure to move fast is only increasing, and so is the cost of getting security wrong.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Mon, 15 Dec 2025 14:32:34 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-bulletin-This-Week-in-AI-38/m-p/265310#M37</guid>
      <dc:creator>_Val_</dc:creator>
      <dc:date>2025-12-15T14:32:34Z</dc:date>
    </item>
    <item>
      <title>Re: Lakera bulletin - This Week in AI #38</title>
      <link>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-bulletin-This-Week-in-AI-38/m-p/265312#M38</link>
      <description>&lt;P&gt;Another great post from Lakera.&lt;/P&gt;</description>
      <pubDate>Mon, 15 Dec 2025 14:45:11 GMT</pubDate>
      <guid>https://community.checkpoint.com/t5/AI-Agents-Security/Lakera-bulletin-This-Week-in-AI-38/m-p/265312#M38</guid>
      <dc:creator>the_rock</dc:creator>
      <dc:date>2025-12-15T14:45:11Z</dc:date>
    </item>
  </channel>
</rss>

