_Val_
Admin

Lakera bulletin - This Week in AI #38

It’s been a busy week in AI, with new models shipping fast and security questions following close behind. We saw OpenAI raise the alarm on cyber risk at the frontier, fresh vulnerabilities surface in everyday developer tools, and new guidance emerge for securing agentic systems. At the same time, governments are still debating how much oversight makes sense as capabilities continue to scale.

Let’s get into it.

OpenAI Warns New Models Pose “High” Cybersecurity Risk

OpenAI said its upcoming frontier models may significantly increase offensive cyber capabilities, including the ability to identify and exploit software vulnerabilities. The warning reflects growing concern about how quickly such capabilities may scale alongside general model performance.

🔗 Read the report

OpenAI Releases GPT-5.2 After Internal “Code Red”

OpenAI released GPT-5.2, its most capable model yet for coding, reasoning, and multimodal work, after speeding up development in response to Google’s Gemini 3. The launch shows how competitive pressure is shaping both release timelines and risk decisions.
🔗 Read the announcement

Critical Flaws Found in AI-Powered Developer Tools (“IDEsaster”)

Researchers disclosed dozens of serious vulnerabilities across popular AI-assisted IDEs, enabling data theft and remote code execution through poisoned prompts and extensions. As AI tools become standard in development workflows, these findings highlight a growing attack surface.
🔗 Read the disclosure
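To make the attack surface concrete: anything an AI-assisted IDE reads (READMEs, rules files, extension manifests) is effectively model input. Below is a minimal, illustrative sketch of screening repository files for injection-style content before they reach an assistant's context. The patterns and the scan_repo/scan_for_injection helpers are assumptions for illustration only, not the researchers' methodology and not a reliable detector.

```python
import re
from pathlib import Path

# Illustrative patterns only -- real prompt-injection payloads are far
# more varied and are not reliably caught by keyword matching.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"(curl|wget)\s+https?://", re.I),              # exfiltration/RCE staging
    re.compile(r"<!--.*?(system|prompt).*?-->", re.I | re.S),  # instructions hidden in comments
]

def scan_for_injection(text: str) -> list[str]:
    """Return the patterns that matched, as evidence for a human reviewer."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]

def scan_repo(root: str, suffixes=(".md", ".txt", ".json")) -> dict[str, list[str]]:
    """Flag files that an AI-assisted IDE might silently pull into context."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            hits = scan_for_injection(path.read_text(errors="ignore"))
            if hits:
                findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, hits in scan_repo(".").items():
        print(f"{file}: {hits}")
```

Keyword matching like this is easy to bypass; the takeaway is the trust boundary, not the regexes. Content pulled into an assistant's context deserves the same scrutiny as any other untrusted input.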


OWASP Releases Top 10 Risks for Agentic AI Security

The OWASP GenAI Security Project published its first Top 10 list focused on agentic AI, covering risks such as agent hijacking, unsafe tool use, and excessive autonomy. It offers practical guidance for teams building or deploying autonomous systems today.
🔗 Explore the Top 10
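As a concrete angle on the "unsafe tool use" and "excessive autonomy" entries, here is a hedged sketch of a deny-by-default tool gateway that an agent framework could place between the model and its tools. The ToolGateway class, its policy, and the tool names are hypothetical, not taken from the OWASP list or any specific framework.

```python
from typing import Callable

class ToolGateway:
    """Mediates every tool call an agent makes: allowlist plus human
    approval for destructive actions. Names and policy are illustrative."""

    def __init__(self, approver: Callable[[str, dict], bool]):
        self._tools: dict[str, Callable] = {}
        self._needs_approval: set[str] = set()
        self._approver = approver  # e.g., prompts a human operator

    def register(self, name: str, fn: Callable, needs_approval: bool = False):
        self._tools[name] = fn
        if needs_approval:
            self._needs_approval.add(name)

    def call(self, name: str, **kwargs):
        if name not in self._tools:  # deny-by-default: unknown tools are rejected
            raise PermissionError(f"tool {name!r} is not on the allowlist")
        if name in self._needs_approval and not self._approver(name, kwargs):
            raise PermissionError(f"human approver declined {name!r}({kwargs})")
        return self._tools[name](**kwargs)

# Usage: reads are unrestricted, deletes require human sign-off.
gateway = ToolGateway(
    approver=lambda name, args: input(f"allow {name}{args}? [y/N] ").strip().lower() == "y"
)
gateway.register("read_file", lambda path: open(path).read())
gateway.register("delete_file", lambda path: print(f"(would delete {path})"), needs_approval=True)
# gateway.call("delete_file", path="notes.txt")  # routes through the approval prompt
```

The design choice worth noting is deny-by-default: the agent can only reach tools that were explicitly registered, and destructive ones route through a human approver rather than executing autonomously.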

Trump Signs Executive Order Blocking State AI Regulation

A new U.S. executive order prevents states from enforcing their own AI regulations, shifting authority to the federal level. Supporters argue it reduces fragmentation, while critics worry it limits meaningful safety oversight.
🔗 Read the coverage

EU Opens Antitrust Probe Into Google’s AI Content Use

European regulators opened an antitrust investigation into whether Google unfairly uses publisher content to train and operate its AI systems. The case could influence how AI training data is sourced and compensated across the industry.
🔗 Read the story


From frontier model warnings to concrete security failures and emerging agentic risks, this week shows how tightly AI progress and real-world exposure are now linked. The pressure to move fast is only increasing, and so is the cost of getting security wrong.

1 Reply
the_rock
MVP Platinum

Another great post from Lakera.

Best,
Andy
