Lakera Bulletin - This Week in AI: Bigger models, sharper tools, and growing security gaps

It’s been a big week for AI: from OpenAI’s latest leap toward autonomous systems to fresh reminders that securing these systems is getting harder, not easier. We’re also seeing how new capabilities, especially in multimodal AI and coding assistants, are introducing entirely new classes of risk.

Let’s get into it.

 

OpenAI Releases GPT-5.5

OpenAI has officially launched GPT-5.5, a new class of intelligence designed for autonomous task execution. This model features major breakthroughs in reasoning and tool use, allowing it to plan and complete complex workflows, like multi-file coding and deep research, with minimal human guidance.
🔗 Read the announcement

Unauthorized Access to Anthropic Model Raises Containment Concerns

A frontier Anthropic model was reportedly accessed by unauthorized users through a third-party environment shortly after release. The incident highlights the difficulty of securely containing powerful AI systems once they leave tightly controlled settings.
🔗 Read the full story

OpenAI Launches New Image Generation System

Alongside its latest LLM, OpenAI introduced ChatGPT Images 2.0. This major upgrade features a "Thinking Mode" for visual reasoning, allowing the system to generate complex, structured assets like UI mockups, diagrams, and consistent character sheets from a single prompt.
🔗 Explore the release

Your AI Coding Assistant Just Shipped Your API Keys

Your latest deployment might have a stowaway. We’ve discovered that AI coding assistants are unintentionally caching sensitive credentials in local hidden files, which then get swept up during public package releases. After scanning over 46,000 npm packages, we found dozens of exposed secrets, exposing a critical new vulnerability in the automated software supply chain.
🔗 Read the research
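To make the failure mode concrete: a hidden cache file like `.env` or an assistant's local state file sits inside the package directory, and `npm publish` sweeps it into the tarball. A minimal sketch of a pre-publish check is below; the regex rules here are illustrative placeholders, not the rule set used in our research (real scanners ship with far larger pattern libraries).

```python
import re
from pathlib import Path

# Illustrative patterns only; production scanners use much larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Walk a package directory (hidden dotfiles included -- pathlib's
    glob matches them, unlike a shell glob) and flag likely secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings
```

Running a check like this in CI, before `npm publish`, catches exactly the stowaway files the research describes; an allowlist via the `files` field in `package.json` is the complementary fix.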

From Access Control to Outcome Control for AI Agents

In the world of agentic AI, knowing who is logged in isn't enough; you need to know what they are doing in real time. We’re moving beyond static permissions toward "Outcome Control." Together with Check Point and Google Cloud, we are pioneering a new security layer that enforces safe behavior across complex agent workflows, ensuring that autonomy never comes at the cost of integrity.
🔗 Learn more

 

 

The Enterprise Playbook for Agentic AI Security

AI systems now retrieve data, invoke tools, and act across enterprise workflows.
Get the playbook to learn how to secure AI across employees, applications, and agents.

👉 Explore the Playbook

 

From more capable models to more subtle vulnerabilities, this week shows how quickly the AI landscape is evolving, and how security needs to evolve just as fast.

See you next week!
