Admin

Lakera bulletin - This week in AI - Nov 17, 2025

It’s been a packed week in AI security: from real-world attacks leveraging commercial frontier models, to fresh vulnerabilities, to new hardware pushes from China, to a wave of new data showing how unprepared many organisations still are. We’re also bringing you a brand-new deep dive from Lakera on fast-emerging agentic risks.

China-Linked Threat Actors Leverage Anthropic AI in Major Cyberattack

A China-based hacking group reportedly used Anthropic’s model to run a largely autonomous cyber operation targeting corporations and government entities. The incident highlights how readily accessible generative-AI systems are becoming part of the offensive toolkit.
🔗 Read the story

Newly Disclosed ChatGPT Vulnerability Exposed Underlying Cloud Infrastructure

A recently revealed flaw in ChatGPT may have exposed components of the service’s cloud environment, underscoring that AI system risk extends far beyond chatbot interfaces. The case reinforces why AI infrastructure hardening is becoming as important as model-level safeguards.
🔗 Read the story

Survey: 34% of Companies With AI Workloads Have Already Faced an AI-Related Breach

New survey results show more than a third of organisations running AI workloads have already experienced an AI-driven security incident. This mirrors findings from Lakera’s 2025 GenAI Security Readiness Report, where 15% of companies reported a GenAI-related incident and only 4% expressed high confidence in their security posture.
🔗 Read the story

Baidu Unveils New AI Chips and Major ERNIE Model Upgrade

Baidu introduced two home-grown AI processors alongside a substantial update to its ERNIE foundation model, reinforcing China’s push for self-sufficient AI infrastructure. The launch signals continued acceleration in the global race for compute and model sovereignty.
🔗 Read the story

US Military Begins Adopting OpenAI’s Open-Weight Models

OpenAI’s open-weight models are now being tested and adopted within military and defense-contractor environments. The move reflects a broader industry shift toward locally deployable, auditable models in highly sensitive contexts.
🔗 Read the story

“Shadow AI” Emerges as a Top Concern Among Security Teams

A new survey shows 72% of security professionals now view unmanaged, employee-driven AI usage as a key attack surface. Lakera’s own data echoes this trend, with only 49% of organisations feeling well-prepared and 39% citing internal talent gaps as a blocker to GenAI readiness.
🔗 Read the story

ICYMI: Agentic AI Threats Are Accelerating

Lakera’s latest article, Agentic AI Threats, Part 2, highlights how over-privileged tools and uncontrolled agent browsing create new exploit pathways. With only ~14% of organisations deploying production agents with runtime guardrails, the gap between capability and safety is widening fast.
🔗 Read the article
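To make the "runtime guardrails" idea concrete, here is a loose illustrative sketch of a tool-permission check for an agent. This is not Lakera's implementation or any vendor's API; the names (`ALLOWED_TOOLS`, `check_call`) and the policy rules are hypothetical, showing only the general pattern of denying over-privileged tool calls and restricted browsing destinations at runtime.

```python
# Hypothetical runtime guardrail for agent tool calls (illustrative only).

ALLOWED_TOOLS = {"search_docs", "read_file"}       # least-privilege allowlist
BLOCKED_DOMAINS = {"internal-admin.example.com"}   # no uncontrolled browsing here

def check_call(tool: str, args: dict) -> bool:
    """Return True only if the agent's proposed tool call passes the policy."""
    if tool not in ALLOWED_TOOLS:
        return False            # tool not granted to this agent: deny
    url = args.get("url", "")
    if any(domain in url for domain in BLOCKED_DOMAINS):
        return False            # browsing a restricted destination: deny
    return True

# Denied calls are dropped before execution rather than passed to the tool:
print(check_call("delete_record", {}))                # False: not on the allowlist
print(check_call("read_file", {"path": "notes.md"}))  # True: permitted call
```

The point of the sketch is placement: the check runs between the model's decision and the tool's execution, which is what distinguishes a runtime guardrail from model-level safeguards alone.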


From real attacks to emerging agentic risks, this week shows how quickly the AI security landscape is evolving, and why readiness must keep pace with capability.

See you next week!

  • AI
1 Reply
the_rock
MVP Diamond

Always love reading these.

Best,
Andy
"Have a great day and if it's not, change it"
