It’s been a packed week in AI security: real-world attacks leveraging commercial frontier models, fresh vulnerabilities, new hardware pushes from China, and a wave of data showing how unprepared many organisations still are. We’re also bringing you a brand-new deep dive from Lakera on fast-emerging agentic risks.
A China-based hacking group reportedly used Anthropic’s model to run a largely autonomous cyber operation targeting corporations and government entities. The incident highlights how readily accessible generative-AI systems are becoming part of the offensive toolkit.
🔗 Read the story
A recently revealed flaw in ChatGPT may have exposed components of the service’s cloud environment, underscoring that AI risk extends far beyond the chatbot interface. The case reinforces why hardening AI infrastructure is becoming as important as model-level safeguards.
🔗 Read the story
New survey results show more than a third of organisations running AI workloads have already experienced an AI-driven security incident. This mirrors findings from Lakera’s 2025 GenAI Security Readiness Report, where 15% of companies reported a GenAI-related incident and only 4% expressed high confidence in their security posture.
🔗 Read the story
Baidu introduced two home-grown AI processors alongside a substantial update to its ERNIE foundation model, reinforcing China’s push for self-sufficient AI infrastructure. The launch signals continued acceleration in the global race for compute and model sovereignty.
🔗 Read the story
OpenAI’s open-weight models are now being tested and adopted within military and defense-contractor environments. The move reflects a broader industry shift toward locally deployable, auditable models in highly sensitive contexts.
🔗 Read the story
A new survey shows 72% of security professionals now view unmanaged, employee-driven AI usage as a key attack surface. Lakera’s own data echoes this trend, with only 49% of organisations feeling well-prepared and 39% citing internal talent gaps as a blocker to GenAI readiness.
🔗 Read the story
Lakera’s latest article, Agentic AI Threats, Part 2, highlights how over-privileged tools and uncontrolled agent browsing create new exploit pathways. With only ~14% of organisations deploying production agents with runtime guardrails, the gap between capability and safety is widening fast.
🔗 Read the article
From real attacks to emerging agentic risks, this week shows how quickly the AI security landscape is evolving, and why readiness must keep pace with capability.
See you next week!
AI Security Masters E8 - Claude Myphos: New Era in Cyber Security
Tue 19 May 2026 @ 06:00 PM (IDT)