- Products
- Learn
- Local User Groups
- Partners
- More
AI Security Solutions
from Check Point
It’s been a busy week in AI security, with new vulnerabilities emerging in agentic tooling, fresh research showing how stylistic prompts can bypass safety filters, and ongoing discussions about supply-chain risks and open-source guardrails. Add in a major enterprise model update and a couple of upcoming events (including one from the Lakera team), and there’s plenty to cover.
Let’s jump right into it.
A malicious prompt hidden inside a reference guide can trick Antigravity into running terminal commands, reading .env files, and exfiltrating credentials through its browser subagent. Default settings make the attack easy to miss, highlighting real risks in agentic IDE workflows.
🔗 Read the Antigravity exploit write-up
A new paper finds that turning harmful prompts into verse dramatically improves jailbreak success across 25 frontier models: up to 18× more effective than prose. It exposes a systemic gap in current alignment methods: stylistic shifts alone can dismantle safety filters.
🔗 Read the adversarial poetry paper
Published earlier this month, the Whisper Leak paper demonstrates how attackers can infer sensitive prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns. Tested across 28 major models, the attack achieves near-perfect classification, even identifying topics like “money laundering” with 100% precision, showing how metadata alone can compromise privacy under network surveillance.
🔗 Read the Whisper Leak paper
Security analysts warn that open-source fragility and increasingly automated attack tooling will shape the 2026 threat landscape. With AI driving both exploitation and defense, supply-chain security continues to grow in importance.
🔗 Read the 2026 outlook
RuleHub introduces an open-source, “policy-as-code” framework aimed at helping teams define and enforce safety and governance rules across ML and LLM workflows. It’s an interesting entry in the growing ecosystem of community-driven guardrail tooling, especially for teams experimenting with lightweight or DIY approaches.
🔗 Explore RuleHub
Anthropic’s latest flagship boosts reasoning, code generation, and long-running agent workflows, aiming to serve as a full-stack enterprise assistant. Another step in the growing race for frontier-grade business models.
🔗 Read the Claude Opus 4.5 coverage
Check Point’s December 4 virtual event dives into securing AI-powered innovation, and Lakera will be part of the discussion. It’s a great chance to hear how our combined teams are approaching hybrid mesh security and AI-agent defense heading into 2026.
🔗 Register for the event
On December 10, we’re hosting a look back at 2025’s biggest AI-driven threats and what’s coming next. Mateo Rojas-Carulla, David Haber, and guest practitioners will unpack real attack trends and how defenders are preparing for 2026. If you work with AI in production, you’ll want to join us.
🔗 Save your spot
About CheckMates
Learn Check Point
Advanced Learning
YOU DESERVE THE BEST SECURITY