Hi CheckMates!
Welcome back! Hope you had a restful holiday break. We’re kicking off the year with your weekly dose of AI news, and the theme is clear: agents are getting more capable, and the security stakes are rising right alongside them.
Let’s get into it.
Security researchers observed 91,000+ attack sessions targeting AI infrastructure between October 2025 and January 2026, including systematic probing of LLM endpoints. The takeaway: AI deployments are now a mainstream target class, and defenses need to look more like “production security” than “prototype security.”
🔗 Read the report
OpenAI shared how it’s continuously hardening its Atlas agent against prompt injection, with layered defenses around trust boundaries, tool use, and automated red-teaming. It’s one of the most detailed public accounts so far of what prompt-injection defense looks like at a frontier lab.
🔗 Read the security deep dive
Researchers showed IBM’s coding agent could be manipulated via prompt injection into running risky commands, including downloading and executing malware. Another reminder that tool-enabled agents don’t just “say” dangerous things; they can do them if guardrails fail.
🔗 Read the disclosure
A Chaos Communication Congress talk walks through end-to-end exploits against computer-use and coding agents, illustrating how attacker-controlled content can hijack agent behavior. If you’re building or deploying agents today, this is a must-watch for threat modeling.
🔗 Watch the talk
Sam Altman warned that more autonomous AI agents could become powerful tools for attackers if safety and security don’t keep up. The broader point: the barrier to sophisticated cyber operations may keep dropping as agents get better at chaining actions.
🔗 Read the article
NVIDIA announced the Rubin platform, positioning it as the next big step for training and running frontier-scale AI systems. It’s another signal that the compute arms race is accelerating, and that “agentic workloads” are quickly becoming a first-class hardware target.
🔗 See the announcement
NousCoder-14B is a new open-source coding model landing right as demand for coding agents surges. The pace of open releases keeps tightening the gap between proprietary assistants and what teams can run (and customize) themselves.
🔗 Explore the release
Anthropic is reportedly preparing a funding round targeting a $350B valuation, a massive signal of how aggressively capital is concentrating in frontier AI labs. Big valuations also mean big expectations, especially around reliability, safety, and enterprise readiness.
🔗 Read more
OpenAI announced “OpenAI for Healthcare,” positioning secure AI products for healthcare organizations with a focus on protecting health data and supporting compliance needs. As AI moves deeper into regulated, high-stakes environments, security and privacy stop being differentiators; they become table stakes.
🔗 Read the announcement
From exploited agents to hardened defenses, and from new chips to new clinical deployments, this week makes one thing clear: AI is restarting the year at full speed, and security is now part of the core story.