OpenAI Releases GPT-5.2 After Internal “Code Red”
OpenAI released GPT-5.2, its most capable model yet for coding, reasoning, and multimodal work, after speeding up development in response to Google’s Gemini 3. The launch shows how competitive pressure is shaping both release timelines and risk decisions.
🔗 Read the announcement
Critical Flaws Found in AI-Powered Developer Tools (“IDEsaster”)
Researchers disclosed dozens of serious vulnerabilities across popular AI-assisted IDEs, enabling data theft and remote code execution through poisoned prompts and extensions. As AI tools become standard in development workflows, these findings expose a fast-growing attack surface; a sketch of the general poisoning pattern follows below.
🔗 Read the disclosure
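Illustrative only, and not drawn from the disclosure itself: a recurring pattern in this class of bugs is an assistant ingesting workspace files (rules files, READMEs, extension manifests) that an attacker has seeded with hidden instructions. Here is a minimal sketch of a pre-ingestion scanner; all file patterns and heuristics are hypothetical.

```python
# Hypothetical illustration: flag signs of prompt poisoning in workspace
# files before an AI IDE feeds them to the model as context.
import re
from pathlib import Path

# Zero-width and bidi-control characters are a common way to hide payloads.
HIDDEN_CHARS = re.compile(r"[\u200b\u200c\u200d\u2060\u202a-\u202e]")
# Crude directive heuristics; real scanners would go much further.
SUSPICIOUS = re.compile(
    r"ignore (all )?previous instructions|curl\s+http|exfiltrate",
    re.IGNORECASE,
)

def scan_context_file(path: Path) -> list[str]:
    """Return a list of poisoning indicators found in one file."""
    text = path.read_text(encoding="utf-8", errors="replace")
    findings = []
    if HIDDEN_CHARS.search(text):
        findings.append("hidden zero-width/bidi characters")
    if SUSPICIOUS.search(text):
        findings.append("instruction-injection phrasing")
    return findings

if __name__ == "__main__":
    # Example: sweep markdown files an assistant typically reads as context.
    for f in Path(".").glob("**/*.md"):
        if issues := scan_context_file(f):
            print(f"{f}: {', '.join(issues)}")
```

Heuristics like these only raise the bar; the underlying lesson of the disclosure is that treating arbitrary workspace content as trusted model context is the root problem.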
OWASP Releases Top 10 Risks for Agentic AI Security
The OWASP GenAI Security Project published its first Top 10 list focused on agentic AI, covering risks such as agent hijacking, unsafe tool use, and excessive autonomy. It offers practical guidance for teams building or deploying autonomous systems today; the sketch below shows one way that guidance can translate into code.
🔗 Explore the Top 10
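One way to read the list's entries on unsafe tool use and excessive autonomy is as a call for explicit tool allowlists and human gates on destructive actions. Below is a minimal sketch of that idea; all names (Tool, ALLOWLIST, run_tool) are hypothetical and not taken from the OWASP document.

```python
# Hypothetical mitigation sketch: constrain agent tool calls to an allowlist
# and require explicit human approval for anything destructive.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    fn: Callable[[str], str]
    destructive: bool = False  # True => needs a human yes before running

ALLOWLIST: dict[str, Tool] = {
    "read_file": Tool("read_file", lambda p: open(p).read()),
    "delete_file": Tool("delete_file", lambda p: f"deleted {p}", destructive=True),
}

def run_tool(name: str, arg: str, approve: Callable[[str], bool]) -> str:
    """Execute a tool call only if it is allowlisted and, when destructive,
    explicitly approved by a human."""
    tool = ALLOWLIST.get(name)
    if tool is None:
        # Guards against agent hijacking into arbitrary tools.
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    if tool.destructive and not approve(f"allow {name}({arg!r})?"):
        raise PermissionError(f"human rejected destructive call {name!r}")
    return tool.fn(arg)
```

A call like run_tool("delete_file", "tmp.txt", approve=lambda msg: input(msg + " [y/N] ").lower() == "y") then requires an explicit yes before anything destructive runs.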
Trump Signs Executive Order Blocking State AI Regulation
A new U.S. executive order seeks to block states from enforcing their own AI regulations, shifting authority to the federal level. Supporters argue it reduces regulatory fragmentation, while critics counter that it strips away meaningful safety oversight.
🔗 Read the coverage
EU Opens Antitrust Probe Into Google’s AI Content Use
European regulators opened an antitrust investigation into whether Google unfairly uses publisher content to train and operate its AI systems. The case could influence how AI training data is sourced and compensated across the industry.
🔗 Read the story
From a frontier lab's internal "code red" to concrete security failures and emerging agentic risks, this week shows how tightly AI progress and real-world exposure are now linked. The pressure to move fast is only increasing, and so is the cost of getting security wrong.