Security Researcher Exposes AI Vulnerabilities at Tech Firms

A security researcher, Aonan Guan, has demonstrated methodical attacks on AI agents from Anthropic, Google, and Microsoft, using prompt injection techniques to steal sensitive credentials such as API keys and GitHub tokens. The major tech companies responded by paying varying bug bounties; however, details remain undisclosed for Google’s payment. Despite the discovery, none of the companies assigned a CVE (Common Vulnerabilities and Exposures), raising questions on transparency and future implications in AI security.