Google's security team scanned billions of web pages and found active malicious payloads targeting AI agents. The attacks work by embedding hidden instructions into websites that trick autonomous AI systems into executing commands they shouldn't.

The payloads go after real targets. Attackers are crafting pages designed to compromise PayPal accounts, delete files, and steal credentials. The threat isn't theoretical. Google found actual working exploits in the wild.

This hits different from traditional phishing. Human users can spot obvious scams. AI agents running on automation often can't. A bot scraping data or executing trades follows instructions embedded in a webpage without the skepticism a person applies.

The vulnerability exposes a gap in how AI agents interact with untrusted web content. As more teams deploy autonomous systems to handle finance, data management, and other sensitive tasks, this attack surface widens fast.

The implications matter for crypto holders and defi users specifically. Smart contract interactions, wallet transactions, and exchange integrations increasingly rely on automated agents. If those agents hit a malicious webpage, adversaries gain direct access to move money or drain accounts.

Google didn't disclose specific mitigation strategies yet, but the finding signals that AI agent security is now a front-line concern. Anyone running bots or automated systems needs to assume the websites they touch are potentially hostile.