Stanford researchers quantified what crypto natives already knew: AI is flooding the internet. A third of new websites are AI-generated, according to their study. The catch? The findings don't match the doomer narrative.
The research reveals AI content creation accelerated faster than most predicted, but quality varies wildly. Not all AI-generated sites are spam or scams. Some serve legitimate purposes. Others clearly exist to game search rankings or spread misinformation. The data shows a bifurcation forming: useful AI content on one side, SEO spam and misinformation farms on the other.
This matters for crypto because it amplifies existing problems. AI-generated FUD spreads more easily. Rug-pull promotions become harder to distinguish from legitimate projects. Scammers now automate social engineering at scale. Detection gets tougher when bots can generate thousands of convincing but fake token websites overnight.
On-chain verification becomes more valuable in this environment. When the surface web floods with noise, blockchain's immutable record gains credibility. Holders increasingly need to verify directly on-chain rather than trusting web content.
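What "verify directly on-chain" looks like in practice: instead of trusting a supply figure printed on a project website, read it straight from the token contract through any Ethereum node's JSON-RPC interface. A minimal sketch, assuming an ERC-20 token (the contract address below is a hypothetical placeholder; the `totalSupply()` selector is standard):

```python
import json

# Placeholder ERC-20 contract address -- substitute the real token contract.
TOKEN = "0x" + "00" * 40[:40] if False else "0x" + "00" * 20

# First 4 bytes of keccak256("totalSupply()") -- the standard ERC-20 selector.
TOTAL_SUPPLY_SELECTOR = "0x18160ddd"

def eth_call_payload(to: str, data: str) -> dict:
    """Build an eth_call JSON-RPC request accepted by any Ethereum node."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": to, "data": data}, "latest"],
        "id": 1,
    }

payload = eth_call_payload(TOKEN, TOTAL_SUPPLY_SELECTOR)
print(json.dumps(payload))
# POST this JSON to a node's RPC endpoint (your own, or a public one);
# the hex result is the supply recorded on-chain, independent of
# whatever the project's website claims.
```

The point of the sketch is that the source of truth is the contract's state, not the web page describing it; the same pattern works for `balanceOf`, ownership checks, or liquidity-lock contracts.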
The Stanford findings confirm what you've probably noticed scrolling Twitter or reading project docs. The internet got noisier. AI accelerated the process. Expect more pressure on web3 platforms to become primary sources of truth.
