
The AI snake eating its own tail: how zero-click search is devouring quality journalism and its own knowledge supply

By Virginie Goupilleau, award-winning media leader and digital transformation expert

Having spent two decades navigating the world of media agencies and holding companies, I’ve seen my share of seismic shifts. But nothing, and I mean nothing, has shaken the foundation of our industry quite like AI. It’s a marvel to behold, yet I can’t help but feel a deep sense of unease. 

What I’m seeing now isn’t just another industry disruption; it feels like the early stages of digital self-cannibalism. AI systems are systematically draining their own data sources while promising infinite knowledge: a technological ouroboros devouring its own tail until nothing remains.

The numbers are staggering, and the industry’s response is worryingly inadequate so far. 

The carnage in numbers 

The statistics emerging from 2025 are hard to ignore, and they paint a picture of an ecosystem in freefall:

• Around 60% of Google searches in the US and EU end in zero clicks. Users receive answers directly from search results or AI summaries, eliminating the need to visit a website. 

• Since the launch of AI Overviews, publishers have seen click-through rates fall by 15-64%, depending on industry and query type. 

• In the US, organic search referrals dropped from 2.3 billion in July 2024 to 1.8 billion in June 2025. That’s a 21% decline in less than a year.

• For major publishers such as CBS News, Yahoo News, and MSN, up to 75% of top keywords now trigger AI Overviews with no click-through at all.

• And that’s without counting the estimated 3.25 billion daily AI prompts currently being processed via ChatGPT, Perplexity, Gemini, and Claude. Each one potentially answers questions without sending users to the original sources.

Here’s the catch: AI referral traffic from these platforms remains pathetically small and represents just 36 million global visits in June 2025. Even top publishers like Yahoo, Reuters, and The Guardian are receiving only 1-2 million monthly visits from AI sources. It’s not even close to offsetting what’s being devoured.

The self-destruction paradox 

This is where the snake begins eating its own tail. While AI companies are hoovering up content at unprecedented rates for training, experts are warning that we’re approaching a “data cliff”. “The cumulative sum of human knowledge has been exhausted in AI training,” Elon Musk said. “That happened basically last year.”

According to research from Epoch AI, we’ll exhaust high-quality training data by 2027 if current consumption patterns continue. AI companies have created a system that threatens the economic foundation producing original, quality content, the very supply they feed off. And the cascading consequences for them are equally serious:

1. Depleted source material: Quality journalism requires investment in original reporting, investigation, analysis and human critical thinking and intelligence. When the economic model supporting this work collapses, the content that does get created is less reliable and less trustworthy, starving AI models of the high-quality data they need to learn from and serve their users.

2. Threatened diversity and inclusion: Smaller, independent voices, often representing marginalised communities, are squeezed out first when monetisation becomes impossible, creating information homogenisation and bias that AI systems then perpetuate and amplify. 

3. Increased hallucination risks: As high-quality data becomes scarce, AI models begin producing less accurate outputs, undermining the very promise of reliable artificial intelligence. 

4. Eroded trust in information: As journalism declines and AI hallucinations increase, public trust in information sources will decline, jeopardising AI companies’ business models before they even start to make money from them.

5. Overlooked ethical considerations: High-quality journalism doesn’t just inform, it drives social and political progress by exposing problems and holding power to account. It also feeds AI learning with critical, ethical human input, which is essential to these systems’ long-term viability. When this disappears, we don’t just lose journalism; we lose the intellectual diversity that keeps AI systems grounded in human reality.

Even Sam Altman recently acknowledged the “dead internet theory” could be true: the idea that the internet is becoming dominated by AI-generated content and crawled by bots rather than humans. When the head of OpenAI is worried about digital authenticity, we should all be paying attention!

Why this is an advertiser crisis too (not just a publisher problem) 

At first glance, this may feel like a publisher problem. It isn’t. It’s an advertiser crisis that threatens the entire commercial internet. 

Weakened audience definition: Publishers supply the rich first-party data that powers targeted advertising. With users staying inside AI answers, that data will disappear. Fewer visits mean fewer impressions, reduced ad inventory, smaller target audiences, and declining returns on campaign investments. 

Increased brand safety risks: As high-quality journalism loses economic footing, advertisers risk being left with a substandard content ecosystem where misinformation and low-quality content become the norm.

Vanishing contextual targeting: Premium content environments providing interest-based, relevant contexts for brand messaging are likely to deplete over time, forcing advertisers into increasingly commoditised, context-free advertising spaces.

Further market consolidation: Investing in a handful of dominant, undiversified platforms is risky. If those platforms own the audience and the data, advertisers will be penalised on price and negotiating power, directly impacting their returns on investment.

Advertising follows attention, scale and above all ROAS. When 2.3 billion visits become 1.8 billion in eleven months, that’s not just a publisher revenue problem; it’s a fundamental shrinking of the commercial internet’s addressable audience. 

Publishers fight back (but the snake keeps eating)

When your audience disappears, so does your ability to monetise it. 

Let’s consider the fundamentals here: fewer website visits mean less on-site advertising inventory, and reduced earnings from both programmatic and branded deals. We’re not talking about marginal losses here.

The cruel irony? AI referral traffic from ChatGPT, Perplexity, and Gemini is growing, but as noted above, it remains nowhere near enough to offset the losses.

To their credit, publishers aren’t going down without a fight. The response has been swift and increasingly sophisticated, over and above the ever more crippling lawsuits.

Blocking AI crawlers: Millions of sites have already restricted AI training through managed robots.txt features or blocking rules using technology from companies like Cloudflare. Major outlets such as The New York Times, The Wall Street Journal, Vox, and Reuters now block AI crawlers entirely. However, these blocking solutions don’t seem to be 100% reliable, judging by the recent news of Trusted Reviews being hit by a digital blitz of 1.6 million content scrapes in one day by OpenAI, all while having a robots.txt file in place (a minimal robots.txt sketch follows after this list).

Answer Engine Optimisation: Content is being restructured to be cited inside AI summaries, ensuring visibility even without clicks. However, a mere mention generates no revenue.

Alternative commercial models: Memberships, newsletters and community-first approaches are growing. Many are experimenting with content partnerships on LinkedIn, TikTok and YouTube, as well as podcasts, where engagement still translates into revenue. Pay-per-crawl and similar access-based models are also being evaluated to give publishers more control over how AI companies access their sites and then monetise their content.

AI visibility tracking: Tools like SEMrush and Ziptie now measure citations within AI results, shifting KPIs from page rank to share-of-voice inside generative answers. Yet a commercial model still needs to be established here.
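To make the first of these approaches concrete, here is a minimal sketch of the kind of robots.txt rules publishers are deploying. The user-agent tokens below are the publicly documented ones for OpenAI, Anthropic, Common Crawl and Google’s AI-training controls; a real deployment would need a fuller, regularly updated list, and this is an illustration rather than a guaranteed defence.

  # Ask AI training crawlers to stay out of the entire site
  User-agent: GPTBot
  Disallow: /

  User-agent: ClaudeBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /

As the Trusted Reviews episode shows, robots.txt is a voluntary convention rather than an enforcement mechanism, which is why publishers increasingly pair it with network-level blocking from providers such as Cloudflare that can challenge or refuse crawler traffic outright.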

None of these measures currently offsets the scale of the losses. Even combined, they remain defensive manoeuvres against opponents with deep pockets and little incentive to change. 

What needs to happen next (before the snake finishes its meal) 

The question isn’t whether publishers can adapt. They always have. The question is whether all participants in this new ecosystem can create a sustainable model that benefits everyone economically, while still promoting fair, ethical and diverse sources of information for end users, before the entire system collapses under its own contradictions.

For advertisers, agencies, AI companies, and regulators, the stakes couldn’t be clearer: 

Advertisers should demand sustainable content partnerships. If AI platforms become the new gatekeepers, brands should push for transparency, fair licensing models and compensation for publishers fuelling the ecosystem. Conscious investment in quality journalism must remain a strategic priority.

Agencies must review their measurement frameworks. Share-of-voice inside AI summaries, brand presence in zero-click environments, and in-platform engagement need to be tracked alongside traditional measurement metrics. The old metrics won’t work in a world where the audience never leaves the search results page.

AI companies must recognise their existential dependence on content creators. They must work collaboratively with their information sources and find an equitable way to compensate them for the content they consume in this new economy. Failing that, they will lose the very knowledge supply they rely on to serve the end consumer.

Regulators cannot ignore this imbalance. Just as the music and film industries fought for fair licensing frameworks, journalism requires economic models that align incentives between creators and platforms. The current system is unsustainable for everyone involved.

The brand new RSL (Really Simple Licensing) standard, created by one of the co-creators of RSS and already backed by Reddit, Quora, Yahoo, and Medium, could offer a great collective solution. It offers publishers control through a licensing mechanism: they can code rules directly into their sites via robots.txt, setting their content-access preferences and the relevant commercial terms. AI companies pay once, and the royalties are split among publishers, just as music royalties are. Whilst it is not perfect, it is the first attempt at a common protocol that all publishers and AI tools can tap into.

The bottom line: stop the snake before it’s too late! 

AI’s promise of infinite knowledge rests entirely on human creativity and insight, the very resources it is making economically unviable. We are witnessing a classic case of digital cannibalisation where the snake is devouring its own tail until nothing remains. 

Unless we find a way to align the economic interests of publishers and content creators, advertisers and AI developers, we risk not just the collapse of publishing, but the degradation of advertising, of trust in AI, and of human knowledge itself. When AI systems can no longer distinguish between reliable information and synthetic noise, when diverse voices disappear from the training data, when original content becomes economically impossible, the promise of artificial intelligence becomes a hollow shell.

The snake is already halfway through its tail. Advertisers, publishers, AI corporations, and regulators must act together now, or we’ll all watch the internet eat itself alive.