The Problem: A Broken Content Ecosystem
We’re watching the collapse of the web’s economic model in real time, and everyone knows it.

AI assistants have fundamentally changed how people consume information. Why wade through ten articles when Claude, ChatGPT, or Gemini can synthesize an answer in seconds? Why maintain 100 browser tabs for research when AI can connect the dots for you? The user experience is undeniably better—not because AI provides better quality than human research, but because humans will always trade some quality for massive time and effort savings.
The numbers bear this out. Traditional search traffic is declining. Publishers are hemorrhaging ad revenue. Quality journalism is becoming economically unviable. Meanwhile, AI platforms are training on and retrieving from this very content to provide their valuable summaries—without the economic feedback loop that sustains content creation.
Here’s what we know about human behavior:
- People don’t read; they scan and skim
- People hate ads—unless they’re relevant at exactly the right time and place
- People love AI because it reduces cognitive load
- People will accept “good enough” AI answers over “perfect” human research every single time
The current system has created a parasitic relationship: AI platforms extract value from content while publishers watch their business models crumble. Something has to give.
Why Current Solutions Don’t Work
Let’s examine the “solutions” being proposed:
Paywalls and robots.txt blocking: Publishers can block AI crawlers, but this is economic suicide. If your content isn’t in the AI’s training data or retrieval systems, you become invisible to the next generation of users. You’re choosing between slow death (blocked from AI) and fast death (AI cannibalizes your traffic).
Litigation and licensing deals: The New York Times sues OpenAI. News Corp signs deals with Google. These create a two-tier system: major publishers with legal teams get paid, everyone else gets exploited. It’s not scalable, it’s not fair, and it doesn’t solve the systemic problem.
Current ad models: Traditional display advertising is already failing. The problem isn’t ads themselves—it’s the lack of true personalization and the low “right time, right place” factor. Most ads are visual pollution that users have learned to ignore or block.
Post-hoc citation bolting: Some AI systems like Gemini use “generate-then-ground” approaches—they create an answer first, then try to find sources that support it. This is a band-aid solution that doesn’t truly attribute content and can’t reliably compensate creators. (I’ve written extensively about this problem.)
The Attribution Problem: A Technical Reality
Here’s the brutal truth: current AI architectures fundamentally cannot attribute their outputs to specific training data.
When Claude or GPT generates text, that knowledge is diffused across billions of neural network parameters. There’s no metadata layer saying “this sentence came from The Guardian, that insight from Nature.” Given how these models are trained, attribution to pre-training data isn’t possible without a fundamental architectural shift—perhaps something like attaching metadata to model weights themselves.

This means the only reliable way to provide attribution right now is through explicit grounding: the AI must synthesize its answer after retrieving specific sources (search results → page content → generated answer). This is why Google’s approach of grounding in web search results is the right architecture for attribution, while generate-first approaches are technically incapable of fair compensation.
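
To make the distinction concrete, here is a minimal sketch of a ground-then-generate flow in Python, where attribution falls out of the retrieval step by construction. Every function, URL, and field name here is a stand-in invented for illustration, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str        # page the excerpt was fetched from
    publisher: str  # who gets attributed (and, under CAPS, paid)
    excerpt: str    # passage actually used to ground the answer

# Stand-ins for a real search index and a real LLM call, purely for illustration.
def search(query: str) -> list[Source]:
    return [Source("https://example.com/article", "Example Publisher",
                   f"Relevant passage about {query}")]

def generate_grounded_answer(query: str, sources: list[Source]) -> str:
    # A real system would pass the excerpts to an LLM; this stub just summarizes.
    return f"Synthesized answer to {query!r} using {len(sources)} source(s)."

def answer_with_attribution(query: str) -> tuple[str, list[Source]]:
    """Ground-then-generate: retrieval happens before generation,
    so the sources that shaped the answer are known by construction."""
    sources = search(query)
    answer = generate_grounded_answer(query, sources)
    return answer, sources  # the retrieval log doubles as the attribution record
```

The point is the ordering: because retrieval precedes generation, the list of publishers to compensate exists before the answer does, which is exactly what a generate-first system cannot guarantee.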
CAPS: Content Attribution Payment Scheme

Here’s a framework that realigns all stakeholder incentives:
The Three-Part Model
1. Micropayments for Grounded Content
When an AI grounds its response in actual content retrieval—fetching and using a publisher’s article to generate an answer—that publisher receives a small licensing fee comparable to an ad click value. This isn’t charity; it’s paying for the intellectual property the AI is using in real time.
2. Ad-Free Attribution Traffic
The publisher doesn’t show ads on pages when users click through from AI-attributed results. Why? Because they’ve already been compensated through the micropayment. This improves user experience and removes the perverse incentive to maximize ad impressions over content quality.
3. Hyper-Contextual AI Answer Monetization
AI platforms (Google, Microsoft, Anthropic, OpenAI) recoup the cost of content micropayments by monetizing the AI answer itself through advertising. But these aren’t the intrusive banner ads users hate—they’re hyper-relevant ads matched to the exact query, at the exact moment of intent.
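
For the model to be self-sustaining, the ad revenue on each answer has to cover the micropayments to grounded sources plus a margin for the platform. A toy back-of-the-envelope calculation, in which every figure is an assumption for illustration only:

```python
# Toy per-answer economics. All numbers are illustrative assumptions, not proposed rates.
sources_grounded = 3             # articles retrieved and used for one answer
micropayment_per_source = 0.03   # dollars paid to each grounded publisher
ad_revenue_per_answer = 0.20     # dollars from the hyper-contextual ad

content_cost = sources_grounded * micropayment_per_source   # $0.09
platform_margin = ad_revenue_per_answer - content_cost      # $0.11

print(f"content cost ${content_cost:.2f}, platform margin ${platform_margin:.2f}")
```

Whether that margin stays positive for a given query class depends entirely on how the micropayment is calibrated, which is addressed later in this piece.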

Why This Works: Aligned Incentives
Users get:
- Cognitive load reduction
- Quick, relevant answers
- Better ad experiences (contextually relevant, not visual spam)
Publishers get:
- Direct compensation for content use
- Sustainable business model independent of traffic volume
- Incentive to create high-quality, factual content that AI systems will use
Advertisers get:
- Hyperpersonalized leads
- Superior ROAS (reaching users at peak intent)
- Transparent attribution (they know exactly what query triggered the ad)
AI platforms get:
- Sustainable content ecosystem (publishers keep creating)
- Ad revenue that covers micropayments plus margin
- Reduced legal/regulatory pressure
The Flow: How CAPS Works
Traditional broken model:
Publisher creates content → AI trains on it → User asks AI → AI answers → Publisher gets nothing → Publisher dies
CAPS model:
User asks AI → AI searches/retrieves sources → AI generates grounded answer → Publisher receives micropayment → AI shows contextual ad → Advertiser pays → Revenue split → Everyone wins
Technical Considerations: What Needs to Happen
For the ML and infrastructure community to make this work, several pieces need to fall into place:
1. Grounding-First Architecture
AI systems must retrieve and ground before or during generation, not after. This is the only technically feasible way to provide reliable attribution with current technology. Generate-then-ground approaches are insufficient for fair compensation.
2. Attribution Tracking Infrastructure
We need robust systems to do the following (a rough sketch follows the list):
- Track which content was retrieved and used
- Measure the “contribution weight” of each source
- Handle micropayment distribution at scale
- Prevent gaming and fraud
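
As one possible shape for the first two items, here is a rough sketch of turning retrieval relevance scores into normalized contribution weights and per-source payouts. The field names, the proportional-split rule, and the dollar pool are assumptions made up for illustration, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class Attribution:
    url: str
    publisher: str
    weight: float   # normalized share of this answer's grounding
    payout: float   # micropayment owed for this answer, in dollars

def attribute(scores: dict[str, tuple[str, float]], pool: float) -> list[Attribution]:
    """Split a per-answer micropayment pool across sources in proportion
    to retrieval relevance. `scores` maps URL -> (publisher, relevance);
    assumes at least one source with a positive score."""
    total = sum(relevance for _, relevance in scores.values())
    return [
        Attribution(url, publisher, relevance / total,
                    round(pool * relevance / total, 4))
        for url, (publisher, relevance) in scores.items()
    ]

# Example: a primary source and two supporting sources sharing a $0.10 pool.
records = attribute(
    {"https://a.example/investigation": ("Regional Paper", 0.6),
     "https://b.example/explainer": ("National Daily", 0.3),
     "https://c.example/brief": ("Trade Publisher", 0.1)},
    pool=0.10,
)
```

The anti-gaming work would then operate over records like these rather than over raw traffic, though nothing here claims that part is trivial.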
The good news? This infrastructure is being built right now. Cloudflare’s Net Dollar initiative, Google’s Agent Payments Protocol (AP2), and the X402 Foundation are all working on exactly this type of micropayment infrastructure.
3. Quality Filtering: A Solved Problem
How do we prevent low-quality or AI-generated spam from gaming the system to farm micropayments?
We don’t need to solve this—it’s already solved. This is a search quality problem, not an AI problem. Google, Bing, and other search engines have spent two decades building:
- Authority and trust signals (PageRank, backlink analysis)
- Spam detection algorithms (Panda, Penguin)
- Content quality classifiers
- E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) evaluation
- Manipulation detection systems
The AI layer sits on top of an already-filtered corpus. If content is spammy enough to game micropayments, it’s already being demoted by core search quality systems and won’t be retrieved for grounding in the first place.
4. Payment Calibration
The “comparable to an ad click” payment needs calibration:
For major publishers: Custom negotiated licensing deals (like Spotify with major labels). News Corp, Nine Entertainment, ABC, Guardian—these organizations will want structured agreements reflecting their scale and influence.
For everyone else: A tiered, transparent system based on:
- Content quality signals
- Domain authority
- Query competitiveness (high-value commercial queries might have higher micropayments)
- Attribution weight (primary source vs. supporting source)
This doesn’t need to be perfect on day one. It needs to be fair enough to be sustainable and transparent enough to be trusted.
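
To show how those signals could combine, here is a toy calibration function. The weights, the base rate, and the example inputs are placeholders assumed purely for illustration; the real values would be the subject of the negotiations described above.

```python
def calibrate_micropayment(quality: float,            # 0-1 content quality signal
                           authority: float,          # 0-1 domain authority
                           query_value: float,        # 0-1 commercial competitiveness
                           attribution_weight: float, # source's share of the grounding
                           base_rate: float = 0.10) -> float:
    """Toy tiered rate: scale a base per-answer rate by quality, authority,
    query value, and this source's attribution weight. All coefficients
    are illustrative assumptions, not proposed values."""
    multiplier = 0.5 * quality + 0.3 * authority + 0.2 * query_value
    return round(base_rate * multiplier * attribution_weight, 4)

# A strong primary source on a high-value commercial query...
primary = calibrate_micropayment(0.9, 0.8, 0.9, attribution_weight=0.6)    # ~$0.0522
# ...versus a weaker supporting source on the same query.
supporting = calibrate_micropayment(0.5, 0.4, 0.9, attribution_weight=0.1) # ~$0.0055
```

The exact shape matters less than the properties this section asks for: transparent inputs, higher pay for primary sources, and room to tune rates by query value.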


The Australian Context
For Australian publishers, this is existential. Our media landscape is already concentrated, with News Corp and Nine dominating. Regional journalism is dying. The ABC is under constant budget pressure.
When international AI platforms harvest Australian content without compensation, they’re extracting value from our information ecosystem while contributing nothing back. This is particularly acute for:
- Regional news organizations barely surviving on thin margins
- Investigative journalism that requires significant investment
- Specialized B2B publishers serving niche professional communities
- Indigenous media outlets preserving and sharing culture
CAPS provides a framework where quality Australian content gets compensated regardless of traffic volume. A regional paper’s investigative report that AI uses to answer queries across the country gets paid—even if users never visit the site.
Current Momentum: The Pieces Are Moving
This isn’t just theoretical. Major infrastructure players are actively building the foundations:
Cloudflare’s Net Dollar – A micropayment system designed specifically for AI-driven internet interactions. Cloudflare processes ~20% of all web traffic; if anyone can implement universal micropayments, it’s them.
Google’s AP2 Protocol – the Agent Payments Protocol, which lets autonomous AI agents transact with web services. This is Google acknowledging that the agentic web needs an economic layer.
X402 Foundation (Cloudflare + Coinbase) – Building open standards for AI-to-web payment infrastructure.
Content signals and AI policies – Cloudflare and others are developing standardized ways for publishers to signal usage preferences and pricing to AI systems.

These aren’t press releases—they’re actual technical infrastructure being deployed. The economic plumbing for CAPS is being installed right now.
What Needs to Happen Next
This is a call to the technical community, policy makers, and industry leaders:
For ML Researchers and Engineers
I’m not naive enough to think I can dictate technical architecture to you. Instead, I’m posing the challenge: How do we build reliable, scalable attribution systems that enable fair compensation?
Open questions:
- Can we develop metadata layers that track content contribution without generate-then-ground approaches?
- What novel architectures might enable training-data attribution?
- How do we measure “contribution weight” fairly across multiple sources?
- What anti-gaming mechanisms prevent micropayment fraud at scale?
For AI Platforms
Google, Microsoft, Anthropic, OpenAI—you have the power to implement this. You also have the motivation: regulatory pressure is mounting, litigation is expensive, and killing your content sources is unsustainable.
Early movers get goodwill and competitive advantage. Late movers get regulated.
For Publishers
Engage constructively. Yes, traffic is declining. Yes, AI feels threatening. But blocking AI is choosing irrelevance. CAPS provides a framework where your quality content generates sustainable revenue regardless of traffic patterns.
For Policy Makers
This needs guardrails and standards, but not heavy-handed regulation that stifles innovation. Focus on:
- Transparency in attribution and payment
- Anti-monopoly provisions (ensuring micropayments aren’t accessible only to major publishers)
- Quality standards (ensuring payments go to legitimate content creators)
- Privacy protections (micropayments shouldn’t require invasive tracking)
Taking a Leadership Position
I’m putting this framework forward not because I think I can single-handedly move the needle—I’m a realist about my influence—but because the Australian SEO and digital publishing community needs a coherent technical vision to advocate for.
Too many agencies are peddling hot air and fluff about “AI disruption” without proposing actual solutions. Too many thought leaders are either doom-posting about AI destroying the web or blindly cheerleading innovation without acknowledging the economic damage.
CAPS is a concrete proposal. It’s technically feasible with current infrastructure. It aligns incentives. It preserves quality content creation while embracing AI’s benefits.
The conversation needs to move from “AI is ruining publishing” to “here’s how we build a sustainable AI-era content ecosystem.”
This is that conversation starter.