CAPS: A Content Attribution Payment Scheme for the AI Era

by Dan Petrovic

The Problem: A Broken Content Ecosystem

We’re watching the collapse of the web’s economic model in real-time, and everyone knows it.

AI assistants have fundamentally changed how people consume information. Why wade through ten articles when Claude, ChatGPT, or Gemini can synthesize an answer in seconds? Why maintain 100 browser tabs for research when AI can connect the dots for you? The user experience is undeniably better—not because AI provides better quality than human research, but because humans will always trade some quality for massive time and effort savings.

The numbers bear this out. Traditional search traffic is declining. Publishers are hemorrhaging ad revenue. Quality journalism is becoming economically unviable. Meanwhile, AI platforms are training on and retrieving from this very content to provide their valuable summaries—without the economic feedback loop that sustains content creation.

Here’s what we know about human behavior: people will reliably trade some quality for a large saving in time and effort.

The current system has created a parasitic relationship: AI platforms extract value from content while publishers watch their business models crumble. Something has to give.

Why Current Solutions Don’t Work

Let’s examine the “solutions” being proposed:

Paywalls and robots.txt blocking
Publishers can block AI crawlers, but this is economic suicide. If your content isn’t in the AI’s training data or retrieval systems, you become invisible to the next generation of users. You’re choosing between slow death (blocked from AI) and fast death (AI cannibalizes your traffic).

Litigation and licensing deals
The New York Times sues OpenAI. News Corp signs deals with Google. These create a two-tier system: major publishers with legal teams get paid, everyone else gets exploited. It’s not scalable, it’s not fair, and it doesn’t solve the systemic problem.

Current ad models
Traditional display advertising is already failing. The problem isn’t ads themselves—it’s the lack of true personalization and the low “right time, right place” factor. Most ads are visual pollution that users have learned to ignore or block.

Post-hoc citation bolting
Some AI systems like Gemini use “generate-then-ground” approaches—they create an answer first, then try to find sources that support it. This is a band-aid solution that doesn’t truly attribute content and can’t reliably compensate creators. (I’ve written extensively about this problem.)

The Attribution Problem: A Technical Reality

Here’s the brutal truth: current AI architectures fundamentally cannot attribute their outputs to specific training data.

When Claude or GPT generates text, that knowledge is diffused across billions of neural network parameters. There’s no metadata layer saying “this sentence came from The Guardian, that insight from Nature.” By design, attribution to pre-training data isn’t possible without a fundamental architectural shift—perhaps something like attaching metadata to model weights themselves.

This means the only reliable way to provide attribution right now is through explicit grounding: the AI must synthesize its answer after retrieving specific sources (search results → page content → generated answer). This is why Google’s approach of grounding in web search results is the right architecture for attribution, while generate-first approaches are technically incapable of fair compensation.

CAPS: Content Attribution Payment Scheme

Here’s a framework that realigns all stakeholder incentives:

The Three-Part Model

1. Micropayments for Grounded Content
When an AI grounds its response in actual content retrieval—fetching and using a publisher’s article to generate an answer—that publisher receives a small licensing fee comparable to an ad click value. This isn’t charity; it’s paying for the intellectual property the AI is using in real-time.

2. Ad-Free Attribution Traffic
The publisher doesn’t show ads on pages when users click through from AI-attributed results. Why? Because they’ve already been compensated through the micropayment. This improves user experience and removes the perverse incentive to maximize ad impressions over content quality.

3. Hyper-Contextual AI Answer Monetization
AI platforms (Google, Microsoft, Anthropic, OpenAI) recoup the cost of content micropayments by monetizing the AI answer itself through advertising. But these aren’t the intrusive banner ads users hate—they’re hyper-relevant ads matched to the exact query, at the exact moment of intent.
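To make that circular flow concrete, here is a minimal sketch of the per-query economics. Every number and name below is an illustrative placeholder, not a proposed rate:

# Illustrative CAPS unit economics for one grounded answer.
# All figures are made-up placeholders, not proposed rates.
GROUNDING_FEE = 0.15           # micropayment per source used (roughly an ad click's value)
AD_REVENUE_PER_ANSWER = 0.60   # what the hyper-contextual ad on the answer earns

def settle_query(sources_used: int) -> dict:
    """Pay publishers first; the platform keeps the margin from the contextual ad."""
    publisher_payout = sources_used * GROUNDING_FEE
    platform_margin = AD_REVENUE_PER_ANSWER - publisher_payout
    return {"publisher_payout": round(publisher_payout, 2),
            "platform_margin": round(platform_margin, 2)}

print(settle_query(sources_used=3))
# {'publisher_payout': 0.45, 'platform_margin': 0.15}

The point of the sketch is the ordering: the ad shown on the answer funds the content that grounded the answer, with margin left for the platform.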

Why This Works: Aligned Incentives

Users get:

  • Cognitive load reduction
  • Quick, relevant answers
  • Better ad experiences (contextually relevant, not visual spam)

Publishers get:

  • Direct compensation for content use
  • Sustainable business model independent of traffic volume
  • Incentive to create high-quality, factual content that AI systems will use

Advertisers get:

  • Hyperpersonalized leads
  • Superior ROAS (reaching users at peak intent)
  • Transparent attribution (they know exactly what query triggered the ad)

AI platforms get:

  • Sustainable content ecosystem (publishers keep creating)
  • Ad revenue that covers micropayments plus margin
  • Reduced legal/regulatory pressure

The Flow: How CAPS Works

Traditional broken model:

Publisher creates content → AI trains on it → User asks AI → AI answers → Publisher gets nothing → Publisher dies

CAPS model:

User asks AI → AI searches/retrieves sources → AI generates grounded answer → Publisher receives micropayment → AI shows contextual ad → Advertiser pays → Revenue split → Everyone wins

Technical Considerations: What Needs to Happen

For the ML and infrastructure community to make this work, several pieces need to fall into place:

1. Grounding-First Architecture

AI systems must retrieve and ground before or during generation, not after. This is the only technically feasible way to provide reliable attribution with current technology. Generate-then-ground approaches are insufficient for fair compensation.
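As a minimal sketch of what grounding-first means in practice, here is the shape of the pipeline. retrieve() and generate() are stand-in stubs, not any vendor’s real API:

# Sketch of a grounding-first pipeline; retrieve() and generate() are stubs.
# The key property: sources are fetched before generation, so every answer
# carries an explicit list of the documents it was conditioned on.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    publisher: str
    excerpt: str

def retrieve(query: str, top_k: int = 5) -> list[Source]:
    # A real system would query a search index and fetch page content.
    return [Source("https://example.com/article", "Example Publisher", "...")]

def generate(query: str, context: list[Source]) -> str:
    # A real system would condition an LLM on the retrieved context.
    return f"Answer to {query!r}, grounded in {len(context)} source(s)."

def answer_query(query: str) -> tuple[str, list[Source]]:
    sources = retrieve(query)                  # search results → page content first
    answer = generate(query, context=sources)  # generation conditioned on those sources
    return answer, sources                     # attribution falls out of the architecture

answer, cited_sources = answer_query("best mattress for back pain")

A generate-then-ground system runs the same two steps in the opposite order, which is why it can decorate an answer with links but cannot say which content actually shaped it.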

2. Attribution Tracking Infrastructure

We need robust systems to:

  • Track which content was retrieved and used
  • Measure the “contribution weight” of each source
  • Handle micropayment distribution at scale
  • Prevent gaming and fraud

The good news? This infrastructure is being built right now. Cloudflare’s Net Dollar initiative, Google’s Agents-to-Payments (AP2) protocol, and the X402 Foundation are all working on exactly this type of micropayment infrastructure.
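As a rough sketch of the record such infrastructure would need to keep per answer, with hypothetical field names and a deliberately naive equal-split weighting:

# Hypothetical attribution ledger entry for one grounded answer. The equal
# split is naive; a real system would weight by how much each source actually
# shaped the answer, and layer fraud checks on top.
from dataclasses import dataclass

@dataclass
class AttributionEntry:
    query_id: str
    source_url: str
    publisher_id: str
    contribution_weight: float   # this source's share of the answer
    micropayment: float          # weight * per-answer content budget

def build_ledger(query_id: str, sources: list[tuple[str, str]],
                 content_budget: float) -> list[AttributionEntry]:
    weight = 1.0 / len(sources)
    return [AttributionEntry(query_id, url, publisher, weight,
                             round(weight * content_budget, 4))
            for url, publisher in sources]

ledger = build_ledger("q-123",
                      [("https://pub-a.example/story", "pub-a"),
                       ("https://pub-b.example/analysis", "pub-b")],
                      content_budget=0.30)
# Two equal-weight sources: 0.15 each.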

3. Quality Filtering: A Solved Problem

How do we prevent low-quality or AI-generated spam from gaming the system to farm micropayments?

We don’t need to solve this—it’s already solved. This is a search quality problem, not an AI problem. Google, Bing, and other search engines have spent two decades building:

  • Authority and trust signals (PageRank, backlink analysis)
  • Spam detection algorithms (Panda, Penguin)
  • Content quality classifiers
  • E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) evaluation
  • Manipulation detection systems

The AI layer sits on top of an already-filtered corpus. If content is spammy enough to game micropayments, it’s already being demoted by core search quality systems and won’t be retrieved for grounding in the first place.
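A crude sketch of that layering, with invented thresholds and field names, just to show that micropayment eligibility can reuse signals search already computes:

# Grounding candidates come from an already-filtered corpus, so micropayment
# eligibility piggybacks on existing quality/spam signals. Thresholds and
# field names here are invented for illustration.
retrieved_docs = [
    {"url": "https://example.com/a", "spam_score": 0.05, "quality_score": 0.8, "manipulation_flag": False},
    {"url": "https://spam.example/b", "spam_score": 0.90, "quality_score": 0.1, "manipulation_flag": True},
]

def eligible_for_grounding(doc: dict) -> bool:
    # The same bar that search quality systems already apply.
    return (doc["spam_score"] < 0.2
            and doc["quality_score"] > 0.6
            and not doc["manipulation_flag"])

candidates = [d for d in retrieved_docs if eligible_for_grounding(d)]
# Spam that would game micropayments never reaches the payout step,
# because it never reaches the grounding step.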

4. Payment Calibration

The “comparable to an ad click” payment needs calibration:

For major publishers: Custom negotiated licensing deals (like Spotify with major labels). News Corp, Nine Entertainment, ABC, Guardian—these organizations will want structured agreements reflecting their scale and influence.

For everyone else: A tiered, transparent system based on:

  • Content quality signals
  • Domain authority
  • Query competitiveness (high-value commercial queries might have higher micropayments)
  • Attribution weight (primary source vs. supporting source)

This doesn’t need to be perfect on day one. It needs to be fair enough to be sustainable and transparent enough to be trusted.
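To illustrate how the tiered factors above might combine, here is a toy calibration function. The base rate, ranges, and multiplicative form are placeholders for market discovery, not proposals:

# Toy micropayment calibration from the tiered signals listed above.
# Base rate, ranges, and the multiplicative form are placeholders.
def micropayment(base_rate: float,
                 quality: float,             # 0..1 content quality signal
                 authority: float,           # 0..1 domain authority / trust
                 query_value: float,         # multiplier for commercial query value
                 attribution_weight: float   # primary vs. supporting source share
                 ) -> float:
    return round(base_rate * quality * authority * query_value * attribution_weight, 4)

# A primary source grounding a high-value commercial query:
micropayment(0.20, quality=0.9, authority=0.8, query_value=2.0, attribution_weight=0.6)
# → 0.1728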

The Australian Context

For Australian publishers, this is existential. Our media landscape is already concentrated, with News Corp and Nine dominating. Regional journalism is dying. The ABC is under constant budget pressure.

When international AI platforms harvest Australian content without compensation, they’re extracting value from our information ecosystem while contributing nothing back. This is particularly acute for:

  • Regional news organizations barely surviving on thin margins
  • Investigative journalism that requires significant investment
  • Specialized B2B publishers serving niche professional communities
  • Indigenous media outlets preserving and sharing culture

CAPS provides a framework where quality Australian content gets compensated regardless of traffic volume. A regional paper’s investigative report that AI uses to answer queries across the country gets paid—even if users never visit the site.

Current Momentum: The Pieces Are Moving

This isn’t just theoretical. Major infrastructure players are actively building the foundations:

Cloudflare’s Net Dollar – A micropayment system designed specifically for AI-driven internet interactions. Cloudflare processes ~20% of all web traffic; if anyone can implement universal micropayments, it’s them.

Google’s AP2 Protocol – Agents-to-Payments protocol for autonomous AI agents to transact with web services. This is Google acknowledging that the agentic web needs an economic layer.

X402 Foundation (Cloudflare + Coinbase) – Building open standards for AI-to-web payment infrastructure.

Content signals and AI policies – Cloudflare and others are developing standardized ways for publishers to signal usage preferences and pricing to AI systems.

These aren’t press releases—they’re actual technical infrastructure being deployed. The economic plumbing for CAPS is being installed right now.

What Needs to Happen Next

This is a call to the technical community, policy makers, and industry leaders:

For ML Researchers and Engineers

I’m not naive enough to think I can dictate technical architecture to you. Instead, I’m posing the challenge: How do we build reliable, scalable attribution systems that enable fair compensation?

Open questions:

  • Can we develop metadata layers that track content contribution without generate-then-ground approaches?
  • What novel architectures might enable training-data attribution?
  • How do we measure “contribution weight” fairly across multiple sources?
  • What anti-gaming mechanisms prevent micropayment fraud at scale?

For AI Platforms

Google, Microsoft, Anthropic, OpenAI—you have the power to implement this. You also have the motivation: regulatory pressure is mounting, litigation is expensive, and killing your content sources is unsustainable.

Early movers get goodwill and competitive advantage. Late movers get regulated.

For Publishers

Engage constructively. Yes, traffic is declining. Yes, AI feels threatening. But blocking AI is choosing irrelevance. CAPS provides a framework where your quality content generates sustainable revenue regardless of traffic patterns.

For Policy Makers

This needs guardrails and standards, but not heavy-handed regulation that stifles innovation. Focus on:

  • Transparency in attribution and payment
  • Anti-monopoly provisions (preventing only major publishers from accessing micropayments)
  • Quality standards (ensuring payments go to legitimate content creators)
  • Privacy protections (micropayments shouldn’t require invasive tracking)

Taking a Leadership Position

I’m putting this framework forward not because I think I can single-handedly move the needle—I’m a realist about my influence—but because the Australian SEO and digital publishing community needs a coherent technical vision to advocate for.

Too many agencies are peddling hot air and fluff about “AI disruption” without proposing actual solutions. Too many thought leaders are either doom-posting about AI destroying the web or blindly cheerleading innovation without acknowledging the economic damage.

CAPS is a concrete proposal. It’s technically feasible with current infrastructure. It aligns incentives. It preserves quality content creation while embracing AI’s benefits.

The conversation needs to move from “AI is ruining publishing” to “here’s how we build a sustainable AI-era content ecosystem.”

This is that conversation starter.


Addressing the Hard Questions

Nick LeRoy raised several sharp questions that deserve direct answers. Some of these have clear solutions within the CAPS framework; others remain genuinely open problems.


“How would this work for govt properties, edus?”

Government and educational institutions present a unique case because they’re not profit-motivated content creators, yet they produce enormous volumes of high-quality, authoritative content that AI systems heavily rely on.

The short answer: They don’t need to participate in micropayments the same way commercial publishers do.

Government content (.gov) is publicly funded and exists to serve citizens. If AI systems ground answers in ABS statistics, legislation.gov.au, or health.gov.au content, there’s no obvious injustice in that usage: taxpayers already paid for it. The same logic applies to much educational content, particularly from public universities.

However, there’s a subtler issue: crowding out. If AI preferentially cites free government/edu content because there’s no micropayment cost, it creates a structural disadvantage for commercial publishers covering the same topics. A health journalism outlet investigating Medicare fraud competes against Medicare.gov for AI citations—but only one has bills to pay.

Potential solutions:

  • Exempt .gov/.edu from micropayments entirely (they’re already funded)
  • Weight commercial sources appropriately in retrieval to prevent free-content crowding
  • Allow institutions to opt-in if they want micropayments directed to specific programs (e.g., university research funding)

This is a policy design question more than a technical one. The framework accommodates it; the specifics require deliberation.


“I assume it benefits the established. If I start a new site, what threshold do I have to meet to start getting paid?”

This is a legitimate concern, and I won’t pretend CAPS magically solves the cold-start problem for new publishers.

The honest answer: Yes, established publishers have structural advantages. They have existing authority signals, backlink profiles, and brand recognition that make their content more likely to be retrieved and cited. A brand-new site won’t get micropayments on day one because it won’t be grounded in AI answers on day one.

But here’s the thing: this is already true in traditional SEO. New sites struggle to rank. New sites struggle to get traffic. New sites struggle to monetize. CAPS doesn’t make this worse; it just transplants the existing competitive dynamics into a new economic model.

What CAPS does differently:

The threshold isn’t traffic-based; it’s citation-based. A new site with 100 monthly visitors that publishes genuinely novel, expert content could earn micropayments if AI systems retrieve and ground in that content. You don’t need massive scale; you need to be selected.

This actually favours niche expertise over content farms. A small site run by a genuine subject matter expert producing content that can’t be found elsewhere has a path to monetization that doesn’t require competing for head terms against major publishers.

What thresholds might look like:

  • Minimum domain authority/trust signals (spam prevention)
  • Content quality classifiers passing a baseline
  • Human review for new entrants above a certain payment threshold
  • Gradual trust-building similar to Google’s sandbox period

The goal is preventing micropayment fraud while not creating insurmountable barriers. This is solvable—ad networks already do similar onboarding for new publishers.


“If it’s all about quality > quantity, ads would have a much higher CVR but cost infinitely more?”

Let’s do the math.

Current model (simplified):

  • 1,000 searches → 100 ad clicks → 2 conversions
  • Advertiser pays $5 CPC = $500 total spend
  • $500 spend / 2 conversions = $250 CPA

CAPS model (hypothetical):

  • 1,000 AI queries → 50 see contextual ads → 10 conversions
  • Fewer impressions, but each is hyper-targeted at peak intent
  • Same $500 budget, but 10 conversions = $50 CPA

The question isn’t whether prices go up or down; it’s whether value per dollar improves. If advertisers get 5x the conversions for the same spend, they’ll pay more per interaction but less per outcome.
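Purely to make that arithmetic explicit, using the same illustrative numbers:

# CPA = total spend / conversions, with the illustrative numbers above.
spend = 500                            # $5 CPC x 100 clicks, or the same CAPS budget
current_cpa = spend / 2                # 2 conversions  → $250 per acquisition
caps_cpa = spend / 10                  # 10 conversions → $50 per acquisition
improvement = current_cpa / caps_cpa   # 5x better value per dollar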

Does CPC go up? Probably yes, significantly.

Does CPA go down? That’s the bet. If AI-contextual ads convert at dramatically higher rates (because they’re matched to explicit intent, not inferred intent), the economics can work even with fewer total interactions.

This is Google’s implicit thesis with AI Mode: compress the funnel, increase conversion rate, maintain or grow advertiser value even with fewer clicks.


“Does this establish a price floor based on ‘good’ value?”

Nick’s example: a $2k mattress company might pay $500 for visibility across hyper-focused prompts (assuming 4:1 ROAS target). Or maybe $50/click to offset reduced volume.

Both models could coexist:

Impression/visibility pricing makes sense for brand-building and consideration-stage queries. “Best mattress for back pain” might show a contextual ad from Koala or Sleeping Duck: not expecting immediate conversion, but establishing presence at a high-intent moment.

CPC/CPA pricing makes sense for transaction-ready queries. “Buy Emma mattress king size Sydney” is a different beast, and once AI agents start completing transactions (via AP2), this becomes a transaction fee, not an ad fee.

The price floor question is real. If an AI answer satisfies a query with no ad shown, there’s no revenue. If the ad is shown but not clicked, current CPC models generate nothing. This pushes toward:

  • CPM-style pricing (pay for visibility, not clicks)
  • Hybrid models (base fee for inclusion, bonus for conversion)
  • Transaction fees for agent-completed purchases

Google will experiment. The market will find equilibrium. But Nick’s instinct is right: the pricing model must evolve beyond pure CPC.
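A toy sketch of the hybrid idea (a small fee for inclusion, a larger bonus only on conversion), with entirely invented numbers:

# Toy hybrid pricing: a base fee for being included/cited in the AI answer,
# plus a bonus only if the interaction converts. Numbers are invented.
def advertiser_charge(included: bool, converted: bool,
                      inclusion_fee: float = 0.50,
                      conversion_bonus: float = 20.00) -> float:
    if not included:
        return 0.0
    return inclusion_fee + (conversion_bonus if converted else 0.0)

advertiser_charge(included=True, converted=False)  # 0.5  (visibility only)
advertiser_charge(included=True, converted=True)   # 20.5 (visibility + outcome)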


“Google won’t sacrifice revenue for a ‘better experience.’ Ads are their golden goose.”

Correct. But here’s the reframe: Google doesn’t need to sacrifice revenue; they need to maintain it through a different mechanism.

Google’s ad revenue comes from being the intent layer between users and outcomes. That position doesn’t disappear in an agentic world; it transforms. Instead of:

User searches → sees ads → clicks → converts on merchant site

It becomes:

User asks AI → AI recommends/selects → AI completes transaction → Google takes cut

The golden goose isn’t “ads” specifically; it’s monetizing intent. AI Mode and agentic search are just new surfaces for the same underlying business: connecting demand to supply and extracting margin.

Google’s risk isn’t that they’ll sacrifice revenue for experience. It’s that they’ll fail to build the new monetization layer fast enough and watch OpenAI/Anthropic/others capture that value instead.


“Visibility/reporting will be key to them pivoting to any new versions of ads”

Absolutely. This is non-negotiable for advertiser adoption.

Advertisers need:

  • Prompt-level analytics (what queries triggered their ad/citation)
  • Attribution clarity (did the AI recommend us? Were we selected?)
  • Conversion tracking (from AI impression to transaction, even if multi-step)
  • Competitive visibility (who else was cited/recommended)

Think of it as Google Search Console for LLM visibility, which is precisely what several companies (including us at DEJAN) are building. Google will need to provide this natively for AI Mode, or third-party tools will fill the gap.

Without this transparency, advertisers can’t optimise. Without optimisation, they can’t justify spend. Without spend, the economic model collapses.

This is solvable. The data exists; it’s a product and API question, not a fundamental barrier.
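For illustration only, a prompt-level report entry might carry fields like these. The schema is hypothetical, not any existing product’s API:

# Hypothetical prompt-level visibility record; not a real product schema.
report_entry = {
    "prompt": "best mattress for back pain",
    "cited": True,                       # were we retrieved and grounded?
    "recommended": False,                # did the AI actively recommend us?
    "ad_shown": True,
    "competitors_cited": ["brand-a.example", "brand-b.example"],
    "conversion": {"occurred": False, "steps": 0},
}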


“Maybe this is where LLMs have an advantage—no baseline returns, can start cheap like early Facebook ads?”

Nick is onto something important here.

Google’s constraint: They’re defending $200B+ in annual ad revenue. Every product decision is evaluated against “does this cannibalise search ads?” This creates institutional paralysis. AI Mode should cannibalise traditional search (that’s the point), but the internal politics of protecting the cash cow slow everything down.

OpenAI/Anthropic’s advantage: No legacy revenue to protect. They can price micropayments and ads aggressively to capture market share. If Claude becomes the default interface for a generation of users, Anthropic can monetise later at scale. The Facebook playbook: grow first, monetise second.

But there’s a counterargument:

Google has the grounding infrastructure (Search), the advertiser relationships (millions of active accounts), the payment rails (Google Ads billing), and the trust signals (two decades of spam fighting). Standing up a competing ad ecosystem from scratch is brutally hard; ask anyone who’s tried.

OpenAI’s deal with Microsoft helps, but they’re still building the commercial infrastructure Google has in production.

My bet: The next 2-3 years are a window where OpenAI/Anthropic can establish themselves as alternatives to Google’s ad ecosystem. If Google executes well on AI Mode monetisation, that window closes. If they fumble it (which is possible; they’re a big company with legacy constraints), the insurgents capture real share.

The pricing advantage is real but temporary. Use it or lose it.


What Remains Open

Some questions don’t have clear answers yet:

  1. Exact micropayment rates. What’s fair? What’s sustainable? This needs market discovery.
  2. International complexity. CAPS assumes a relatively unified system. Reality involves different copyright regimes, privacy laws, and payment infrastructures across jurisdictions.
  3. Gaming and fraud at scale. Search quality filters help, but determined adversaries will find exploits. Ongoing enforcement is required.
  4. User acceptance of AI-embedded ads. Will users tolerate ads in AI answers? Or will they flee to ad-free alternatives?
  5. The transition period. How do we get from here to CAPS? Who moves first? What’s the adoption curve?

Comments

2 responses to “CAPS: A Content Attribution Payment Scheme for the AI Era”

  1. Brilliant article Dan! Thanks for putting in the effort to put forward some real solutions for how to compensate content creators in the age of AI.
    Very inspiring to see. Keep up the great work.

    1. Thank you Marc! I hope to see this happen in practice in the near future.
