Google Just Quietly Dropped the Biggest Shift in Technical SEO Since Structured Data
I woke up this morning to an email from François Beaufort on behalf of the Chrome WebMCP Team via the Chrome Built-in AI Early Preview Program:
“Hi Web AI enthusiasts, We have brand new early preview APIs for you to try, this time for the agentic web: WebMCP declarative API and imperative API. These APIs help agents to use websites in a more reliable and performant way, as compared to agent actuation alone.”
What is WebMCP?
WebMCP is a proposed web standard that lets websites expose structured tools for AI agents. Instead of an AI agent looking at your website, trying to figure out what buttons do and how your forms work — basically screen-scraping with intelligence — your site can just tell the agent directly: here’s what I can do, here’s how to do it, and here’s what I need from you.
Think of it like this. Right now, when an AI agent wants to book a flight on your site, it has to look at your page, work out what each field is for, figure out how your calendar picker works, and hope it gets the date format right. With WebMCP, your site just says: “I have a book_flight tool. Give me origin, destination, dates, and passenger count. Here are the formats I accept.”
There are two ways to implement it:
The Imperative API uses JavaScript. You register tools with navigator.modelContext.registerTool(), passing a name, a description, a JSON input schema, and an execute callback. It’s programmatic and flexible.
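To make that concrete, here’s roughly what registering the book_flight tool from earlier could look like. Treat this as a sketch: the registerTool() shape (name, description, input schema, execute callback) follows the description above, but the exact property names, the return shape, and the startBooking() helper are my own assumptions, not confirmed API.

```javascript
// Sketch only: registering a "book_flight" tool with the imperative API.
// Property names and return shape are assumptions based on the description above;
// startBooking() is a hypothetical helper standing in for your own booking logic.
if (navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool({
    name: "book_flight",
    description: "Book a flight for a given route, dates and passenger count.",
    inputSchema: {
      type: "object",
      properties: {
        origin: { type: "string", description: "Departure airport IATA code, e.g. LHR" },
        destination: { type: "string", description: "Arrival airport IATA code, e.g. JFK" },
        departureDate: { type: "string", description: "Departure date in YYYY-MM-DD format" },
        passengers: { type: "integer", description: "Number of passengers (1-9)" }
      },
      required: ["origin", "destination", "departureDate", "passengers"]
    },
    // The agent calls this with arguments matching the schema.
    async execute({ origin, destination, departureDate, passengers }) {
      const booking = await startBooking({ origin, destination, departureDate, passengers });
      return { result: `Booking started, reference ${booking.reference}` };
    }
  });
}
```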
The Declarative API is the one that made me sit up. You take your existing HTML forms and add a few attributes — toolname, tooldescription, toolparamdescription — and the browser automatically translates your form into a structured tool that any AI agent can understand and invoke. Your existing forms become agent-ready with minimal effort.
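Here’s the kind of markup that implies, using the attribute names above. Again, a sketch rather than copy-paste code: the field names and values are invented, and the exact attribute spellings should be checked against the preview docs.

```html
<!-- Sketch only: an ordinary search form annotated so the browser can expose it
     as a tool. Attribute names follow the ones listed above; verify the exact
     spelling against the preview docs before relying on this. -->
<form action="/search-flights" method="post"
      toolname="search_flights"
      tooldescription="Search available flights for a route, date and passenger count.">
  <input name="origin" required
         toolparamdescription="Departure airport IATA code, e.g. LHR">
  <input name="destination" required
         toolparamdescription="Arrival airport IATA code, e.g. JFK">
  <input name="date" type="date" required
         toolparamdescription="Departure date in YYYY-MM-DD format">
  <input name="passengers" type="number" min="1" max="9" value="1"
         toolparamdescription="Number of passengers">
  <button type="submit">Search flights</button>
</form>
```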
When an agent invokes a declarative tool, the browser brings the form into focus, populates the fields visually, and waits for user confirmation (unless auto-submit is enabled). There are CSS pseudo-classes (:tool-form-active) for styling the active form, events for lifecycle tracking, and a SubmitEvent.agentInvoked boolean so your code can distinguish between human and agent submissions.
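In practice that means your existing submit handlers keep working; you just get an extra signal to branch on. A minimal sketch, assuming the agentInvoked flag behaves as described above (the selector and logging are purely illustrative):

```javascript
// Sketch only: branching on whether a submission came from an agent or a human.
// event.agentInvoked is the flag described above; selector and logging are illustrative.
document.querySelector('form[toolname]')?.addEventListener('submit', (event) => {
  if (event.agentInvoked) {
    // The declarative tool was invoked by an AI agent.
    console.log('Agent-driven submission of', event.target.getAttribute('toolname'));
  } else {
    console.log('Regular human submission');
  }
});
```

On the styling side, the :tool-form-active pseudo-class mentioned above gives you a hook to visually highlight the form while an agent is driving it, with whatever treatment fits your design.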
It’s available behind a flag in Chrome 146 right now, and it’s being developed as an open web standard — not a Chrome-only feature.
You can read the full early preview documentation here: WebMCP Early Preview Documentation
It’s not tied to one model
An important detail: WebMCP is model-agnostic. It’s not a Gemini Nano thing. The demo extension actually uses Gemini 2.5 Flash via API, and the docs explicitly note it’s separate from Google’s “Gemini in Chrome” on-device features. The standard is designed so that any agent — whether it’s powered by Gemini, Claude, GPT, an open-source model, or whatever comes next — can discover and use these tools, as long as it’s operating through a browser.
This is a browser-level standard, not a model-level feature. That’s a big deal.
For a new generation of technical SEOs
Here’s where my mind really started racing.
Think about how technical SEO came to exist. Search engines needed structured signals to understand websites, so we got sitemaps, robots.txt, canonical tags, schema.org, meta descriptions. An entire discipline formed around making websites legible to crawlers. It created careers, agencies, entire companies.
WebMCP is the beginning of the same paradigm shift, but for AI agents instead of search crawlers.
Tool discoverability is the new indexing problem. The WebMCP docs actually call this out as an unsolved limitation — there’s currently no way for agents to know which sites have tools without visiting them first. The document hints that search engines or directories might fill this gap. When that discovery layer emerges, optimising for it will be a discipline in itself. You’ll want your tools found, understood, and preferred over competitors’.
Tool descriptions are the new meta descriptions. The quality of your tool’s name, description, and schema directly determines whether an agent selects it. The best practices section in the docs reads like conversion copywriting guidance — use clear verbs, explain the “why” behind options, prefer positive descriptions. Except the audience isn’t a human scanning search results. It’s a language model deciding which tool to call.
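To make that gap tangible, here’s the kind of difference the guidance is pointing at. Both strings (and the flexibleDates parameter) are invented for illustration, not taken from the docs:

```javascript
// Illustrative only: the same tool described two ways.
// Too vague for a model to reason about:
const weakDescription = "Flight form.";

// A clear verb, the inputs it expects, and the "why" behind an option:
const strongDescription =
  "Search available flights for a route and departure date. " +
  "Use IATA airport codes. Set flexibleDates to true to include nearby dates " +
  "when the exact day has no availability.";
```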
Schema design is the new structured data. Getting your JSON schemas right, choosing intuitive parameter names, returning descriptive errors so agents can self-correct — this is deeply technical work. The doc even recommends accepting raw user input rather than requiring the model to do transformations, and returning results only after the UI has updated so agents can verify execution. That level of nuance is exactly the kind of thing that separates good technical implementation from bad.
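As a sketch of what that nuance looks like in code, here’s an execute callback that returns a correctable error and only reports success once the UI has caught up. The helpers applyFilters() and waitForResultsRender() are hypothetical stand-ins for your own application logic:

```javascript
// Sketch only: descriptive, correctable errors plus returning only after the UI updates.
// applyFilters() and waitForResultsRender() are hypothetical helpers.
async function execute({ departureDate, passengers }) {
  if (!Number.isInteger(passengers) || passengers < 1 || passengers > 9) {
    // Tell the agent exactly what was wrong so it can fix its next call itself.
    return { error: `passengers must be an integer between 1 and 9; received ${passengers}.` };
  }
  await applyFilters({ departureDate, passengers });   // update the page state
  await waitForResultsRender();                        // resolve once results are visible
  return { result: `Results updated: flights for ${departureDate}, ${passengers} passenger(s).` };
}
```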
Agent conversion optimisation will be a thing. The Chrome extension already lets you test tools with an LLM to see whether it invokes them correctly with the right parameters. I can see a future where people A/B test tool descriptions, monitor agent success rates, and debug why an agent picked a competitor’s checkout tool over theirs. Agentic CRO, if you will.
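If you wanted to start measuring that today, even a crude counter gets you somewhere. Everything below beyond the agentInvoked flag, including the /analytics endpoint and payload shape, is hypothetical:

```javascript
// Sketch only: tracking agent-driven vs human submissions per tool.
// The /analytics endpoint and payload shape are hypothetical.
document.addEventListener('submit', (event) => {
  const tool = event.target.getAttribute('toolname');
  if (!tool) return; // only count forms exposed as tools
  navigator.sendBeacon('/analytics', JSON.stringify({
    event: 'tool_submission',
    tool,
    source: event.agentInvoked ? 'agent' : 'human'
  }));
});
```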
The bigger picture is this: if commerce starts flowing through agents — “book me the cheapest flight to New York next Monday” — then the websites with well-structured, reliable WebMCP tools will capture that traffic. The ones without them won’t even exist in the agent’s decision space. That’s a familiar kind of existential pressure. It’s exactly what built the SEO industry.
The generation of technical SEOs who understand both traditional web standards and the agent side of the equation: how language models parse tool definitions, how function calling works, what makes a schema easy for a model to use correctly. Those people are going to be extremely valuable.
We’re watching the early days of a new layer of the web stack. If you’re in technical SEO, start paying attention now.
