The AI Frontiers article (reproduced below) builds on an earlier Asimov Addendum article written by Tim O'Reilly, entitled "Disclosures. I do not think that word means what you think it means." I (Ilan) think it is worth first going very briefly through parts of Tim's original piece to help recap why we, at the AI Disclosures Project, care about protocols in AI markets:
At the AI Disclosures Project, we are increasingly coming to see disclosures through the lens of networking protocols and standards. Every networking protocol can also be thought of as a system of disclosures. But these disclosures are far more than just a warning label, or a mandated set of reports. They are a form of structured communication that enables independent, decentralized action.
Tim then notes why this matters for AI's "market structure":
The race for first-mover advantage by the large centralized AI providers like OpenAI, and their business model of providing AI access through metered API subscriptions, suggests a hub-and-spoke railroad design, whereas a world of open-weight AI models connected by new modes of standardized communication might look more like a road system, or today's World Wide Web.
…
If we want a world where everyone, not just AI model developers and those building on top of their centralized networks, is able to innovate and to offer their work to others without paying a tax to access centralized networks, we need a system of disclosures that enables interoperability and discovery.
In this way, protocols, as a type of disclosure, can architect healthier AI markets, not after things are already too far gone, but by working as foundational "rules of the road… that enable interoperability":
In short, we need to stop thinking of disclosures as some kind of mandated transparency that acts as an inhibition to innovation. Instead, we should understand them as an enabler. The more control rests with systems whose ownership is limited, and whose behavior is self-interested and opaque, the more permission is required to innovate. The more we have built "the rule of law" (i.e., standards) into our systems, the more distributed innovation can flourish.
Now, on to the main course!
Protocols and Power
As AI models become commodities, tech giants are racing to lock in users by hoarding their data. Open protocols, backed by open APIs, can facilitate broad data sharing and healthier AI markets.
Originally published in AI Frontiers: https://ai-frontiers.org/articles/open-protocols-prevent-ai-monopolies
Can we head off AI monopolies before they harden? As AI models become commoditized, incumbent Big Tech platforms are racing to rebuild their moats at the application layer, around context: the sticky user- and project-level data that makes AI applications genuinely useful. With the right context-aware AI applications, each additional user-chatbot conversation, file upload, or coding interaction improves results; better results attract more users; and more users mean more data. This context flywheel, a rich, structured user- and project-data layer, can drive up switching costs, creating a lock-in effect when accumulated data is trapped inside the platform.
Protocols prevent lock-in. We argue that open protocols, exemplified by Anthropic's Model Context Protocol (MCP), function as a powerful rulebook, helping to keep API-exposed context fluid and to prevent Big Tech from using data lock-in to extend their monopoly power. However, as an API wrapper, MCP can access only what a particular service (such as GitHub or Slack) happens to expose through its API.
To fully enable open, healthy, and competitive AI markets, we need complementary measures that ensure protocols can access the full spectrum of user context, including through:
- Guaranteed access, for authorized developers, to user-owned data, via open APIs at major platforms.
- Portable memory that separates a user's agentic memory from specific applications.
- Data usage guardrails governing how AI services can leverage user data.
Drawing on the example of open-banking regulation, we show that security and data standards are required for any of these proposals to be realized.
Architecting an open, interoperable AI stack through the protocol layer is about supporting broad value creation rather than value capture by a few companies. Policy efforts such as the EU's General-Purpose AI Code of Practice do matter; but, ultimately, it is software architecture that most directly and decisively shapes market outcomes.
Protocols, the shared standards that let different systems communicate with one another, function as a deeper de facto law, enabling independent, decentralized, and secure action in digital markets.

From Commoditized Models to Context-Rich Applications
From models to services. In a fevered race to blitzscale its way to AI dominance, OpenAI took an early lead. ChatGPT became the fastest-growing application in history, and it was easy to assume that the next step was to turn it into a platform. OpenAI tried to become a developer platform, first with plugins and then with its GPT Store.
But it hasn't all gone according to plan. OpenAI's models don't seem so special anymore. Open-source models like Kimi K2 (by Moonshot AI) have competitive capabilities and are free to use. Sensing the turning tide, application-specific companies like Perplexity struck gold by taking off-the-shelf models from multiple providers, scaffolding them for specific uses, and charging for premium access while avoiding vendor lock-in. Cursor, an AI-first code editor, went from $0 to over $100 million ARR in 18 months, proof that context-driven retrieval-augmented generation (RAG), with a native AI design, can beat incumbents sitting on more user data. Front-end users can now easily choose their preferred model within these applications. And, using platforms like OpenRouter, developers can even switch models dynamically in response to pricing or features.
Context rising. As foundation models commoditize, competition is shifting up the stack, to the application layer, where proprietary user and project data, referred to as context, is the secret sauce. Tech giants are racing to enclose and own this context exclusively: conversation histories, memory stores, workspaces, codebases, documents, and anything else that helps their agents predict and assist better. OpenAI, Google, and other model vendors lean on chatbot interaction logs as sources of persistent memory, while application specialists like Anysphere (which makes Cursor) and Perplexity similarly harness project and user data to boost their models' usefulness.
This forces a critical decision on the market: will AI applications develop based on closed standards that let a few gatekeepers dictate terms and extract outsized rents, or on open standards that keep context portable and architecture permissionless?
The early open web. The stakes are high. Born on open protocols, the web evolved into ecosystems of applications dominated by Amazon, Google, and Meta. At first, they beat rivals simply by working better. Google was the best at matching searchers with information and ads; Amazon surfaced the best products at low prices; and Facebook matched its users with a unique feed crafted solely from content shared by their friends and the people they chose to follow.
From innovation to extraction. But success conferred durable power that was abused. As growth slowed, the winning companies shifted from creating value to extracting it. In our past work, we described this process using the language of economic rents: winners first gain "Schumpeterian rents" for innovation, but, once markets mature, these turn into extractive rents aimed at preserving dominance and squeezing users and developers. Cory Doctorow frames this process vividly as "enshittification." AI's enshittification could involve weaker safety guardrails, higher prices, less user privacy, and lower-quality information or agentic assistance. In short, when commercial incentives go unchecked, models get tuned to serve providers' interests over those of users.
Attempts by OpenAI to build a platform by locking in developers and users resemble Facebook's failed attempt to build a platform. But, as Bill Gates is said to have commented: "This isn't a platform. A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it's a platform." That kind of platform is almost always enabled by open standards. By contrast, when a company blocks others from linking compatible products to its ecosystem, it incentivizes customers to use multiple services at once ("multi-homing"), and invites additional scrutiny from regulators.
The promise of protocols. Anthropic has taken a different route, creating MCP as an open protocol, a shared set of rules that anyone can use for free. MCP standardizes how AI applications request information and actions from external services, thereby facilitating equitable developer access to external tools and data context. This is how networked markets grow: by enabling an architecture of participation through which every new entrant makes the market more valuable for everyone else.
MCP's take-up has been explosive. Today there are well over 5,000 MCP servers that can connect to the hundreds of AI apps that have integrated MCP. Faced with rapid adoption by third-party developers, AI model developers like OpenAI and Google have announced that they too will support MCP. But these same incumbents are already pushing back.
How User Context Is Powering a New Era of Tech Monopolies (and Competition)
Context creates value. AI systems thrive on context: the user data that lets an AI system tailor its behavior to users, their requests, and the tasks at hand. When properly mined, this user data allows for personalized and efficient predictions. Think of a context-free, factory-settings AI model as a borrowed phone: the hardware is powerful, but, without your contacts, messages, location, and logins, it can't really help you.
Context has many layers: across time, as a living "state," such that each user prompt builds on what came before; and across people, as a multi-user setting (say, in a Slack thread or a collaborative document). We emphasize two layers: micro-context captures whom the system is helping right now (their preferences, language, and current query), while macro-context covers the task environment, the external frame that shapes what a smart answer looks like. This includes project files and live data feeds.
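Purely as an illustration of the distinction (these types are invented, not part of any standard), the two layers can be thought of as separate, portable records:

```python
# Illustrative only: one way to represent the two context layers described above.
from dataclasses import dataclass, field

@dataclass
class MicroContext:
    """Who the system is helping right now."""
    user_id: str
    language: str = "en"
    preferences: dict[str, str] = field(default_factory=dict)  # e.g. {"diet": "vegetarian"}
    current_query: str = ""

@dataclass
class MacroContext:
    """The task environment that frames what a good answer looks like."""
    project_files: list[str] = field(default_factory=list)  # paths or document IDs
    live_feeds: list[str] = field(default_factory=list)     # e.g. URLs of external data sources

@dataclass
class Context:
    """Both layers travel together with a request; neither needs to be trapped in one app."""
    micro: MicroContext
    macro: MacroContext
```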
Big AI companies are using context to grow their moats and lock in users through at least two approaches. The first is through product bundling. Examples include OpenAI's push into search, research, and coding (including through acquisitions); Google's threading Gemini into Workspace; and Microsoft's embedding Copilot across its 365 productivity apps. Bundling aggregates the data surface and raises switching costs.
The second is through building context as a central product feature. OpenAI now offers persistent memory that stores personal details (e.g., "has a child" or "diagnosed with ADHD") to shape future replies. Meta has announced it will collect cross-site user data to personalize its AI assistants. Google now remembers your writing style, so it can tune its AI-generated Gmail replies. By binding the app and its context to the model, companies lock in users and starve rivals. Such bundling is fertile ground for enshittification.
Importantly, this process relies on Big AI companies' gathering explicit user signals (their prompts, docs, API calls) and distilling them into an inferred, implicit preference profile that lets their model deliver more relevant, efficient predictions within each user's unique workspace.
Can Protocols Create a Level Playing Field?
The MCP pipeline. Anthropic's MCP standardizes how AI applications request tools, data, and actions from external services through a universal adapter. Instead of custom integrations for each pairing (Cursor → GitHub; Claude → Google Drive), any AI app (each one an MCP client) can use any MCP-compatible service (an MCP server), making models more interchangeable. MCP also creates an agentic interface that lets an AI agent decide what to do based on the language of tasks, not endpoints. This reduces the MxN integration tax, allows small firms to rent rather than build tooling, and weakens vertical exclusives.
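As a concrete illustration (not from the article), here is a minimal sketch of an MCP server that exposes a small slice of project context as a tool and a resource. It assumes the official MCP Python SDK (`pip install mcp`); the server name, tool, and file paths are invented for illustration, and exact SDK APIs may differ across versions.

```python
# Minimal sketch of an MCP server exposing project context, using the MCP Python SDK.
# The tool and resource below are illustrative; any MCP-compatible client could call them.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("project-context")  # server name shown to MCP clients

@mcp.tool()
def search_notes(query: str) -> list[str]:
    """Return lines from local project notes that mention the query string."""
    notes = Path("notes.md")
    if not notes.exists():
        return []
    return [line for line in notes.read_text().splitlines() if query.lower() in line.lower()]

@mcp.resource("project://readme")
def readme() -> str:
    """Expose the project README as a read-only context resource."""
    return Path("README.md").read_text()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so local MCP clients can connect
```

The point of the standard is that this one server works with any MCP client, rather than requiring a bespoke integration per application.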

Because MCP is client-agnostic, any AI app can use any external service, which in turn makes switching between models far easier: either by switching between model service providers that support MCP, or by building an independent MCP client and using any model service. When an AI app's context is portable, models become more interchangeable.
MCP is the ultimate unbundler of context: any compatible AI app can reach any service that exposes an MCP server, allowing an enriched prompt to then be sent to the model. But services must still opt in, by making their content available through APIs.
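On the client side, the same context can then be handed to whichever model the developer prefers. The sketch below (again illustrative, using the MCP Python SDK's client helpers and the hypothetical `project-context` server above) fetches context over the protocol and leaves the choice of model provider entirely open.

```python
# Illustrative MCP client: fetch context over the protocol, then send it to any model.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical filename for the server sketched earlier.
server = StdioServerParameters(command="python", args=["project_context_server.py"])

async def gather_context(query: str) -> str:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("search_notes", arguments={"query": query})
            # The result carries a list of content blocks; keep the text parts.
            return "\n".join(block.text for block in result.content if hasattr(block, "text"))

async def main() -> None:
    context = await gather_context("open protocols")
    enriched_prompt = f"Project notes:\n{context}\n\nQuestion: summarize our position."
    print(enriched_prompt)  # hand this to any model provider: the context is portable

asyncio.run(main())
```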
This shifts the competitive gravity "up the stack," away from the model developers and toward the application that develops the winning context flywheel. App-level data portability and governance (including pricing, permissioning, and any preferential access to Big Tech-controlled data sources) then becomes the new battleground.
Although MCP reduces integration friction, interoperability alone does not ensure market competition. We've seen this before: open protocols like HTTP (for web browsing) and SMTP (for email) enabled permissionless entry of new applications, yet markets still tipped. Google is now the dominant email and browser provider thanks to its superior products and cross-app integrations.
MCP's Impact on the AI Market So Far
Incumbents have rushed to insert AI into every legacy product: the quickest go-to-market strategy with the shallowest integration. Meta surfaces an assistant in nearly every app. This has only made building cleaner, MCP-enabled applications even more attractive. AI-native tools like Perplexity offer further encouragement to developers, showing that users will pick a customized experience over a retrofitted one (like the AI-layered Google Search).
Unsurprisingly, the number of new MCP servers has rocketed, as we noted earlier. However, such integrations may also be boosting usage of incumbent model developers' chatbots as they gain access to more tools. MCP's impact has been impeded by its weak security. MCP servers' external authentication and authorization remain a stubborn MxN integration problem. Moreover, for repeated production workflows, code-based frameworks may be more efficient than an inference-only workflow.
Lastly, there are early signs that AI model developers may resist interoperability more broadly, despite the increased usage it generates for them, if it ends up reinforcing the context moats of application developers. Anthropic cut off the coding application Windsurf's direct (first-party) access to its high-performing Claude models. Windsurf was growing too popular and was set to be acquired by OpenAI, a direct competitor to Anthropic.
MCP Versus Walled Gardens: The API Gatekeeping Problem
APIs are the gateway through which an MCP client (the AI application) can access third-party data and tools, thereby breaking down a platform's "walled garden" of proprietary services and datasets. But MCP can liberate context only when a third-party service offers a sufficiently rich API (and keeps it open). Because platform owners control these APIs, they have an incentive to constrain what MCP can touch, to protect their competitive edge. This manifests in two ways:
- Access risk. Services can simply shut off API access entirely, or they can greatly degrade it. Recent API paywalls and shutdowns at Reddit, Twitter, and Meta show how access can vanish overnight. Enterprise services like Salesforce (which owns Slack), Atlassian, and Notion are now limiting API access by Glean (a context platform) even as they launch competing products. Meanwhile, Slack's new API changes (supposedly to limit how LLMs can access the app) will harm developers generally.
- Context-depth risk (the "personalization gap"). Platform APIs expose posts and files but rarely the behavioral profiles that power their own personalization, leaving newcomers with a cold-start handicap. Meta, for example, personalizes its own chatbot with Facebook and Instagram history, but it offers third parties neither its Graph API to fetch that full profile nor access to detailed aspects of users' explicit and implicit (inferred) profiles. Similarly, OpenAI's "memory" feature is confined to ChatGPT. OpenAI does not allow developers to access a user's "memories" via an API, even with the user's prior consent.

To Save AI from Enshittification, Support Protocol-Level Interventions
Enhancing protocols for the AI age. To break API gatekeeping in AI markets, we need an architecture that supports user-sanctioned data portability in order to enhance third-party developer access. Here, portability means end users' ability to read and transfer their data across platforms, or to allow other developers to do so on their behalf. When portability is universal, developers can access the same context (through MCP or any API) without negotiating bespoke deals. To operationalize this approach for AI markets, we recommend:
- Open API access for major platforms. If the data comes from the user, then the user, and any developer the user authorizes, should be able to take it elsewhere. We recommend requiring that, with user consent, major platforms expose this user-owned contextual data via APIs to accredited developers at zero cost. We suggest starting with the platforms that control the most user context: "gatekeepers" designated by EU criteria, plus major AI model providers.
Such an approach could draw inspiration from the EU's open-banking legislation (specifically, its Second Payment Services Directive, or PSD2), which holds that banks must provide licensed fintechs with free, real-time access to core account data and payment capabilities. Licensed developers must first obtain a license by demonstrating proper security and data standards. Unlike banking's standardized records, though, AI context spans code repositories, conversations, behavioral patterns, and preferences. In the case of AI, markets and regulators would need to come up with a way of defining "core user context" across these diverse data types and platforms.
- Memory as a portable service. Users' AI "memory" should be accessible across platforms via APIs, with market-driven security standards embedded in the technical architecture. Such MCP servers already exist, even if AI applications don't support them.
The challenge is less technical than socio-economic. Memory is deeply personal and requires secure data handling, yet AI markets currently lack standards and accreditation in these areas.
A market-driven approach would be to embed these security standards into the technical architecture, as is done with the FDX API standard for US open banking. Such embedding allows for secure and standardized sharing of financial data between banks and third-party developers. Security requirements like end-to-end encryption, OAuth-controlled access to client-side keys, and granular topic-by-topic permissions are currently beyond MCP's scope. But FDX's secure and common API shows what is possible. (A rough sketch of what topic-scoped, portable memory could look like appears after this list.)
- Safe personalization, without data exploitation. Open APIs depend on users' trusting developers to handle shared context responsibly. Industry-specific data usage rules would also weaken incumbents' advantages while creating safer technologies. Such usage rules could start with:

- Data firewalls. We recommend protecting intimate user conversations from commercial targeting. An AI application leveraging a known user preference like "is vegetarian" for restaurant recommendations is helpful; but exploiting therapy-like conversations for manipulative advertising must be prevented.
- Erasure rights. Users should be able to review, edit, or delete their preference profiles and memories at any time. ChatGPT already largely offers this.
- Privacy defaults. For sensitive queries, we recommend that AI services default to a private mode, without long-term memory enabled or ad targeting, unless users explicitly opt in to those settings for such queries.
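To ground the portable-memory recommendation above, here is a rough, purely illustrative sketch (not from the article, and not an existing product) of a memory service exposed through MCP, with topic-by-topic permissions enforced before anything is returned to a client. Real deployments would layer OAuth, encryption, and accreditation on top, as discussed above; the names, topics, and permission model here are hypothetical.

```python
# Illustrative only: a portable "memory" MCP server with topic-scoped permissions.
# Assumes the MCP Python SDK; the data, grants, and tools are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("portable-memory")

# In practice this would be an encrypted store; a dict keeps the sketch self-contained.
MEMORIES = {
    "dietary": ["is vegetarian"],
    "work": ["prefers concise code reviews"],
    "health": ["diagnosed with ADHD"],  # sensitive: never shared without an explicit grant
}

# Topic-by-topic grants the user has made to a given client application.
GRANTS = {
    "restaurant-app": {"dietary"},
    "coding-assistant": {"work"},
}

@mcp.tool()
def read_memories(client_id: str, topic: str) -> list[str]:
    """Return memories for a topic only if the user has granted that topic to the client."""
    if topic not in GRANTS.get(client_id, set()):
        return []  # no grant, no data: sensitive topics stay private by default
    return MEMORIES.get(topic, [])

@mcp.tool()
def revoke_grant(client_id: str, topic: str) -> str:
    """Let the user revoke a client's access to a topic at any time."""
    GRANTS.get(client_id, set()).discard(topic)
    return f"Revoked '{topic}' for {client_id}"

if __name__ == "__main__":
    mcp.run()
```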
Ultimately, control over user context, not raw model power, will decide who wins the AI commercial race. Open protocols can keep context fluid between competitors, but they are only as effective as the data (and tools) that they can securely access. The choice is ours: design competitive AI markets around open principles, or accept a new generation of platform monopolies.
Thanks to Alex Komoroske, Chris Riley, David Soria Parra, Guangya Liu, Benjamin Mathes, and Andrew Trask for reading and/or commenting on this article. Any errors are ours.