An Architecture of Participation for AI?
About six weeks ago, I sent an email to Satya Nadella complaining about the monolithic winner-takes-all architecture that Silicon Valley seems to envision for AI, contrasting it with “the architecture of participation” that had driven previous technology revolutions, most notably the internet and open source software. I suspected that Satya might be sympathetic because of past conversations we’d had when his book Hit Refresh was published in 2017.
I made the case that we need an architecture for the AI industry that enables cooperating AIs, that doesn’t produce a winner-takes-all market, and that doesn’t turn existing companies in every industry into the colonial domains of extractive AI conquerors, which seems to be the Silicon Valley vision.
Little did I know that Microsoft already had something in the works that was a powerful demonstration of what I was hoping for. It’s called NLWeb (Natural Language Web), and it’s being announced today. Satya offered O’Reilly the chance to be part of the rollout, and we jumped at it.
My ideas are rooted in a notion about how technology markets evolve. We have lived through three eras in computing. Each began with distributed innovation, went through a period of fierce competition, and ended with monopolistic gatekeepers. In the first age (mainframes), it was IBM; in the second (PCs), Microsoft; and in the third (internet and mobile), the oligopoly of Google, Amazon, Meta, and Apple.
The mistake everyone makes is rushing to crown the new monopolist when a disruptive market is still a wide-open field. When the personal computer challenged IBM’s hardware-based monopoly, the race was to become the dominant personal computer hardware company. Microsoft won because it realized that software, not hardware, was the new source of competitive advantage.
The story repeated itself at the beginning of the internet era. Marc Andreessen’s Netscape sought to replace Microsoft as the dominant software platform, except for the internet rather than the PC. AOL realized that content and community, not software, were going to be the source of competitive advantage on the internet, but it made the same mistake of assuming an end game of consolidated monopoly rather than embracing the early stage of distributed innovation.
So here we are at the beginning of the fourth age, the age of AI, and once again everyone is rushing to crown the new king. So much of the chatter is about whether OpenAI or one of its rivals will be the next Google, when it looks to me as though they are more likely the next Netscape or the next AOL. DeepSeek has thrown a bomb into the coronation parade, but we haven’t yet fully grasped the depth of the reset or conceptualized what comes next. That is typically figured out through a period of distributed innovation.
The term “the architecture of participation” originally came to me as an explanation of why Unix had succeeded as a collaborative project despite its proprietary license while other projects failed despite having open source licenses. Unix was designed as a small operating system kernel supporting layers of utilities and applications that could come from anyone, as long as they followed the same rules. Complex behaviors could be assembled by passing information between small programs using standard data formats. It was a protocol-centric view of how complex software systems should be built, and how they could evolve collaboratively. The internet was developed as a similarly distributed, protocol-based system.
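To make that concrete, here is a minimal sketch in Python of the Unix idea: small single-purpose programs composed through pipes, with plain text as the standard data format, so any program that follows the convention can join the pipeline. The log file name and search term are hypothetical.

```python
import subprocess

# Compose small single-purpose programs through pipes, the Unix way: each
# stage reads text on stdin and writes text on stdout. (The file name and
# filter string are hypothetical.)
cat = subprocess.Popen(["cat", "access.log"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "ERROR"], stdin=cat.stdout, stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout, stdout=subprocess.PIPE)

# Close upstream handles so SIGPIPE propagates if a downstream stage exits.
for proc in (cat, grep, sort):
    proc.stdout.close()

output, _ = uniq.communicate()
print(output.decode())
```

None of these four programs knows anything about the others; the shared text-over-pipes protocol is what lets them cooperate.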
That concept ran through my web advocacy in the early ’90s, my open source advocacy in the late ’90s, and Web 2.0 in the aughts. Participatory markets are innovative markets; prematurely consolidated markets, not so much. Barriers to entry in the early PC market were very low and entrepreneurship high. Ditto for the Web, for open source software, and for Web 2.0. Not so for late-stage Silicon Valley, fixated on premature monopolization via “blitzscaling” (think Uber, Lyft, and WeWork, and now OpenAI and Anthropic). It’s become a kind of central planning: a small cadre of deep-pocketed investors picks the winners early on and tries to drown out competition with massive amounts of capital rather than permitting the experimentation and competition that lead to the discovery of true product-market fit.
And I don’t think we have that product-market fit for AI yet. Product-market fit isn’t just getting lots of users. It’s also finding business models that pay the costs of those services, and that create value for more than the centralized platform. As Bill Gates famously told Chamath Palihapitiya, who was then running the nascent (and ultimately failed) Facebook developer platform, “This isn’t a platform. A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.”
To be clear, that is not just value to end users. It’s value to developers and entrepreneurs. And that means the opportunity to profit from their innovations, not to have that value immediately harvested by a dominant gatekeeper. We may expect that in the later, enshittified stage of the market, when it is ripe for disruption, but we don’t want to see it early on!
That’s why I’ve been rooting for something different: a world where specialized content providers can build AI interfaces to their own content rather than having it sucked up by AI model builders who offer services based on it to their own users.
As far back as our 2017 conversation, Satya had referred to AI as “the third runtime.” That is, Windows was the first mass-market software runtime; the web was the second. (Arguably, mobile was the third, but for this purpose let’s consider it an extension of the second.) What might that third runtime look like? A big centralized platform? I hope not. Companies such as Salesforce and Bret Taylor’s Sierra are betting on agents that are frontends to companies, their services, and their business processes, in the same way that their websites or mobile apps are today. Others are betting on client-side agents that access remote sites, often by calling APIs or even by performing the equivalent of screen scraping.
Anthropic’s Model Context Protocol (MCP), an open standard for connecting AI agents and assistants to data sources, was the first step toward a protocol-centric vision of cooperating AIs, and it has generated a lot of well-deserved enthusiasm. Google’s A2A (Agent2Agent) is a futuristic vision of how AI agents might cooperate. This is all going to take years to get right.
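As a taste of what protocol-centric means in practice, here is a minimal sketch of an MCP server built with the official Python SDK’s FastMCP class. The catalog-search tool and its toy data are hypothetical, purely for illustration.

```python
# Minimal MCP server sketch using the official Python SDK (the "mcp" package).
# The search tool and its in-memory catalog are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("catalog")

@mcp.tool()
def search_catalog(query: str) -> list[str]:
    """Return catalog titles matching the query."""
    catalog = ["Designing Data-Intensive Applications", "Fluent Python"]
    return [title for title in catalog if query.lower() in title.lower()]

if __name__ == "__main__":
    mcp.run()  # by default, serves the tool over stdio to any MCP client
```

The point is the same as with Unix pipes: any MCP-aware assistant can discover and call that tool without custom integration code, because both sides speak the shared protocol.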
But all companies need at least a start on an AI frontend today. There’s a fabulous line from C. S. Lewis’s novel Till We Have Faces: “We cannot see the gods face to face until we have faces.” Right now, some companies are able to offer an AI face to their users, but most are not. NLWeb is a chance for every company to have an AI interface (or simply “face”) not just for its human users but for any bot that chooses to visit.
NLWeb is fully compatible with MCP but offers something much simpler and, for existing websites, much more immediately actionable: a straightforward mechanism for adding AI search and other services to an existing web frontend. We put together our demo AI search frontend for O’Reilly in a few days. We’ll be rolling it out to the public soon.
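For a sense of what an interaction with such a natural-language frontend might look like from the client side, here is a hedged sketch: a plain HTTP call to a site’s query endpoint. The host, path, parameter name, and response shape are my illustrative assumptions, not the published spec; consult the NLWeb documentation for the actual interface.

```python
# Hypothetical client call to a site's natural-language search endpoint.
# Host, path, and parameter name are illustrative assumptions.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"query": "books about AI agents"})
with urllib.request.urlopen(f"https://example.com/ask?{params}") as resp:
    results = json.load(resp)  # assumed: a JSON list of structured result items

print(json.dumps(results, indent=2))
```

If every site exposes an endpoint like this, the AI face of a business stays under that business’s control, while still being legible to any agent that comes calling.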