When Decentralization Fails
And how it succeeds

Francis Bacon’s New Atlantis describes a utopian island whose rulers rely on an institution called Salomon’s House for knowledge and discovery. This society of the learned conducted experiments under conditions of strict secrecy, deciding among themselves which of their findings they should share with the sovereign. Bacon spent much of his political career unsuccessfully trying to convince King James I to establish a real-world Salomon’s House.
Thomas Hobbes, who had served as Bacon’s secretary in the early 1620s, came to a much darker view of such institutions. In Leviathan, published a quarter of a century after his old employer’s death, he compared corporations that wielded such independent judgment to worms in the entrails of man, sapping the undivided sovereignty he thought essential to peace.
Questions about science, power, and accountability date back through centuries of political thought. Today, a handful of companies control the compute, data, and frontier models that are restructuring how billions of people interact with the world. Existing institutions are struggling to keep up. The concentration of power in AI labs is now one of the defining political questions of the decade.
Many are unhappy about this development, with groups like the AI Now Institute, the Distributed AI Research Institute (DAIR), and the Algorithmic Justice League arguing that AI development as currently constituted is irredeemably centralizing. They believe that we need to relocate authority away from corporations and regulators towards the communities most affected by these systems. When policymakers look for alternatives to the status quo of corporate self-governance and light-touch regulation, these groups are frequently in the room.
Ideas around participatory AI governance draw on a deep intellectual tradition that integrates technology and power, dating back to nineteenth century anarchism and running through twentieth century American social theory. While elements of the diagnosis have force, both the analysis and the prescriptions suffer from fatal flaws that become even more acute in the AI age.
Anarchy and utopia
Marxism dominated nineteenth and twentieth century left-wing thought, but it had a serious rival. Where the Marxists wanted workers to seize state power and wield it as an instrument of transformation before allowing it to wither away, anarchists like Pierre-Joseph Proudhon and Peter Kropotkin argued that concentrated power always reproduces itself. A workers’ state would just produce a new ruling class of bureaucrats and party officials. The record of twentieth-century communism suggests they had a point.
Their alternative was a rejection of authority in favor of decentralized, self-governing communities. They quickly identified large-scale industrial production as part of the problem: the modern factory introduced managerial hierarchy and stripped the worker of control over what he made.
The early anarchists inspired a generation of twentieth century writers who reflected on technology and dominance in greater detail. The American sociologist Lewis Mumford was one of the first people to place a theoretical frame around the idea that a technology’s underlying logic could be inherently centralizing.
Mumford distinguished between two types of technology: ‘democratic technics’ that individuals could understand, maintain, and operate, and ‘authoritarian technics’, which are large-scale systems that subordinate individuals to their operational logic. In his account, medieval and early modern economic activity had been artisanal. It had been based on wood, water, and wind power, which were embedded in local communities. The rise of the factory system destroyed this and subordinated workers to machines.
Mumford’s work became progressively more pessimistic. In The Myth of the Machine, published in the late 1960s, Mumford wrote of the ‘megamachine’ – machines that were organized social systems rather than human tools. The first example was the apparatus used by the Pharaohs to build the pyramids, which treated human labor as a raw material. Mumford believed that the modern bureaucratic state and corporation were both manifestations of the same phenomenon. He came to believe that these systems could not be dismantled from within and that the only way out was a society-wide ‘great refusal’.
Mumford’s work in part inspired Ivan Illich, an Austrian Catholic priest writing in the 1970s. Illich’s central argument was that some tools and institutions, as they scale, go from extending human capabilities to restructuring their environment so that the activity they were designed to serve relies on them. When this happens, the tool achieves a ‘radical monopoly.’ For example, the car produces sprawl and redesigns cities so that walking becomes impossible.
While you can break up a normal monopoly through anti-trust action, it is harder to make an unwalkable city walkable again. Illich argued that we had to keep the development of technology below a certain threshold. For example, in Energy and Equity, he argued that the speed of vehicles should be capped at 15 miles per hour, to prevent the bicycle – a technology that merely extended human capabilities – from being replaced by the car.
While Mumford and Illich provided much of the theory of technology, American social theorist Murray Bookchin contributed the political program. Bookchin extensively studied Ancient Greece, revolutionary Paris, Spanish anarchist collectives, and the politics of New England, and developed a model of how to organize decentralized, community-governed life.
Bookchin outlined a model where citizens’ assemblies at the municipal level make decisions by direct deliberation and majoritarian voting. They would oversee administration and economic life, such as land use and resource allocation. Municipalities would then federate into larger networks for coordination across regions, but ultimate sovereignty would always remain at local level.
Patterning justice
The main planks of the modern movement for participatory AI governance come from this lineage: the inherently centralizing force of technology, the need to keep capabilities below a certain threshold, and a federated system of governance. They began to enter academia in the 1970s and 1980s through the emerging field of Science and Technology Studies, which focused on how scientific knowledge was socially constructed. Works like Safiya Umoja Noble’s Algorithms of Oppression (2018) and Ruha Benjamin’s Race After Technology (2019) would add race as a central analytical category, which this earlier work had lacked.
The AI Now Institute, for example, has argued that: “AI as a field has been not just co-opted but constituted by the logics of a few dominant tech firms. It is no coincidence that the ‘bigger-is-better’ paradigm that dominates the field today…lines up neatly with the incentives of Big Tech.”
Activists routinely draw on three solutions.
The first is locating authority in the communities affected by AI systems, circumventing expert regulators and corporate self-governance, which they believe are biased and liable to capture. A Kenyan content moderator at an outsourcing firm used by Meta will know things about the conditions and psychological toll of their work that an AI ethics researcher in an American university can’t. Much like Bookchin studying patterns of governance around the world, DAIR conducted its Data Workers’ Inquiry, a global participatory research project inspired by Marx’s 1880 inquiry into the conditions of the French working class. DAIR paid data workers in Kenya, Syria, Venezuela, and Germany to document their own conditions as community researchers.
Second, abandoning corporate AI development altogether. In its place, we would see federated, community-owned AI infrastructure, employed to develop small, task-specific models. The infrastructure, training data, and resulting models would be treated as resources belonging to the communities that generate and are affected by them.
Finally, there are certain types of systems that should not be built under any circumstances, because they will inevitably end up being coercive. The most common examples are facial recognition and autonomous weapons.
Imagined communities
Bluntly, there is a reason that the anarchists lost the battle of ideas on the left the first time around. Many of these proposals disintegrate in the face of reality and scale.
For a start, it’s often unclear what community governance means here. Which community? Defined by whom? With what boundaries? How do we weigh up impacts on different communities? Kropotkin and Bookchin had assumed that communities would be determined by geography, which is clearly not what modern-day activists mean.
Even if we can determine who the community is, neither Bookchin nor modern-day activists developed mechanisms to prevent capture by an organized or articulate minority. The people who show up to participatory design sessions are not a random sample of affected populations – they’re usually activists, academics, and professionals in the participation industry.
The retreat to a world of ‘communities’ also faces a serious coordination challenge. A federated commons of small, task-specific AI models requires compute infrastructure, which someone has to fund and maintain. It requires interoperability standards, which someone has to set and enforce. It requires data governance rules, which someone has to adjudicate when communities disagree. It requires protection from being outcompeted and absorbed by well-resourced corporate alternatives, which means either subsidies or regulatory barriers or both.
None of this can be coordinated by voluntary mutual agreement among communities, because they don’t have the resources, the technical capacity, or the legal authority. This was ultimately the charge Marx leveled at Proudhon: you cannot decentralize production by changing who owns it if the production process itself requires centralization.
Even if we again suspend all the practical arguments and accept that this is possible, a world of small, task-specific models comes with big trade-offs. As the authors of these proposals tend to view LLMs as ‘stochastic parrots,’ they can take refuge in a Mumfordian nostalgia for the small-scale and the artisanal. Their framework doesn’t account for the possibility that bigger models could produce genuine public goods, such as AlphaFold, so they simply disregard it.
It’s not that these activist groups are interpreting the ideas of Mumford, Illich, or Bookchin incorrectly. In fact, these proposals are very faithful renditions, which serve to highlight the flaws in the originals.
All three thinkers built their ideas on a romantic anthropology that treated decentralization and face-to-face deliberation as the natural human condition from which industrial modernity is a deviation. This is why the tradition can’t navigate trade-offs.
If your starting premise is that human flourishing is what happens when the megamachine gets out of the way, you don’t need to weigh the goods it produces, because they aren’t really goods. You don’t need a theory of when expertise is legitimate, because expertise is a symptom of the problem. You don’t need mechanisms against capture, because capture is what happens under the current system and will dissolve along with it.
The intellectual apparatus is structured to avoid the questions that a functional governance regime has to answer. What looks like a program for radical democracy turns out to be a refusal of the conditions under which democratic decisions about complex systems can be made at all.
A different tradition
The failure of the participatory alternatives doesn’t force us to accept centralization passively. There is a rich alternative tradition running through Alexis de Tocqueville, the American federalists, and the work of Elinor Ostrom. Where the anarchist tradition attempts to relocate power back to the community, this contrasting liberal tradition aims to ensure that no single locus of authority – whether state, corporation, or community – acquires comprehensive jurisdiction over any domain of life.
The liberal tradition rejects the anarchist conflation of freedom with decentralization. Tocqueville’s great insight was that a democratic community can be both decentralized and unfree, because of the social pressure to conform. Instead, the distinction between a free and an unfree society is determined by the institutional life within it.
When Tocqueville visited America, he was struck by its thick layer of overlapping, competing associations. He wrote of how “Americans of all ages, all conditions, and all dispositions, constantly form associations. They have not only commercial and manufacturing companies, in which all take part, but associations of a thousand other kinds – religious, moral, serious, futile, extensive, or restricted, enormous or diminutive.”
To most anarchist thinkers, institutions like professional bodies, religious organizations, and commercial associations should be regarded with suspicion. They are hierarchical, exclusionary, and reproduce entrenched interests, and so should be replaced with direct participation.
By contrast, Tocqueville saw their partiality as their strength. Each has limited jurisdiction and none claims authority over the whole person. Citizens belong to many simultaneously, and the overlapping, competing claims create space in which individuals can appeal from one authority to another. Without this intermediary layer, the state would fill the gap. Its authority would be “absolute, minute, regular, provident, and mild,” and while it might not coerce people, it would “keep them in perpetual childhood,” as they lost their capacity to exercise free will.
Tocqueville was reporting on the constitution working as intended. The American federalists had started from the premise that concentrated power, even democratic power, tends toward its own abuses. Their response, set out most clearly in Federalist 51, was to divide authority so that no institution held comprehensive jurisdiction – “ambition must be made to counteract ambition” – and to leave space below the federal level for states, municipalities, and voluntary associations to govern their own affairs.
Averting the tragedy of the commons
Tocqueville described how such an arrangement looked from the outside. Others have since done the harder work of showing how it holds together.
Elinor Ostrom was a political economist who studied how communities around the world manage shared resources – fisheries, forests, irrigation systems, grazing lands – without either state regulation or privatization. The conventional wisdom, encapsulated in ecologist Garrett Hardin’s description of the “tragedy of the commons”, held that this couldn’t work.
Ostrom found that communities in Switzerland, Spain, Japan, the Philippines, and dozens of other settings had developed sophisticated, durable governance arrangements for shared resources, some lasting centuries. At the same time, Ostrom concluded that many commons governance arrangements failed. The examples that endured shared a number of common features.
The most important of these is that successful commons are governed by appropriators – people who use the resource and bear the consequences of how it’s governed.
The second is that the governance activity itself generates the knowledge required for governance to work. In her studies of fishing communities and irrigation systems, monitoring was done by users as a byproduct of their own activity. Conflict resolution was handled internally by people who understood the context, while violations surfaced information about the extent to which the rules were well-calibrated.
A third feature is redundancy: long-enduring commons tend to have multiple overlapping mechanisms for monitoring and enforcement, so that the failure of any one doesn’t cascade into general rule-breaking.
For AI, this framework points toward distributing governance across domain-specific intermediary institutions. For example, the relevant knowledge for governing AI in cancer medicine is held by oncologists. The oncologists don’t mentally store the information, waiting to deploy it, but generate it through their use of technology in context. A central regulator cannot have this knowledge, because it is produced through practice, not available prior to it.
In other words, governance must go where the knowledge is. This could be professional bodies, academic institutions, or open-source communities. They would each govern usage within the domain where their members have the requisite competence and stakes. Fortunately, most of these institutions already exist. They do not need to be designed from first principles or assembled by the participation industry.
While the vast majority of governance questions are deployment problems where domain-specific institutions have the advantage, a handful of bigger challenges sit above this. Problems like the security of frontier model weights or thresholds for certain dangerous capabilities sit at a higher layer that requires a degree of either state or interstate coordination.
This is why Ostrom wrote about “nested enterprises.” For example, in eastern Spain, farmers have managed shared water for close to a thousand years through a tiered structure. At the local level, irrigators’ associations allocate water within each canal and monitor compliance among their own members. Above them sits the Tribunal de las Aguas of Valencia, which has met every Thursday morning outside the Apostles’ Door of Valencia Cathedral for at least five centuries to resolve disputes between communities. The higher level didn’t replace local governance, but complemented it by providing the baseline rules that allowed it to function.
Ostrom extended this thinking to global problems late in her career. On climate change, the conventional view was that only an enforceable global treaty could work and that subnational efforts were a distraction. She argued the opposite. Letting cities, regions, nations and blocs cut emissions in parallel meant that different approaches were tried, more was learned, while the people making commitments were accountable to constituents who could see whether they kept them. Waiting for global consensus before allowing anything else to happen maximized risk for everyone.
The argument for a single framework or set of rules to break the power of the AI labs would fall into the same trap. By contrast, a polycentric world in which medical bodies, universities, industry associations, or open source communities independently develop their norms restores this redundancy.
Life on the frontier
No quantity of nested enterprises can resolve the production-side concentration of frontier AI. A handful of labs control the most powerful models, and no amount of deployment-side checks and balances can change that.
But a thick ecosystem of intermediary institutions on the deployment side creates countervailing power. The labs must satisfy many masters, rather than capturing a single regulator or, as the anarchist model would have it, being replaced by a constellation of community-run alternatives that will never match their capabilities.
These checks could vary widely. A medical licensing body can refuse to certify practices built on tools its members judge unsafe. A malpractice insurer can price risk based on whether clinicians follow professional norms. A procurement officer can refuse to buy systems that don’t meet standards set by a professional body. None of this is regulation in the conventional sense, and none of it requires a legislature to act or a global governance regime to take shape. But collectively, these institutions force the labs to satisfy demands they didn’t set and can’t unilaterally override.
Freedom has never depended on power being small. It has depended on power being answerable to more than one authority at a time, and on citizens belonging to institutions that can push back on their own terms. The task ahead of us is building that intermediary layer.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund AI prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.

