9 Comments
AE Snow

Great piece, and it reminds me throughout of Seeing Like a State, which I think is actually the more relevant frame here than Hayek. I think the argument is correct but sets too high a bar. Infinite central planning may be impossible, but there is a lot of room for 10xing current central planning.

A state equipped with vastly superior legibility tools (machine vision, sensor networks, language models) could attempt sophisticated planning in domains that matter enormously for how communities develop and govern themselves, and cause significant damage in the process.

Hollis Robbins

Goodness, Hayek is having a day! I just posted this yesterday, and yes: https://hollisrobbinsanecdotal.substack.com/p/the-great-syllabus-stagnation

Ivan Vendrov

> Driving is mostly about seeing things and moving your body in response. The environment is physically constrained and the action space is narrow. While complicated, it is markedly less ambiguous than navigating a dense web of human intentions and social meanings. AI excels at chess, but falters in complicated social reasoning games. The market is far more like the latter.

... predictions of the form "AI excels at X, but falters at Y" have not had a great decade. Social reasoning is indeed complicated, and humans are very, very well adapted to do it, but how long do you actually think it would take the frontier labs to saturate a social reasoning benchmark if we made a good one? The same goes for alertness, automated price discovery, etc. Tacit knowledge, in the sense of knowledge embodied in muscle memory, will take longer to automate (probably at least 10 years for the very most skilled manual work), less because the AI can't learn the relevant skills and more because we just don't have the hardware platforms.

But on to the central point: I think the arguments in the essay explain well why we won't end up in a totally centrally planned economy. However, they don't exclude society becoming dramatically more centralized than it is today (which, as I understand it, is the Brynjolfsson & Hitzig argument). Neither the human body nor an ant colony is "centrally planned" - most of the work is done locally by individual cells. But the ant colony is thousands of times less centralized in terms of information architecture. It seems clear that AI enables making the world much more like a single body and much less like the ant colony.

Jon Rowlands

The missing piece is how to price externalities: literally the things that the market alone can't price. Clean air, freedom, safety, trust. This is why Amazon and Walmart are too easy a model: they're able to price everything. In the wider context, it takes something it's unpopular to admit we need: bureaucrats. Specifically, people who build models to invent a price for externalities based on things other than the market - the value of air pollution in Los Angeles, of traffic congestion in New York, of regulation in Dallas. Their job is to translate social priorities into prices, to bring them on board the market train.

There's an argument that the proper role of regulation is to promote positive externalities and to reduce negative ones. If AI can help with anything, above what's already happening in Amazon and Walmart, it's to better understand the social choices we have, and play them out for us. This is fraught - for example, models have completely failed to change the climate trajectory - but that's a separate problem.
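The "invent a price" step the comment describes is essentially a Pigouvian tax: charge the emitter the modeled marginal damage so the harm enters the price system. A minimal sketch, with entirely hypothetical damage figures (real ones come from the kind of models those bureaucrats build):

```python
def pigouvian_tax(marginal_damage_per_unit: float, units: float) -> float:
    """Price an unpriced harm by charging its estimated marginal damage.

    The damage estimate itself comes from a model, not the market -
    which is exactly the bureaucrat's job described above.
    """
    return marginal_damage_per_unit * units

# Hypothetical: a damage model estimates $50 of health costs per ton of NOx.
tax_per_ton = 50.0
tons_emitted = 120.0
print(pigouvian_tax(tax_per_ton, tons_emitted))  # 6000.0
```

The hard part, of course, is not the multiplication but estimating `marginal_damage_per_unit` credibly; the sketch only shows where a modeled number plugs into the price mechanism.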

Nicholas Gruen

In my reading of Hayek, I have always found him curiously bound by the original context in which he developed his thinking, which was from the mid-30s on. He was participating in debates in which the economy was understood in a pretty static way. As a result, he developed his epistemological arguments and his arguments about prices as providing information in a way that was very close to neoclassical comparative statics.

In his 1945 paper, which you quote, he talks about the way in which a market will transmit new information into prices, with most people in the market being unaware of the new information being encoded into changed prices and yet responding appropriately to what the new prices say about changing relative scarcities. Though his framework is fairly well suited to talking about the more organic innovation that happens in markets - improvements to existing products and the emergence of new products, for instance - he doesn't really focus on it. He will occasionally, opportunistically, point out that intervention in prices can also compromise the kind of organic innovation that is bred in markets. But he never really fully orients his thinking around that idea.

Likewise, he's happy to refer to the ways in which markets and the pricing within them are incentive compatible, and to occasionally suggest that they will encourage innovation. But he mostly focuses on comparative static adjustments - price rises, demand falls. Though his framework is broad enough to accommodate it, he doesn't really focus on the kind of innovation that Schumpeter does, perhaps because it clouds his arguments about pricing as the mediator of relative scarcity through time.

Quy Ma

AI systems look neutral but they're not. Someone decided what to measure, what to include, what to leave out. The system runs on those decisions. Because the decisions are hidden inside the infrastructure, nobody questions them. The bias is already baked in and whoever sets the defaults governs the behavior. AI training data is just defaults at a deeper level.

EcoCommerce Marketplace

This explains the epistemological case well, but stops one step short of the full diagnosis.

The problem isn't just that AI can't replace markets. It's that AI can't tell the difference between a market context, a hierarchy context, and a network context. It applies a single coordination logic (rule-following, compliance-seeking) to situations that require entrepreneurial alertness, or relational trust, or both simultaneously. That's not AI replacing the price system. That's AI governing the spaces between institutions with the wrong logic entirely.

Hayek, Ostrom, and Ouchi together tell us that hierarchies (H), markets (M), and networks (N) are irreducibly distinct knowledge-generation systems. No single logic substitutes for the others. Current AI is coordination-blind to that distinction.

After two decades of working on wicked issues, I developed an architecture that addresses this - and then wrapped it in LLMs last year. It's called GADGET, a governance framework that diagnoses which coordination logic a given context requires, detects misalignment, and constrains AI agents to operate within governance-appropriate boundaries.

What I concluded is that it is not complexity that generates wicked issues (Ostrom proved that over and over again), but misalignment of governing logics. And governance logics are not just institutional; they are biological in origin and nature.

As Wheeler stated more than a century ago, "Ants, like humans, can create civilizations without the use of reason." That does not mean we can't use reason, because without it, civilizations reach a reboot phase that is not pretty.

Nicholas Gruen

Thanks for the comment on Ostrom and complexity. Can you elaborate on it, please?

Not all of it, but a surprisingly large share of talk about complexity is arm-waving or, to put it more pointedly, bullshit. Srsly.

https://nicholasgruen.substack.com/i/161545855/complexity-cliches-and-bullshit

chris j handel

Nature is autogenerating healthy living inside each of us. Each desirable property of decentralization autogenerates on the surface of our own natural networks. Each barrier to decentralization dissolves at the same surface. This is the same as Hayek's observation that the knowledge that matters does not exist until a living society generates it.

Uncapturable abundance. Inviolable safety. Undiscoverable identity. Unmistakable reputation. The natural network is the form in which all specifications autogenerate simultaneously, at each coupling surface, at each cycle.

"The system that a planner would need to model is the same system that the plan would destroy." This quotation in the natural network is "The system is disabling planners and modellers while autogenerating healthy living for individuals and society."

The natural geometry of this network is Substrate Intelligence.