24 Comments
AE Snow:

Great piece, and it reminds me throughout of Seeing Like a State - which I think is actually the more relevant frame than Hayek here. I think the argument is correct but setting too high a bar. Infinite central planning may be impossible but there is a lot of room for 10xing current central planning.

A state equipped with vastly superior legibility tools, machine vision, sensor networks, and language models could attempt sophisticated planning in domains that matter enormously for how communities develop and govern themselves, and could cause significant damage in the process.

AE Snow:

I ended up writing this thought up properly on how to apply Scott to the debate around planning and AI here for those interested: https://democraticfuturist.substack.com/p/the-pyramid-and-the-mesh?r=6a4bjp&utm_medium=ios

Hollis Robbins:

Goodness, Hayek is having a day! I just posted this yesterday, and yes. https://hollisrobbinsanecdotal.substack.com/p/the-great-syllabus-stagnation

Jon Rowlands:

The missing piece is how to price externalities: literally the things the market alone can't price. Clean air, freedom, safety, trust. This is why Amazon and Walmart are too easy a model: they're able to price everything. In the wider context it takes something that's unpopular to admit we need: bureaucrats. Specifically, people who build models to invent a price for externalities, based on things other than the market. The value of air pollution in Los Angeles, traffic congestion in New York, regulation in Dallas. Their job is to translate social priorities into prices, to bring them on board the market train. There's an argument that the proper role of regulation is to promote positive externalities and to reduce negative ones. If AI can help with anything, above what's already happening at Amazon and Walmart, it's to better understand the social choices we have, and to play them out for us. This is fraught; for example, models have completely failed to change the climate trajectory. But that's a separate problem.

Ivan Vendrov:

> Driving is mostly about seeing things and moving your body in response. The environment is physically constrained and the action space is narrow. While complicated, it is markedly less ambiguous than navigating a dense web of human intentions and social meanings. AI excels at chess, but falters in complicated social reasoning games. The market is far more like the latter.

... predictions of the form "AI excels at X, but falters at Y" have not had a great decade. Social reasoning is indeed complicated, and humans are very, very well adapted to do it, but how long do you actually think it would take the frontier labs to saturate a social-reasoning benchmark if we made a good one? The same goes for alertness, automated price discovery, etc. Tacit knowledge, in the sense of knowledge embodied in muscle memory, will take longer to automate (probably at least 10 years for the very most skilled manual work), less because the AI can't learn the relevant skills and more because we just don't have the hardware platforms.

But on to the central point: I think the arguments in the essay explain well why we won't end up in a totally centrally planned economy. However, they don't exclude society becoming dramatically more centralized than it is today (which, as I understand it, is the Brynjolfsson & Hitzig argument). Neither the human body nor an ant colony is "centrally planned" - most of the work is done locally by individual cells. But the ant colony is thousands of times less centralized in terms of information architecture. It seems clear that AI enables making the world much more like a single body and much less like the ant colony.

Quy Ma:

AI systems look neutral but they're not. Someone decided what to measure, what to include, what to leave out. The system runs on those decisions. Because the decisions are hidden inside the infrastructure, nobody questions them. The bias is already baked in and whoever sets the defaults governs the behavior. AI training data is just defaults at a deeper level.

Leon Ingelse:

Interesting story! I think there are different types of AI worth considering here. Today's mainstream AI (data-driven, probabilistic methods) is very different from the optimisation/operations-research methods that intend to translate business processes into mathematically solvable algorithms. These constraint-based methods are better suited to solving planning problems than LLMs are, although I doubt whether they could do so for entire economies. Still, I think your text fails to recognise this difference, even though the problems you describe sit in the optimisation field.

(For more info, this blog I wrote explains the difference: https://blog.dotsandlines.ai/trustworthy-ai-in-optimisation-668b3f82997b)
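The distinction is easy to see in miniature. A constraint-based planner searches a feasible region defined by hard constraints and an explicit objective, rather than predicting likely outputs from data. A deliberately tiny sketch (all numbers hypothetical, and brute-force enumeration standing in for a real LP/MIP solver):

```python
from itertools import product

# Toy planning problem (hypothetical numbers): two factories must jointly
# meet demand for one good at minimum cost, subject to capacity limits.
# Real OR methods (linear/integer programming) solve vastly larger versions
# with simplex or branch-and-bound instead of brute force.
capacity = {"A": 4, "B": 3}   # max units each factory can produce
unit_cost = {"A": 3, "B": 5}  # cost per unit at each factory
demand = 5                    # units that must be produced in total

best_plan, best_cost = None, float("inf")
for qa, qb in product(range(capacity["A"] + 1), range(capacity["B"] + 1)):
    if qa + qb < demand:      # hard constraint: meet demand
        continue
    cost = qa * unit_cost["A"] + qb * unit_cost["B"]
    if cost < best_cost:      # objective: minimize total cost
        best_plan, best_cost = (qa, qb), cost

print(best_plan, best_cost)   # -> (4, 1) 17: fill cheap factory A first
```

Even here the feasibility-and-objective structure is explicit, which is what makes such methods auditable for planning in a way probabilistic models are not; scaling that structure to an entire economy is, as the comment notes, another matter.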

Harry:

Francis Spufford wrote a novel, “Red Plenty,” about the 1950s Soviet attempt to use linear programming to run the entire economy. Good read.

Valentyna Musina:

This post resonates with two things currently on my mind: (1) the “who” behind AI today, and how the creator’s moral compass and understanding of the complexity of the context change the direction of the technology and the nuance it considers as variables (as the recent Anthropic vs. Pentagon case reveals, not all AI creators make the same AI models); and (2) whether the author really matters (or is dead, in the case of AI as everywhere), and whether what really leads the markets is an attention economy without any clear vision from anyone, just decision-making that is emotional in the mass and informed in the minority.

And maybe a bit of a plan is not that bad. In the end, China has not lost faith in its version of the planned economy; it just let it breathe a little with the perks of capitalism. Indeed, having experienced in my own life the impact of the Soviet version of planning, I’m deeply convinced it was a disaster on too many levels.

Ethan Kreul:

It’s a thoughtful contribution to the ongoing debate about AI coordination systems vs. price systems. The piece succeeds in reframing the issue from computation vs. markets to calculation vs. discovery.

Daryl Anderson:

This substack post ( https://observertheory.substack.com/p/why-socialism-fails-a-computational ) makes a highly resonant claim (even harkening back to Hayek and the old-time Soviets).

Senchal errs, in my opinion, in claiming "socialism" as the culprit rather than simple "central planning" (CP) - and I took him to task in some substack replies (https://substack.com/home/post/p-188716149) - but he gets points for rooting his argument deep in the forest of abstraction with his claim that CP is effectively impossible, via an argument from NP-hard mathematics. Interesting stuff, though wrongheaded in its embrace of the "fallacy of misplaced concreteness."

His most recent post, projecting a substitution of "compute" for "currency" (https://observertheory.substack.com/p/when-compute-becomes-currency-ethics), will further tangle up the conversation.

Jesse Parent:

Hmmm, interesting take on the good regulator theorem (#1)... I would say the matter is less "ha ha, commies were magical thinkers and so was (1950s) cybernetics," and more that the ballgame changes should the regulator be able to update itself in a meaningful fashion. Also, the derivatives of cybernetics since then would support other points being made.

Re #2, yes... if you can eliminate the meaningful contribution of the "poor" and "working class", why bother? Robot ~ slave, and that's what will be competed against. The real big bad in all of this is clearly that.

I think the piece might have been a little stronger had it actually emphasized such truths a bit more, because the problems they present will be weightier than my opening or closing paragraphs suggest. But I think that's sort of... you know. Slipping the grain of truth into what the audience is already agreeing on, no? So perhaps, well done, then.

Re #3... embodied intelligence and modern phenomenology are working on those problems; the quote is true, but, as with #4, it seems not a very strong argument that that point will hold indefinitely or should be seen as never materializing. That isn't the same matter as "can LLMs lead to consciousness," for example, nor, from years ago, 'Deep Learning Will Save Us'. Re 'which means the system can never fully escape human judgment': agreed, but many, many people are putting their life's work into making non-human judgement more relevant. You even have the complete other side, where folks are betting on how to shape a Worthy Successor to homo sapiens intelligence altogether!

Quoted passages referenced above:

1. "It was that the economy a planner would need to model is constitutively shaped by the expectations and interpretive frameworks of the people who participate in it. Those frameworks shift in response to the very act of observation and intervention. There is no fixed economy waiting to be measured. The system that a planner would need to model is the same system that the plan would destroy."

2. "But as we saw earlier, the AI alternative to traditional markets increasingly appeals to those who don’t hope for the final triumph of the proletariat."

3. "This points to something general. A system that lacks our direct perceptual access to the world can detect statistical regularities within whatever framework it’s been given. It won’t recognize the moment when existing categories fail to capture what’s really happening, because its measure of “what matters” is determined by the framework. A reward function can’t evaluate its own weights. Only someone embedded in the situation – who personally feels the weight of competing, immeasurable human risks, obligations, and norms – is capable of making that judgment."

4. "We make direct perceptual contact with the world in a way that AI can’t. We determine the very concepts needed to carve up the world intelligibly, invent new ones constantly, and make normative and aesthetic judgments all throughout. If the data is always post-conceptual, then every training pipeline inherits the tacit knowledge of whoever decided what to measure, which means the system can never fully escape human judgment, even in principle."

Lynne Kiesling:

Hayek certainly is having a day! In addition to these great essays from Alex and Hollis, I posted on my newly published research paper on the market epistemology of AI.

https://knowledgeproblem.substack.com/p/hayek-had-one-agent-in-mind-now-there

Ryan Baker:

I'd want to know your opinion on an agent based economy, like discussed in another Cosmos piece: https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale

It's all well and good to critique central planning; it's certainly had its bad days. But if this is a subtle endorsement of distributed agents as our economic agents, you need to confront the problems there: https://substack.com/@norabble/note/c-224144571.

I think it's somewhat naive to look at AI agents as an extension of the person and assume this is going to work out well. You also have to consider what powers they might give over markets, and really think about whether distributing that power leads to efficient and equitable outcomes.

I don't think I've made up my mind yet, but I do recoil a bit from the thought that the distributed model has already won, just because it fit the last era better.

Nicholas Gruen:

In my reading of Hayek, I have always found him curiously bound by the original context in which he developed his thinking, which was from the mid-30s on. He was participating in debates in which the economy was understood in a pretty static way. As a result, he developed his epistemological arguments and his arguments about prices as providing information in a way that was very close to neoclassical comparative statics.

In his 1945 paper, which you quote, he talks about the way a market will transmit new information into prices, with most people in the market being unaware of the new information being encoded into changed prices and yet responding appropriately to what the new prices say about changing relative scarcities. Though his framework is fairly well suited to talking about the more organic innovation that happens in markets - with improvements to products and the emergence of new products, for instance - he doesn't really focus on it. He will occasionally, opportunistically, point out that intervention in prices can also compromise the kind of organic innovation that is bred in markets. But he never really fully orients his thinking around that idea.

Likewise, he's happy to refer to the ways in which markets and the pricing within them are incentive-compatible, and also occasionally suggests that they will encourage innovation. But he mostly focuses on comparative-static adjustments - price rises, demand falls. Though his framework is broad enough to accommodate it, he doesn't really focus on the kind of innovation that Schumpeter does, perhaps because it clouds his arguments about pricing mediating relative scarcity through time.

EcoCommerce Marketplace:

This explains the epistemological case well, but stops one step short of the full diagnosis.

The problem isn't just that AI can't replace markets. It's that AI can't tell the difference between a market context, a hierarchy context, and a network context. It applies a single coordination logic (rule-following, compliance-seeking) to situations that require entrepreneurial alertness, or relational trust, or both simultaneously. That's not AI replacing the price system. That's AI governing the spaces between institutions with the wrong logic entirely.

Hayek, Ostrom, and Ouchi together tell us that H, M, and N (hierarchy, market, and network) are irreducibly distinct knowledge-generation systems. No single logic substitutes for the others. Current AI is coordination-blind to that distinction.

After two decades of working on wicked issues, I developed an architecture that addresses this - and then wrapped it in LLMs last year. It's called GADGET: a governance framework that diagnoses which coordination logic a given context requires, detects misalignment, and constrains AI agents to operate within governance-appropriate boundaries.

What I concluded is that it is not complexity that generates wicked issues (Ostrom proved that over and over again), but misalignment of governing logics. And governance logics are not just institutional; they are biological in origin and nature.

As Wheeler stated more than a century ago, "Ants, like humans, can create civilizations without the use of reason." That does not mean we can't use reason, because without it, civilizations reach a reboot phase that is not pretty.

Nicholas Gruen:

More generally, can you expand further on these two paragraphs?

"What I concluded is that it is not complexity that generates wicked issues (Ostrom proved that over and over again), but misalignment of governing logics. And governance logics are not just institutional; they are biological in origin and nature.

As Wheeler stated more than a century ago, "Ants, like humans, can create civilizations without the use of reason." That does not mean we can't use reason, because without it, civilizations reach a reboot phase that is not pretty."

Nicholas Gruen:

Thanks for the comment on Ostrom and complexity. Can you elaborate on it, please?

Not all, but a surprisingly large share of talk about complexity is arm-waving, or to put it more pointedly — bullshit. Srsly.

https://nicholasgruen.substack.com/i/161545855/complexity-cliches-and-bullshit

EcoCommerce Marketplace:

Complexity arises out of simple components that can generate and support multivariant relationships. In my 2014 book, "Shared Governance for Sustainable Working Landscapes," I proposed three principal causes of wicked issues: 1) systems whose outputs/outcomes vary in scope, scale, and time; 2) organizations that value and account for those outputs/outcomes using different means and criteria; and 3) organizations that apply those different means and criteria using different governing logics.

What I believe I learned recently is that it is really just #3 that can create wicked issues, but I will keep 1 and 2 for now.

Governing logics are the "how" of entities' problem solving: hierarchy, market, and network logics. One can trace these back to humans' "modes of interaction" - authoritarianism, individualism, and egalitarianism, respectively - and follow them forward to the government, corporate, and NGO sectors, respectively. The sectors are the vehicles for the governing logic of groups of people.

I see complexity as a real thing, beyond something just being complicated. People lean on complexity as the reason our institutions cannot solve today's issues. And that is true, to a degree, but more so it is the institutions' inability to manage complexity.

Ostrom discovered dozens of cultures living within complex environments (#1) where people had different uses and values (#2), and those cultures did not spiral into a wicked problem - but rather managed a wicked solution. They did this by aligning the governing logics of H, M, and N so that each logic, and those who adopted it, would not disrupt the other logics and the people working on the other parts of the wicked whole.

Two recent articles address this in more detail. The second, "Connecting AI Governance, Ant Colonies, and European History," is the one people might have more issues with.

https://medium.com/@ecocommerce/misaligned-governance-not-complexity-creates-wicked-problems-874e58a10f28

https://www.linkedin.com/pulse/connecting-ai-governance-ant-colonies-european-history-gieseke-ucwlc/?trackingId=g8RWrmzTJJZ%2Fygy25Y%2FaSQ%3D%3D

Nicholas Gruen:

Thanks, Timothy. As you no doubt know, Medium has taken your work and put it behind a paywall. I never understood their business model. I guess it was an early version of Substack's, but as someone who's blogged for more than two decades, I was never going to be persuaded to pay for access to its network :)

I found the other one compelling.

EcoCommerce Marketplace:

I usually click the open access button - I must have missed that again.