AI won’t fix central planning
Even superintelligence needs a price
In 1962, Viktor Glushkov pitched the Soviet authorities on a nationwide cybernetics network to solve the oldest problem in socialist economics: how to allocate resources without private property and market prices. Washington was alarmed enough that the CIA created a special task force.
Just as Soviet industrialization was giving way to stagnation, the first serious computers gave the planned economy a shot in the arm. The central planners believed that they’d struggled to process information fast enough, so increased computing power promised a solution.
In 1967, the Polish economist Oskar Lange described the market as “a computing device of the pre-electronic age,” suggesting we could put “simultaneous equations on an electronic computer and obtain the solution in less than half a second.” In 1971, Chile’s socialist government attempted a version of this managed via telex machine, only for it to be cut short by the coup of 1973.
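Lange’s image can be made concrete. A minimal sketch, with invented numbers, of the kind of computation he had in mind: a two-sector Leontief input-output model, where planned gross outputs must cover both intermediate use and final demand. Solving it really is trivial for a computer – which is precisely why the interesting question is whether the coefficients, not the arithmetic, are obtainable.

```python
# Toy planner's problem (all numbers invented for illustration):
# gross output x must satisfy x = A x + d, i.e. (I - A) x = d,
# where A[i][j] is the amount of good j consumed to make one unit
# of good i, and d is final consumer demand.

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - c * e) / det

A = [[0.2, 0.3],   # input coefficients for sector 1
     [0.4, 0.1]]   # input coefficients for sector 2
demand = [100.0, 50.0]   # final demand for each sector

# Solve (I - A) x = d for the required gross outputs.
x1, x2 = solve_2x2(1 - A[0][0], -A[0][1],
                   -A[1][0], 1 - A[1][1],
                   demand[0], demand[1])
# x1 = 175.0, x2 ≈ 133.33: each sector must produce more than final
# demand, because production consumes intermediate inputs.
```

The solve is instantaneous; Hayek’s objection, developed below, is that the matrix A is a stand-in for knowledge that no planner can actually collect.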
Thirty years after the fall of the Soviet Union, enthusiasm for these ideas has returned. In 2016, Jack Ma predicted that “the planned economy will become increasingly big … because with access to all kinds of data, we may be able to find the invisible hand of the market.” Marxist economists have written with surprising enthusiasm about Walmart and Amazon, viewing them as technologically enabled planned economies.
The greatest excitement has been reserved for advanced AI. Zvi Mowshowitz has argued that AI “can embody the preferences and knowledge of many or even all humans, in a way an individual human or group of humans never could.” Meanwhile, Erik Brynjolfsson and Zoë Hitzig have made the case that, by combining immense processing capacity with the ability to codify tacit knowledge through computer vision, language, and sensor data, AI could erode the traditional advantages of decentralization.
The optimists attack the case for traditional markets and decentralization from multiple directions: AI can match or exceed the information-processing advantages of markets, capture the knowledge embedded in human judgment, simulate competition without running it, assess outcomes that markets track only through crude proxies, or simply replace the human participants whose limitations created the problem in the first place.
Despite their diversity, many of these arguments fall into the same traps. They routinely misstate the case for decentralization, flatten the distinction between different kinds of knowledge, and treat every unsolved problem as an engineering detail.
The pursuit of knowledge
The most influential case against central planning was made on epistemological grounds by Friedrich Hayek. Oskar Lange and many of his successors read him as making a simple point about transmission: the useful knowledge in any economy is spread across millions of minds, so no central authority can collect it fast enough to act on it.
This is dangerously wrong. In “The Use of Knowledge in Society,” Hayek distinguishes between two different types of knowledge. The first is scientific or theoretical knowledge, which can be stated in general rules or principles. In theory, this kind of knowledge could be effectively concentrated in a single mind or system – arguably, LLMs already do this very well.
The second type is what he calls “knowledge of the particular circumstances of time and place.” This is knowledge that is embedded in practice, judgment, and context. For example, a farmer may know that a specific field drains poorly in its southeast corner or a sales rep may notice a change in body language with a long-standing client. This kind of knowledge is derived from the experience of a specific context, rather than from theoretical training or by performing a regression analysis. The person who has this knowledge will frequently struggle to explain how they acquired it.
This concept of tacit knowledge was expressed in more detail by Michael Polanyi, a chemist turned philosopher of science. Polanyi famously formulated tacit knowledge as the idea that “we can know more than we can tell.” There are lots of things we can do that we would struggle to articulate. The theoretical account of riding a bike – adjusting angular momentum through micro-corrections in steering – bears little relationship to the knowledge that you possess and exercise when doing it.
Polanyi’s view is that tacit knowledge is not just knowledge that happens to be unstated, but instead has a distinctive architecture.
Imagine a bank manager in a meeting with a local business owner who is asking for an extension to his loan. In the course of this interaction, she senses that something isn’t right.
The bank manager is picking up on a series of cues, such as the business owner’s posture, the rhythm of his speech, or the differences from their past interactions. The bank manager experiences all of this as a single act of perception, rather than a series of data points. If we tried to unpick this knowledge and asked the bank manager to list out all the data points she picked up on, it would be akin to asking a pianist to state the precise angle of each finger while she’s playing.
Of course, the planner at this point could argue that even if the bank manager can’t articulate these cues, we could train a model across video, audio, and biometrics to detect the same patterns she’s detecting. The model doesn’t need to have the same experience as her, it just needs to produce the same outputs. For example, driving was long held up as an example of inalienable tacit knowledge; Polanyi himself argued that “the skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar.” Despite this, we now have highly performant self-driving cars.
This only tells us so much. Driving is mostly about seeing things and moving your body in response. The environment is physically constrained and the action space is narrow. While complicated, it is markedly less ambiguous than navigating a dense web of human intentions and social meanings. AI excels at chess, but falters in complicated social reasoning games. The market is far more like the latter.
The distinction here goes beyond difficulty. A car navigates spatial relationships and physical dynamics, whereas what the bank manager does is categorically different: she is interpreting, drawing on a framework of meaning built up through years of situated experience that organizes her perception before any calculation begins. That framework is the structure through which the interaction becomes intelligible to her at all. More processing power has no bearing on a gap like this.
By being situated in both the conversation and the wider social context, our bank manager also has a few other advantages over an impersonal system. For a start, she has skin in the game: if she gets this call wrong, her reputation and business could suffer, and in extreme cases her physical safety could be at risk. Secondly, unlike a system passively observing a bunch of cues, she is an active participant in the interaction; her tone and her way of formulating questions shape it. Finally, she’s situated in the community: she knows both the social norms and the realities of running a business in that area.
The snake devours its tail
Eventually, we get stuck in a loop. Even if we could train an AI on these interactions, what would the training data consist of? Someone has to decide to record certain things and not others. For example, we may include the transcript of the conversation, some financial metrics, and the outcome of the loan, but not the handshake or some of the pauses between words. The data is already a selective compression of the interaction, shaped by prior human decisions about what matters.
The tacit knowledge that made those framing decisions is invisible to the system trained on their outputs. No dataset encounters raw reality. It arrives pre-shaped by decisions about what to measure and what to discard. When you tell the system to look for indicators of trustworthiness, you’ve already decided what the relevant features are, which is precisely the judgment you were hoping the system would replicate.
We make direct perceptual contact with the world in a way that AI can’t. We determine the very concepts needed to carve up the world intelligibly, invent new ones constantly, and make normative and aesthetic judgments all throughout. If the data is always post-conceptual, then every training pipeline inherits the tacit knowledge of whoever decided what to measure, which means the system can never fully escape human judgment, even in principle.
The optimist could object here. Perhaps a reinforcement learning function could create something functionally equivalent to skin in the game. A dynamic AI system interacting with clients would also develop its own tone and method of probing. Big enough systems don’t simply compute over pre-specified data – they learn from experience and may develop something akin to internal world models. They may absorb tacit knowledge wholesale without anyone specifying what to look for.
Even granting all of that, the question is whether it can replicate the model of engagement that produces our bank manager’s particular sensitivity. She must sit inside the tension between reputational risk, relationship preservation, institutional obligation, and commercial judgment without the ability to collapse them into a single metric. This is what makes her attention so acute – she can’t keep everyone happy.
This points to something general. A system that lacks our direct perceptual access to the world can detect statistical regularities within whatever framework it’s been given. It won’t recognize the moment when existing categories fail to capture what’s really happening, because its measure of “what matters” is determined by the framework. A reward function can’t evaluate its own weights. Only someone embedded in the situation – who personally feels the weight of competing, immeasurable human risks, obligations, and norms – is capable of making that judgment.
Information versus action
The economist Israel Kirzner tells the story of a mother struggling with a teething child. The mother has tried everything she can to soothe or pacify the child, but to no avail. A travelling salesman knocks on her door and offers her a colorful toy at a price of five dollars. Her child is delighted and calms down. On closer inspection, she is dismayed to realize that the toy is nothing more than a collection of marbles in a clear plastic container – something that she could have assembled in her kitchen for less than a dollar. She “could kick herself for not having done so.”
On one level, this is understandable. The mother knew she had marbles in her kitchen and had the wherewithal to put them in a container, but this knowledge “did not inspire her to action.” In Kirzner’s framing, she had information-knowledge (all the facts), but not action-knowledge (the alertness to act on the information).
Even if you build a system that can collect every production function and every consumer preference in the economy, all you have done is assemble a comprehensive stock of information-knowledge. Alertness is hard to program, because it isn’t a process of inference from known data. A successful entrepreneur acts speculatively in response to a suspected opportunity, which, by definition, hasn’t already been recognized.
Hayek went further. He observed that much of the knowledge that matters for economic coordination doesn’t exist at all until the competitive process generates it.
If you are an entrepreneur who tries a new approach, whether you succeed or fail, you have created new knowledge. As a result of the risk you took, people know whether a specific combination of resources, aimed at a set of customers, at a particular price point is viable. This knowledge didn’t exist somewhere in the ether waiting to be discovered by a more powerful algorithm. Instead, it was brought into existence by speculation in conditions of genuine uncertainty. The firm that succeeds reveals the value of its approach retroactively.
The price signals that result are not the transmissions of pre-existing data, but the outputs of a distinct process. The indeterminacy runs deeper still: economic agents are not carrying fixed utility functions. Participation in exchange changes the participants – human interactions themselves generate new kinds of choices. The system a planner would need to model doesn’t hold still, because the process of market coordination is partly constitutive of the preferences and knowledge it produces.
Akio Morita, the co-founder of Sony, launched the Walkman in 1979 against the objections of the rest of the company. Morita had observed that people took large stereos to the beach and listened to music in their cars, and sensed a market opportunity. Sony’s own market research and consumer surveys consistently suggested that there was no consumer demand for a tape player that couldn’t record, no matter how portable it was.
What Morita did was not pattern recognition on a richer dataset. He changed the conceptual space, reconceiving what a music device could be and who it could be for. No amount of data about what consumers said they wanted would have produced the Walkman, because the preference for it was partly a consequence of the product’s existence. The market revealed his conjecture to have value, and it now retrospectively seems obvious. But obviousness after the fact is the signature of knowledge that could only have been created through action.
The launch of the Walkman then had a series of downstream consequences – for competitors, component manufacturers, and music sales alike. The relationship between music and everyday life changed, with enduring social and economic consequences.
If it ain’t broke
Even if many of the foundational technological challenges could be solved by a future system, we would also need to believe that it would be worth the risk. An entrepreneur who bets wrong loses his own capital. A society that dismantles its price system has made an irreversible collective wager. We would need to have confidence that this future model would outperform the market order – an order that, given basic institutions, no one has to design.
For the traditional Marxist, the case is straightforward. If technology can finally solve the calculation problem, it vindicates the claim that capitalism contains the seeds of its own succession. But as we saw earlier, the AI alternative to traditional markets increasingly appeals to those who don’t hope for the final triumph of the proletariat.
Oliver Klingefjord and Joe Edelman from the Meaning Alignment Institute argue that advanced AI systems could correct a number of shortcomings in current markets. They believe that markets systematically contract on proxy metrics such as hours, subscriptions, and engagement, rather than on the outcomes actually delivered. This is partly because assessing outcomes directly would be prohibitively expensive across millions of consumers, but also because there is an asymmetry of power between suppliers and consumers: big suppliers can write “take it or leave it” contracts and have huge information advantages over those they contract with.
Klingefjord and Edelman argue that replacing markets with AI could collapse these measurement and bargaining costs, making it feasible to pay suppliers for delivered benefit via competitive, voluntary AI intermediaries that pool consumers, assess outcomes qualitatively, and negotiate enterprise-level deals.
This is much more sophisticated than technosocialism, but runs into some of the same problems.
This approach maintains a price system, but changes what the prices track. The entire arrangement, however, relies on an intermediary being able to assess whether a good outcome was delivered. Unlike market prices, which emerge from entrepreneurial bids made under genuine uncertainty and at personal risk, these assessed prices would reflect a system’s operationalized definition of human benefit. While the user might set the guardrails, the system has to turn them into assessable criteria. In essence, it’s a system’s approximation of a person’s approximation of what constitutes human flourishing.
This would also result in an enormous amount of discretionary judgment embedded in an infrastructure layer that most people would never inspect. While you could try to mitigate this by having a world of competing AI intermediaries, it’s hard to see users choosing between rival theories of their own good – as operationalized by AI systems that they can’t design – as an obvious improvement on choosing between rival products in a traditional market.
Magical thinking
It is, of course, possible to argue that these objections could all be overcome. Maybe we will build systems that can collect all dispersed and local knowledge, model genuine alertness, simulate exchange, and anticipate the outputs of the price discovery process without running it. It may then lead to more efficient resource allocation. But we haven’t built these systems, and nothing in the current trajectory suggests we are close. The burden of proof lies with those who believe otherwise.
The case for planning, by necessity, assumes away all real-world constraints while simultaneously reversing the burden of proof. In response to arguments about the importance of markets, it hypothesizes a system that by stipulation overcomes any individual objection and then challenges opponents to prove that it’s impossible.
Thought experiments that grant one or two premises can be genuinely useful when thinking about advanced technology, but with each additional hypothetical, the value diminishes. Any theory of how the world could work becomes plausible once you assume the existence of an omniscient machine god.
In the end, the CIA didn’t need to worry about Soviet cybernetics. Glushkov’s proposed cybernetics system, budgeted to cost the equivalent of over one trillion dollars in today’s money, never saw the light of day. The Soviet authorities, unconvinced by his argument that the system would pay for itself several times over, balked at the price tag. Cybernetics would not get to play its role in the inevitable triumph of the proletariat.
It seemed that even the Soviets had lost faith in the planned economy. On this, if little else, they were right, even if it was not for reasons that they fully understood. The fundamental obstacle was never processing power or data collection. It was that the economy a planner would need to model is constitutively shaped by the expectations and interpretive frameworks of the people who participate in it. Those frameworks shift in response to the very act of observation and intervention. There is no fixed economy waiting to be measured.
The system that a planner would need to model is the same system that the plan would destroy.