Faster Horses
Intelligence flows from systems, not singletons
“If I asked people what they wanted, they would have said faster horses.” The line, widely circulated but likely apocryphal and attributed to Henry Ford, stresses the distance between our ability to picture the future and our ability to make it real. It reminds us that technologies loosen the constraints that shaped past expectations, and that deeper shifts usually deliver changes in kind as well as in magnitude.
“Faster horses” is shorthand for folk logic that seems bulletproof at the time but quaint in hindsight. Television as radio with pictures, film as photographed theater, early mobile phones as portable landlines, and the internet as a digital library were all faster horses of a kind. They tell us that big shifts rarely play well with existing categories, and that new language, heuristics, or classifications are often needed to make sense of them.
Today, many of those wondering about the downstream impact of thinking machines are on the lookout for AI that can function as a “remote drop-in worker”: a system that, in essence, replaces a human employee by doing roughly the same things under the same conditions. Here, the future appears as a more seamless version of the present rather than something that dramatically changes the shape of work.
The idea flows from the observation that the majority of jobs in the information economy revolve around making computers do what we want. Word processing, desk research, data analysis, creating presentations, running marketing campaigns, and many other tasks are all the end product of keyboard strokes and cursor movements. This is why some long-time AI watchers reckon Claude Opus 4.5, especially its instantiation within Claude Code, can reasonably be described as an early realization of Artificial General Intelligence (AGI). The same might eventually be true of humanoid robots, especially given that they can slot into existing infrastructure without costly redesign, but our focus here is solely on knowledge work.
As others have pointed out, the response to a common AGI litmus test (a system that can outperform humans in most economically valuable work) turns on what we categorize as “economically valuable work.” If we define that as “stuff done on a computer,” then it’s plausible that one day soon the models will cross that threshold (if Claude Opus 4.5 hasn’t already). And if a model can be said to be generally capable, then the remote drop-in worker shouldn’t be too far behind.
Whether a single model can do a job in isolation is a useful question to ask, but it doesn’t tell us much about how such systems, interacting with many people and agents of their own, might rearrange patterns of coordination and the shared assumptions that guide them. In some ways, the conservative bet is that the drop-in worker is a stronger account of our present than of our future. Technologies that matter rarely honor the roles we assign them. If the future is anything like the past, the drop-in worker may prove to be a faster horse: a story that made sense before the true nature of the agent economy became visible.
The Wisdom of the Crowd
Traditional accounts of AGI development often describe the emergence of an isolated system capable of completing the vast majority of cognitive tasks, sometimes referred to as a “singleton.” An alternative scenario, now seriously considered by AI developers, imagines that capabilities may be manifested through the coordination of “sub-AGI individual agents” with complementary skills and affordances. This scenario concerns an ecology of semi-specialized agents whose combined behavior outstrips anything they could do alone (and tallies up to something that we could describe as AGI at a high enough level of abstraction).
You might have a code agent that builds, a negotiation agent that handles scheduling or purchasing, and a compliance agent that checks your work. On top of these sits a manager that breaks goals into subtasks and shunts each to the right agent for the job. We state an objective, the system spins up a network of agents, they pass data between them, and a synthesis function presents the output of the collective for review.
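To make the pattern concrete, here is a minimal Python sketch of that routing layer. Everything in it is an illustrative assumption rather than any particular framework’s API: the specialist agents are plain functions standing in for model calls, and the manager’s subtask list, which a planner model would generate in a real system, is hardcoded.

```python
from typing import Callable

# Hypothetical specialists: plain functions standing in for model or tool calls.
def code_agent(task: str) -> str:
    return f"[code] built {task}"

def negotiation_agent(task: str) -> str:
    return f"[negotiation] scheduled {task}"

def compliance_agent(task: str) -> str:
    return f"[compliance] reviewed {task}"

# The routing table the manager consults when delegating subtasks.
ROUTES: dict[str, Callable[[str], str]] = {
    "build": code_agent,
    "schedule": negotiation_agent,
    "review": compliance_agent,
}

def manager(objective: str) -> str:
    # In a real system a planner model would decompose the objective;
    # the subtasks are hardcoded here for illustration.
    subtasks = [
        ("build", "the billing endpoint"),
        ("schedule", "the launch review"),
        ("review", "the data retention policy"),
    ]
    results = [ROUTES[verb](task) for verb, task in subtasks]
    # The synthesis step: present the collective's output for human review.
    return "\n".join([f"objective: {objective}"] + results)

print(manager("launch usage-based billing"))
```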
Imagine launching a new software feature. The drop-in worker functions like a high-speed freelancer insofar as it writes code, pauses to check for bugs, and writes documentation sequentially within a single stream. It is a linear acceleration of a human workflow. The agent ecology, however, behaves more like a stack of mini-organizations. When the objective is stated, an “architect agent” drafts the structure while a “red team agent” simultaneously attacks that design to find security flaws before a line of code is written. A “compliance agent” cross-references regional data laws in the background. These agents operate in parallel to create an adversarial loop where the output is the sum of many small interactions. The result is an ecosystem capable of the kind of concurrent processing that individual minds, biological or synthetic, may struggle to achieve by themselves.
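A rough sketch of that concurrency, using Python’s asyncio with placeholder coroutines in place of real model calls (the agent names, latencies, and outputs are invented for illustration):

```python
import asyncio

# Stand-ins for model calls; the sleeps simulate inference latency.
async def architect(spec: str) -> str:
    await asyncio.sleep(0.1)
    return f"design for {spec!r}"

async def red_team(design: asyncio.Task) -> str:
    blueprint = await design  # attack the design the moment it exists
    return f"flaws probed in {blueprint}"

async def compliance(spec: str) -> str:
    await asyncio.sleep(0.1)
    return f"regional data laws checked for {spec!r}"

async def launch(spec: str) -> list[str]:
    design = asyncio.create_task(architect(spec))
    # Red team and compliance run concurrently with (and against) the
    # architect, rather than waiting on a single sequential stream.
    return await asyncio.gather(design, red_team(design), compliance(spec))

print(asyncio.run(launch("usage-based billing")))
```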
But this is only a partial picture. The share of human-agent and agent-agent interactions in the economy will increase over time, with agents negotiating prices, placing orders with one another, coordinating supply and demand, and even rating each other to assign trustworthiness scores.
In some ways, the patchwork AGI thesis is another episode in a long-running story about how intelligence behaves at scale. Markets outperform planners because knowledge never exists in concentrated or integrated form, but as incomplete and contradictory perspectives dispersed across individuals. Hayek reminds us that “planning” happens all over the place through individual agents, which is why he distinguishes it from “economic planning” that deals with state-backed forms of enterprise management. The agent economy doesn’t represent a toss-up between planning and ad-hoc action but rather revives an older question about whether planning ought to emanate from within or from without.
Aristotle raised the question that still haunts proponents of collective intelligence: can the many, combining their partial virtues, outperform the excellent few? In Book Three of Politics, he writes:
“For it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively, than those who are so, just as public dinners to which many contribute are better than those supplied at one man’s cost; for where there are many, each individual, it may be argued, has some portion of virtue and wisdom, and when they have come together, just as the multitude becomes a single man with many feet and many hands and many senses, so also it becomes one personality as regards the moral and intellectual faculties. This is why the general public is a better judge of the works of music and those of the poets, because different men can judge a different part of the performance, and all of them all of it.”
For Aristotle, groups become smarter when they successfully combine different aspects of competence into a single body. Consider a jury, where people with different experiences and biases pool their judgment to reach a fairer conclusion than any juror might in isolation. Or England’s common law, where centuries of small decisions by judges produce a legal order more adaptable than one made by decree.
The same is true of Wikipedia, peer review, or nimble companies. In each case, the quality of the outcome rests on a kind of distributed deliberation wherein perspectives clash, revise, correct, and eventually settle into a stable state. It echoes the Athenian assembly and the medieval disputatio, both of which treated the good we call judgment as the product of structured disagreement.
American writer Howard Rheingold coined the term “smart mobs” to describe groups of people who organize and coordinate quickly through mobile communication technologies like phones and the internet. The term “mob” is deliberately ambivalent: Rheingold chose it for its darker connotations, noting explicitly that mob behavior can be turned to good or ill.
Rheingold thought smart mobs worked because low-cost communication let individuals share context and act in concert without central control. These groups represented an idealized version of accelerated coordination built from minuscule signals that could be aggregated over huge numbers of agents. The mob framing reminds us that coordination capacity increases faster than deliberative capacity, and as a result, the key variable becomes governance of the communications substrate.
But mobs aren’t smart by default. A whole set of coordination problems flows from distributed decision-making, from free-riding (where individuals benefit from a group’s effort without contributing to it) to information cascades (where people copy others’ choices even when their own judgment points elsewhere). Many of us recognize these problems from spending too much time on social media: we see outrage spread through networks faster than facts, and we know how easily a crowd can be steered by sentiment rather than the hard work of judgment.
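The cascade mechanism is easy to see in a toy model. The Python sketch below is a simplified stand-in for the classic herding models from the economics literature, assuming a bare threshold rule in place of full Bayesian updating; the parameters are arbitrary:

```python
import random

def cascade(n_agents: int = 20, signal_accuracy: float = 0.7, seed: int = 0) -> list[int]:
    """Toy information cascade. The true state is 1; each agent receives a noisy
    private signal but also observes every earlier choice. Once early choices
    tilt far enough one way, later agents ignore their own signal and copy."""
    rng = random.Random(seed)
    choices: list[int] = []
    for _ in range(n_agents):
        signal = 1 if rng.random() < signal_accuracy else 0
        ones, zeros = choices.count(1), choices.count(0)
        if abs(ones - zeros) >= 2:
            # The observed history outweighs one private signal: follow the
            # herd, even if the herd happens to be wrong.
            choices.append(1 if ones > zeros else 0)
        else:
            choices.append(signal)
    return choices

print(cascade())  # after a short run-in, the choices lock onto one value
```

Run it with different seeds and the crowd occasionally locks onto the wrong answer entirely, which is the point: the same copying that speeds coordination can also freeze in an error.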
We could say that groups become dumb when they fail to properly synthesize knowledge, and they become smart when divergence is preserved and integrated. Whether or not we benefit from the wisdom of the crowd often depends on the structures that keep the mob in check. Markets do this with prices and labs with peer review. Rheingold might say that smart mobs emerge when communication structure and incentives reward decentralized coordination rather than herd behavior.
Society of Mind
In the 1980s, the AI researcher Marvin Minsky wrote about what he called the “society of mind.” What he meant was that unified intelligence is a loose federation of smaller processes, each narrow, each fallible, yet together capable of producing something that looks like coherent thought given enough altitude. For Minsky, intelligence emerges from many mindless “agents” coordinated in special ways, the mind’s power being a product of messiness, cross-connection, coordination, and resolution.
Today’s AI models demonstrate unified intelligence at two levels: as a byproduct of statistical learning and in the way models are housed within larger constellations that we refer to as “systems.” Transformers likely generalize because they compress huge corpora into representations that let them improvise solutions on the fly. Intelligence is in some sense a property of compression plus scale, an analog of Aristotle’s crowd insofar as it concerns many partial signals integrated into a single effective whole. As for the systemization of models, we can view each as a constellation of individual expert functions, such as tool use or multi-modal capabilities.
But why stop there? If we accept that AI, like all intelligence, benefits from the interactions between discrete units, it follows that its capability should also be treated as a property of a larger constellation in which many systems operate together. Variance creates productive tension as models surface alternative interpretations and explore distinct solution paths. When those paths are combined through orchestration layers, tool use, or various other kinds of agent frameworks, the result is a system that searches the problem space more effectively than a lone model. Once multi-model systems interact with one another — coordinating, passing intermediate results, or checking each other’s claims — a kind of higher-order intelligence bubbles up from the sum of interactions across several layers. More powerful models are great, but superior ecologies are better.
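One minimal version of that mutual checking is majority agreement across independent models. The sketch below is illustrative only: the three model functions are hypothetical stand-ins for calls to distinct real systems, and simple voting stands in for richer orchestration:

```python
from collections import Counter

# Hypothetical stand-ins for three distinct models answering the same question.
def model_a(question: str) -> str: return "42"
def model_b(question: str) -> str: return "42"
def model_c(question: str) -> str: return "41"

def cross_check(question: str) -> str:
    """Query several models and let agreement act as a cheap verification layer."""
    answers = [m(question) for m in (model_a, model_b, model_c)]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes < 2:
        raise ValueError(f"no consensus among models: {answers}")
    return winner  # the ecology's answer, not any single model's

print(cross_check("What is 6 * 7?"))  # -> "42"
```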
People each hold fragments of truth, most of it tacit and hard to articulate, which is why spontaneous order tends to get the better of best-laid plans. Polanyi might remind us that a drop-in worker presumes that competence is formalizable into explicit tasks and checklists. His work tells us that competence lives partly in the realm of tacit knowledge, and that “dropping in” workers will face the same context problems faced by central planners the world over.
Of course, recent progress in AI development does typically try to provide models with the context they need to work effectively (specifically through the use of reinforcement learning techniques to make models good at human work in human settings). We might also say that, even if multi-agent systems matter internally for model cohesion, their deployment could still take the form of a remote-worker analog. Things like permissions, accountability, compliance, budgeting, and change-management favor inserting agents into existing workflows as opposed to redesigning them from the ground up.
These objections are useful, but they don’t allow us to skirt the core problem with the “remote drop-in worker” metaphor: that it treats intelligence as solely a property of individuals. It presumes the unit of analysis is the solitary agent carrying out tasks one after another, when everything we know about complex work suggests otherwise. Real capability comes from the knots of relationships, feedback loops, constraints, and opportunities that bind us together: first within models, then across agent systems, then eventually across agent-agent systems. Collective intelligence is prefigured by how information moves through a system and how the residue of experience accumulates across many small decisions made by each of us.
The remote drop-in worker may prove to be a transitory moment at best and a category error at worst, one that treats AI as an incremental addition to familiar workflows rather than a force that will reshape the nature of those workflows. That is “faster horses” thinking. We’re projecting today’s limitations onto tomorrow’s world and overlooking the fact that new capabilities alter the constraints that make what happens today seem natural. More accurate accounts tend to lead somewhere else, in a form we often only recognize with the benefit of hindsight.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.