In 1778, Adam Smith moved into Panmure House in Edinburgh to answer his last great question: what should endure of his life’s work?
He spent his final twelve years in the dwelling, a modest home located in the belly of Edinburgh’s Canongate district. There he completed the definitive editions of his masterworks, The Theory of Moral Sentiments and The Wealth of Nations.
By then, the young moral philosopher who had lectured in Glasgow had become an elder statesman of the Enlightenment. He was part economist, part public official, and part salon host for Edinburgh’s great and good. Amid this vibrant intellectual life at Panmure, Smith became the curator of his own legacy.
He carefully chose which works would represent his life’s thought, discarding some manuscripts while perfecting others. Entire projects may never have seen the light of day, but what he kept became the definitive expression of his ideas for posterity. Today, we are reckoning with our own posterity. On the eve of a technological revolution, it’s time to ask what will endure of the moral foundations that made his vision of human society possible.
Towards AI deference
In The Theory of Moral Sentiments Smith identifies sympathy, propriety, and resentment as the basic elements of moral judgment. His other great work, The Wealth of Nations, shows how markets emerge from this moral psychology by transforming individual self-interest into complex social coordination through division of labor, trade, and commercial exchange.
Smith wanted both stories told, because each without the other is incomplete. Moral development without the broad horizons of commerce manifests as tribalism, while markets without moral foundations become systems of cold calculation. Together, they are complete: moral development creates the social bonds that enable beneficial coordination, while social coordination expands our moral horizons.
It’s an important lesson to remember as we create AI systems that will reshape how moral development happens and how society coordinates. Unlike technologies that transform specific sectors—like commerce, governance, or culture—artificial intelligence reshapes all three at once. AI already mediates 20% of waking human life. It is transforming how we recover, transmit, and discover knowledge, and has unprecedented power to shape and nudge what we think and feel. In the extreme, the technology may function as an “auto-complete” for life.
Such an outcome is the end point of “AI deference,” a pattern that is beginning to emerge in which people outsource their own judgment to an AI system instead of making decisions themselves. Take the driver who blindly follows Google Maps, even when it sends them down obviously impractical or dangerous routes. Or students who paste prompts into ChatGPT and hand in the outputs as their own. Or even last year’s Claude Boys phenomenon, when kids decided to outsource their entire lives to Anthropic’s AI model.
It’s easy to write these off as digital-age absurdities, a generational failure to develop independent judgment. But serious thinkers defend far more extensive forms of AI deference on deep philosophical grounds. When human judgment appears systematically unreliable, algorithmic guidance starts to look not just convenient but morally necessary. From Plato’s philosopher-kings to Bentham’s hedonic calculus, there is a tradition of arguing that rule by the wiser or more objective is not only permissible but morally obligatory. Many contemporary philosophers and technologists see large-scale algorithmic guidance as a natural extension of this lineage. If the expertise truly exists, then it would seem immoral not to utilize it.
This argument draws on what’s called the “outside view,” the practice of making decisions by consulting broad patterns, base rates, or expert assessments, rather than relying solely on one’s own experience or intuitions. Humans are fallible and biased reasoners; if you can set aside your personal judgments, you can remove this source of error and become less wrong.
This approach works in many domains. Engineers use historical failure rates to design safer systems, and forecasters ground their predictions in the record of past events. Looking outward to the record of relevant cases often beats relying on local knowledge alone. Smith knew the power of broad patterns: markets coordinate countless fragments of dispersed knowledge into useful order. But he also knew that moral judgment is different because it requires the imaginative work of the agent, not just the pattern of the crowd.
Some extend this reasoning to morality. If human judgment is prone to bias and distortion, why not let a system with greater reach and reasoning capacity decide what is right? An AI can integrate different forms of knowledge, model complex interactions beyond human cognitive limits, and apply consistent reasoning without fatigue or emotional distortion.
The moral analogue of the outside view aims for impartiality. One’s own interests should count for no more than those of others, across places, times, and even species. The most moral agent, in this frame, is the one most willing to subordinate the local and the specific to the global and the abstract.
This represents a fundamental shift from how philosophers have traditionally approached moral impartiality. Thinkers from Kant to Rawls explored frameworks asking us to imagine standpoints beyond our immediate view, but even in these exercises, the individual remained the agent of moral reasoning. The perspective was simulated by the person whose choice was at stake.
But AI deference is different. Here, the standpoint is not imagined but instantiated in an external system, which delivers a judgment already formed. The person’s role shifts from being the agent of moral reasoning to receiving and potentially acting on the system’s recommendation.
If you accept an externalized moral standpoint—and pair it with the belief that the world should be optimized by AI—a challenge to individual judgment follows. It is not enough that AI be accurate; if it can reliably outperform human deliberation on the metrics that matter morally, then AI deference may be seen as not only rational but ethically required.
This case for AI deference is intellectually formidable. It draws on legitimate concerns about human fallibility and real philosophical traditions about impartial reasoning. Humans systematically neglect scale, privileging identifiable victims over statistical lives and discounting future generations. We fail to satisfy our own stated preferences and struggle to coordinate on challenges like global pandemics that require collective action. If algorithmic systems can better navigate these problems, refusing to defer starts to look like negligence. But Smith’s analysis of moral psychology reveals fundamental problems with this framework.
For those who think humans neglect scale, Smith might say that starting from particulars is a feature, not a bug. For those who believe humans are morally confused, his work on habit formation shows why struggle and uncertainty are essential for living the good life. In response to the idea that humans fail to satisfy preferences, Smith’s account shows that moral work cannot be done by anyone other than us. And for thinkers who worry that humans can’t effectively coordinate, his writing reminds us that imposed order stymies, rather than supports, attempts at organic coordination. In what follows, I wrestle with each of these ideas before showing why they matter for work, family, and education.
Spectators, impartial and artificial
When humans make moral judgments, we instinctively imagine how others would see our actions. Instead of asking “do I think what I did was justified?” we ask “would an impartial observer think it was justified?” This imagined observer—what Smith termed “the impartial spectator”—emerges from our relationships with others. It serves as our mechanism for stepping outside ourselves to evaluate our actions.
Each time we step outside ourselves to imagine how others would view our actions, we strengthen our ability to see beyond our immediate interests and emotions. Through repeated practice, this becomes a habit of mind, an internal compass that guides us even in novel situations. Crucially, Smith insisted this process had to remain internal. It is something we do ourselves. We develop judgment by engaging with others’ perspectives, but the act of judgment itself remains ours.
AI deference corrupts this process by relocating moral judgment outside the agent. Instead of asking “what would others think if they saw me do this?” we ask “what does ChatGPT say I should do?” This transforms the spectator from an internalized evaluative mechanism into an external system we consult for answers.
What was once an imaginative exercise of autonomy becomes a heteronomous command. AI can never be the spectator Smith wanted because the spectator’s essential work happens through our own effort to step outside ourselves and imagine different perspectives. When we outsource this process, we lose the developmental work that makes moral judgment possible.
The more knowledgeable and authoritative AI systems appear, the more we’re tempted to defer to them. It only takes the semblance of wisdom to unlock our deference — and when these systems flatter us, presenting their guidance in ways that make us feel understood or validated, the temptation becomes almost irresistible.
When AI systems replace our internal evaluative processes, we abandon our own source of constraint and moderation in favor of algorithmic guidance. This risks producing people who can receive moral judgments but can no longer generate them.
The easy way out
Moral development comes from wrestling with passions, moderating them, and learning through that process. AI deference halts that development, like trying to become stronger by having someone else go to the gym for you.
Smith, following Aristotle, understood moral virtue not as knowledge we can simply acquire, but as dispositions we must cultivate through practice. We become just by doing just acts, and temperate by doing temperate acts. The difficulty of the act is what shapes character.
The Greeks called this pathei mathos, or wisdom gained through suffering. Each time we resist pride or temper resentment, we strengthen the faculty of conscience. Repetition and struggle create not just good acts but stable dispositions, the settled qualities by which others recognize us as trustworthy, just, or generous. Over time, what begins as deliberate effort becomes spontaneous: an internal compass that guides us from within.
This is why Smith insisted that moral development must remain each person’s own work. It reflects his conviction that “every man [must] pursue his own interest his own way.” Yet today we’re tempted to abandon that pursuit. Technology supplies ready-made judgments and the possibility of an “autocomplete for life.” But deference to AI breaks the cycle of moral growth.
The mistake is to confuse the execution of a precept with morality itself. Telling someone what to do can be useful if they lack knowledge. But the aim, as Aristotle understood, is for people to internalize the voice of moral authority. It’s to develop their own capacity for self-governance rather than remain perpetually dependent on external guidance. AI as the ultimate moral expert threatens precisely this process of internalization that transforms external rules into internal wisdom.
Smithian sympathy
For Smith, sympathy always begins in the particular. We “feel with” the person in front of us, the neighbor who suffers or the friend who rejoices. Feeling with those closest to us is the first “practice ground” for judgment; it’s how we begin to learn what compassion or fairness actually mean. These encounters train our sense of morality and allow it to take root.
From this foundation, we extend our concern outward from family to community to strangers. Each layer of this widening circle is built on the habits formed in the one before, though each carries less intensity of attachment and concern. The impartial spectator doesn’t erase these attachments but asks us to project them more widely. It allows us to transform the personal into a broader moral horizon, while acknowledging that we will always feel more for those closest to us than for distant sufferers.
This development builds on our basic mammalian capacity for fellow-feeling, refined through human intelligence and social interaction. The rich flow of contextual information we get from face-to-face human interaction—the hesitation in someone’s voice, the way they avoid eye contact, how they hold their body when distressed—teaches us to read genuine distress, authentic joy, and reliable character. These are capacities that purely rational moral systems struggle to replicate or replace.
By contrast, advocates of AI deference reject this approach. They treat sympathy’s grounding in particular experience as something biased, myopic, and unfit for scale. Their answer is to override sympathy with algorithms that tally up statistical lives or calculate expected value. In their frame, privileging the identifiable victim over the distant multitude is a moral error.
But our moral life depends on our capacity to feel with others. It unfolds in particular domains like family, community, and profession. Unless moral guidance feels like it emerges from our own sympathetic understanding—unless it appeals to something within us that recognizes its rightness—even the most rationally optimized system of rules will eventually feel like tyranny. Smith understood that sustainable morality must be internalized through our own moral development, not imposed through external optimization.
Rather than viewing our obligations to parents, friends, colleagues, and neighbors as faults to be ironed out, he would have us think of them as the stuff through which moral responsibility is learned and exercised. Smith teaches us that the path to wider benevolence runs through local attachments: we care about humanity because we first cared about our people, and those loyalties train the imagination to expand further. This understanding is rooted in our evolved animal sociality, which moral idealists deny at their peril.
The digital “Man of System”
Smith warned against what he called the “man of system” who wants to “arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board.” He said the problem with this is that it forgets each piece has its own “principle of motion,” its own will and interests that can never be fully subordinated to another’s design.
In his day, systems thinkers were on the march. Mercantilists drew up schemes to channel trade through tariffs and monopolies; French physiocrats drafted grand designs to order the state around agriculture; legislators proposed plans to remodel legal and civic life. Each assumed that society could be improved if only people were arranged into the right pattern. They argued that individuals left to their own devices pull in conflicting directions. A centralized perspective (whether a ruler, a plan, or an algorithm) can align those disparate wills and prevent waste or conflict.
Implicit in this program was the belief that ordinary practices and loyalties are inefficient or unjust. Traditional courtship wastes time that algorithmic matching could optimize. Local hiring preferences ignore better candidates elsewhere. Family obligations prevent people from maximizing their economic contributions. By abstracting individuals into comparable units, planners can design arrangements that maximize welfare at scale, even if that means overriding personal preferences.
Smith thought otherwise. He argued that society cannot be engineered from above because people are agents animated by their own wants and needs. He thought that coordination doesn’t require central design, and that imposed design creates disorder by working against people’s natural inclinations. This wasn’t just a practical point about efficiency, but a moral one about human autonomy: centralized planning is both unfeasible and undesirable for the same root reasons. Markets and communities generate order spontaneously through the interplay of individual choices, while attempts to impose harmony from above usually upset this organic process.
Today we face a new “digital man of system.” It is not a single planner, but the convergent logic of optimization itself, embedded in countless AI systems that promise better outcomes or greater ease. This logic accelerates what we might call the “governmentalization” of social life. It transforms the messy, contextual work of moral judgment into clean data points that can be optimized across populations. The digital man of system is distributed across platforms and applications, but it shares the same fundamental assumption: that human autonomy is a problem to be managed rather than a capacity to be cultivated.
Smith’s warning against the man of system was not a rejection of order, but of imposed, top-down order that ignores the living autonomy of humans. He thought that societies thrive when they harness the judgments and attachments of individuals. Treating people as pieces to be arranged—whether by mercantilist schemes or by machine learning models—forgets that our “principle of motion” is what makes us free agents in the first place.
The practice ground
AI deference, taken to its extreme, corrupts the foundations of moral life as people turn to AI rather than developing their own capacity for moral judgment. But this analysis only matters if it changes how we actually live in our schools, workplaces, and families: the arenas where character is formed, where the capacity for independent judgment either develops or atrophies, and where our individual “principles of motion” most keenly manifest.
In education, many parents and educators worry that students will use AI not only in ways that compromise academic integrity, but also in place of exercising judgment in uncertain situations, navigating interpersonal conflicts, and working through moral questions.
But what if we reversed this? What if we used AI to create more time for the irreducibly human work of moral formation? AI could compress six hours of content delivery into two hours per day through personalized, adaptive instruction. This efficiency gain separates what machines excel at (information delivery, pattern recognition, and skill drilling) from what Smith understood as essentially human: the social processes through which we develop sympathetic imagination and learn to moderate our passions through encounters with others.
By using technological efficiency to create space for developing moral judgment through relationship and conversation, we can preserve the social processes through which we become our own moral critics while remaining connected to others’ moral development. This creates better people, but also better members of society. It gives us individuals capable of the trust, reciprocity, and mutual regard that are essential for beneficial coordination. Using AI in this way echoes what Aristotle called scholé (structured freedom for thinking, conversing, and contemplating) and what Wilhelm von Humboldt termed Muße (cultivated leisure essential for self-formation).
In work, Smith saw different professions as different schools of virtue. The merchant develops prudence and trustworthiness through transactions where reputation matters. The physician learns the delicate balance of intervention and restraint in the face of human vulnerability. The teacher learns to see potential in students through countless encounters that require patience, discernment, and hope.
These practices extend sympathy from personal life into civic life. A judge who wrestles with competing claims of justice and mercy becomes the kind of person capable of the impartial yet sympathetic judgment that Smith saw as essential for all social institutions.
AI can calculate recidivism risk, synthesize case histories, and optimize resource allocation, but only the judge who has wrestled with individual cases develops what Smith called “self-command,” the capacity to weigh competing moral claims when abstract principles conflict with human particularity. A judge who outsources sentencing decisions to predictive algorithms abandons the struggle that builds this judicial wisdom.
Smith envisioned autonomous agents, not simple rule-followers. As Socrates argued, the best regime would require minimal laws because its citizens would be maximally self-governing. They would be able to resolve disputes through practical wisdom rather than rigid procedures.
Without such people, we face two dangers: rigid bureaucracy, where everything must be regulated because no one can be trusted to exercise discretion; or predatory opportunism, where people exploit every loophole because they haven’t developed the capacity to consider broader consequences.
Finally, consider family life. I have a four-year-old and a six-year-old, and I find myself wrestling with choices I never expected to face as a parent. When my four-year-old asks “Why is the ocean salty?” I often turn to AI for clear, age-appropriate explanations that are frankly better than what I could provide on the spot.
But when my child asks “Why can’t I take my friend’s toy if he has two?,” something different is at stake. AI could provide a perfect explanation about property rights, empathy, and alternate theories of justice. Sometimes I’m genuinely uncertain how to explain complex moral concepts to a preschooler and a first grader at their level. But Smith would say that my fumbling, incomplete attempts to translate adult moral understanding into child-sized wisdom are exactly the work that develops both of us.
When I struggle to explain fairness to my children—drawing on my understanding not only of the concept, but of their temperament, our family’s values, and the specific situation that prompted the question—that’s the irreplaceable work of moral transmission. It’s not just that my children learn a rule, but that we each develop moral understanding through the effort of trying to bridge different perspectives. They are teaching me, as I am teaching them.
This is how moral development begins: through the lived encounter between people who care about each other trying to understand the other’s experience. Use AI for the questions where better information makes us smarter. Use it to stoke your kids’ curiosity about oceanography. But preserve the moral conversations as the irreplaceable domain where parents and children develop the capacity for judgment that Smith saw as the foundation of human flourishing.
Our choice
At Panmure House, Adam Smith decided what of his life’s work should endure. Today, we face the same choice. Not about which manuscripts to preserve, but about which human capacities to cultivate, and which forms of social coordination to preserve, as artificial intelligence reshapes how we learn, work, and live together.
The choice is not abstract. We make it in every classroom where we decide what struggles to preserve, in every workplace where we choose which decisions require human wisdom, in every conversation with our children about right and wrong.
Smith trusted that beneficial order emerges when moral agents encounter each other freely. That trust was not naive. It rested on his understanding that moral capacity develops only through the irreplaceable work of sympathetic engagement. If we trade that for the ease of algorithmic guidance—if AI becomes our “autocomplete for life”—we risk losing the legacy Smith sought to preserve.

Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.