On the Noble Uses of AI
When Cognitive Offloading Elevates Us
This is a guest post by Kevin Vallier, a Professor of Philosophy at the University of Toledo and Director of Research at the Institute of American Constitutional Thought and Leadership.
Pepper was once a luxury. Medieval merchants traveled halfway around the world to bring it to European tables. Now it sits in shakers at every restaurant, free for the taking. It became cheap because people wanted to put it on everything.
Intelligence is becoming pepper.
As Sam Altman puts it, intelligence is becoming “too cheap to meter.” Open-source models already make raw cognitive power free at the point of use. Soon it will be everywhere, layered over every interaction and available for every task. Our operative intelligence, the combination of biological and artificial neural networks, will expand dramatically. But as artificial intelligence becomes cheap, biological intelligence may decline. A society can grow smarter while its members’ skills atrophy.
Most people find this prospect irredeemably bad. I think some atrophy is acceptable, and sometimes even beneficial. The thesis is more moderate than it sounds: we already accept strategic cognitive atrophy when the trade-offs are right. Calculators have atrophied our capacity for mental arithmetic, yet mathematicians today are better at mathematics than ever. They gave up tedium and gained something greater: the ability to solve harder problems.
Mathematicians now routinely use computer algebra systems (such as Mathematica) and proof assistants (such as Lean) to support mathematical reasoning. This lets them explore problems they could not approach before. The four-color theorem was proven in 1976 with computer assistance; no human could have checked all the cases by hand. This pattern is ancient. Writing weakens our memory, but no one wants to stop people from writing. The precedent shows that cognitive offloading can enhance human flourishing when properly structured.
AI will cause some cognitive atrophy. Offloading is inevitable when intelligence becomes free. The question is which forms of atrophy we should accept.
Trade-offs
John Stuart Mill and Aristotle help clarify when these trade-offs prove acceptable. Mill distinguished between higher and lower-order pleasures. By higher pleasures he wasn’t talking about “refined” tastes or elite entertainment. He meant the pleasures that engage our distinctively human capacities: reasoning, imagination, and moral feeling. As Mill put it in Utilitarianism, we “assign to the pleasures of the intellect, of the feelings and imagination, and of the moral sentiments, a much higher value as pleasures than to those of mere sensation.”
Consider someone who has experienced both the satisfaction of a delicious meal and the satisfaction of solving a difficult problem. If she is honest, she will rank the second higher: it provides a deeper satisfaction.
This is Mill’s “competent judges” test. Ask anyone who has genuinely experienced both kinds of pleasure which they would give up. They will sacrifice bodily pleasures first. The fool may be content. But Socrates, even when dissatisfied, has something the fool will never have.
Aristotle made a similar observation more than two thousand years earlier. He distinguished three lives: the life of pleasure, the life of political action, and the life of contemplation. The life of pleasure seeks bodily satisfaction. The life of action seeks honor and achievement. But the contemplative life seeks truth itself.
Contemplation is not passive. It is the most intense activity the mind can perform. It means thinking about the highest things: the structure of reality, the nature of the good, and the order of the cosmos. This activity is not labor-intensive like farming or manufacturing, and it’s not leisure either. It is the mind at full stretch.
Aristotle thought most people could not sustain a contemplative life. The demands of survival intervened. But he also thought that insofar as we can contemplate, we should. It is what we are for.
This is where AI’s promise shines through. It can handle lower-order cognitive tasks, freeing us for activities that engage our highest capacities. A researcher who once spent hours hunting for sources can now spend those hours thinking about what the sources mean. A writer who labored over formatting can focus on whether the argument is true.
Of course, people may not do this. We all know that AI can seduce us into passivity. But I would argue that it also expands the opportunities for virtue. The mind freed from drudgery really can rise; it can shift from the mechanical to the meaningful. If we apply the competent judges test, as Mill would have us do, we know that most people do not prefer the mechanical once they have moved beyond it. Many people really do have the time to cultivate virtue and to devote themselves to pursuits that satisfy them and realize their highest values. AI gives us the chance to do exactly that.
Following Mill and Aristotle, cognitive offloading is acceptable only when it preserves these capacities. It can become noble when it enhances them.
What to Offload
Consider two patients facing the same diagnosis. The first asks an AI what treatment to pursue and then simply does it. She has outsourced both the information-gathering and the judgment itself. Her capacity for medical reasoning atrophies because she never exercises it. She cannot evaluate what the AI told her. She cannot ask her doctor the right questions. When the AI errs, she has no way to catch its mistake.
The second patient uses AI differently. She conducts an extensive medical review, reading research reports that the AI surfaces. She generates a list of questions for her doctor based on what she learned, brings the reports to the appointment, and discusses them. The doctor makes the call, but with a better-informed patient in the room.
Both patients offloaded cognition. But the second preserved her deliberative capacity. She did not hunt for the research herself (there’s no free cognitive lunch), but she did something harder: evaluate it. Her speed at reading dense medical literature may atrophy. But her ability to weigh trade-offs, to question authority, and to integrate information grows stronger.
Offloading exists on a spectrum, and the same pattern emerges in education, where good use can leave deliberation even more fully intact. A student can have AI do her homework and then pass it off as her own. She speaks fluently about concepts she doesn’t understand; the appearance of intelligence replaces its reality. Like a muscle that atrophies without use, her reasoning capacity withers.
Or she can use AI to challenge herself. Study mode forces her to work through problems rather than receiving answers. The AI asks questions, corrects misconceptions, and refuses to simply hand over solutions. Each session builds her capacity. She’s still using AI. But she’s building human thinking on top of AI thinking.
Autopilot shows what happens when atrophy goes wrong. Modern pilots offload constantly, and for good reason: automated systems handle airspeed, altitude, and navigation almost all of the time. The pilot watches the machines work, which reduces fatigue, and flights have become far safer as a result. Still, there is a loss. Hand-flying an aircraft keeps pilots in the loop, letting them feel the aircraft’s responses and maintain a continuous mental model of where the plane is and what comes next. On autopilot, attention drifts and the mental model fades.
Air France 447 is illustrative. In 2009, over the mid-Atlantic, the aircraft’s pitot tubes iced over and the autopilot disconnected. Forced to hand-fly in unusual conditions, the pilots made simple mistakes: one pulled the nose up and held it there, and the aircraft stalled. All 228 people aboard died.
The pilots had thousands of hours of flying time between them, but their critical hand-flying skills had atrophied from disuse. Automation makes us safer while making rare emergencies devastating. The mathematician who offloads arithmetic still knows how to solve hard problems. The pilots who offloaded flying could not fly when it mattered.
What distinguishes good offloading from bad? The examples suggest an answer. Offload mechanical cognition: the tedious, repetitive operations that don’t require judgment. Preserve core deliberative capacities: the ability to evaluate, choose, and reason through hard cases. Make sure evaluating AI outputs requires active engagement. Expand higher-level intellectual activity. And preserve cognitive sovereignty: maintain the ability to contest what the AI tells you, to understand how it reached its conclusions, and to exit when you need to.
Permissible Atrophy
In my previous essay for Cosmos, I argued that intelligence environments require contestability, transparency, and exit rights. Permissible atrophy is the next question, and it is not just a personal one.
We are designing intelligence environments for one another. A teacher who assigns AI tools shapes what her students become. Employers who deploy AI assistants shape what their workers can do. These choices become the formal and informal rules that govern how we think.
We have freed people from drudgery before. Some wasted the freedom. Others turned toward higher things. AI can go either way. Build it wrong, and we may create sophisticated parrots: fluent, confident, and hollow.
Pepper became cheap because people wanted to put it on everything. Intelligence is becoming cheap for the same reason. How we embed it will determine who we become.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.



Very interesting argument. I'm grateful for the work of Cosmos Institute - thank you for elevating the kind of contemplation we need.
As a (lapsed) pilot and flight instructor, I find the autopilot example quite relevant. As the mom of a teen aspiring to become a professional pilot, it also makes me wonder: when will the human touch no longer be needed, even in emergencies?
As long as AI is fallible and emergencies can happen (the iced-over pitot tubes), I'd never want to be a passenger in a totally AI-flown plane. Nor do I want to be a passenger in a plane with autopilot-dumbed-down pilots--and I know from experience how easy it is to be complacent in the cockpit when everything goes well.
For me, when I was flying, the antidote was the pure personal joy that comes from melding my mind and body with this delicate machine. Hand-flying an instrument approach in actual conditions to the same standard as the autopilot is hard, and very satisfying in the moment (quite apart from knowing it helps me fly the plane safely if the autopilot ever gives out).
I wonder if the antidote to the atrophy of thinking skills in the age of AI needs to start at a similar point--the joy of truly understanding something, of truly being clear, of making decisions that are well-informed with your own mind. Unfortunately, this is something our young people just don't experience in today's education system--and without experiencing it, how will they ever value it the way a competent pilot values the challenge of perfection in hand-flying?
And then: at some point, will AI and automation be so good that a human's ability to fly in emergencies doesn't matter at all anymore? When will pilot jobs become obsolete? Probably sooner than we expect for remote cargo, where no human lives are at risk, and maybe not for a long time for passenger flight, because of our human-focused biases. (It's very different from driving: with a Waymo, if the machine breaks down, it just pulls over and you get out. With a plane, if the machine breaks down and there is no human back-up while you're in the air, you're dead.)