Very interesting argument. I'm grateful for the work of Cosmos Institute - thank you for elevating the kind of contemplation we need
As a (lapsed) pilot and flight instructor, I find the autopilot example quite relevant. As the mom of a teen who aspires to become a professional pilot, it also makes me wonder: when is the human touch, even in emergencies, no longer needed?
As long as AI is fallible and emergencies can happen (the iced-over pitot tube), I'd never want to be a passenger in a totally AI-flown plane. Nor do I want to be a passenger in a plane flown by autopilot-dumbed-down pilots--and I know from experience how easy it is to become complacent in the cockpit when everything goes well.
For me, when I was flying, the antidote was the pure personal joy that comes from melding my mind and body with this delicate machine. Hand-flying an instrument approach in actual conditions, to the same standards the autopilot achieves, is hard, and very satisfying in the moment (independent of the fact that I know it helps me fly the plane safely if the autopilot ever gives out).
I wonder if the antidote to the atrophy of thinking skills in the age of AI needs to start at a similar point--the joy of truly understanding something, of truly being clear, of making well-informed decisions with your own mind. Unfortunately, this is something our young people just don't experience in today's education system--and without experiencing it, how will they ever value it the way a competent pilot values the challenge of perfection in hand-flying?
And then: at some point, will the AI and automation be so good that a human's ability to fly in emergencies doesn't matter at all anymore? When will pilot jobs become obsolete? Probably sooner than we expect for remote cargo, where no human lives are at risk, and maybe not for a long time for passenger flight, because of our human-focused biases. (It's very different from driving: with a Waymo, if the machine breaks down, it just pulls over and you get out. With a plane, if the machine breaks down and there is no human backup, you're in the air, and you're dead.)
Nice essay, particularly the invocation of Aristotle. It's especially fitting given the history of Greek philosophy's transition from oral to written memory.
What is most important here is delineating what the uniquely human faculties are. In your medical example, the more conscientious actor likely (though not explicitly) felt some sort of moral agency and accountability over the eventual result. If we forecast the increase in intelligence tasks AI is likely to be able to perform, more and more of that delineation will come down to moral decision-making.
What that means precisely and how cleanly those decisions are “human-made” is a worthwhile area of exploration (https://open.substack.com/pub/theslowpanic/p/other-chinese-rooms).
@Cosmos Institute and Kevin Vallier, the Mill/Aristotle frame is the right starting point. But the "offload the mechanical, preserve the deliberative" distinction is too clean.
Deliberation isn't just cognitive. It's somatic. The capacity to evaluate, to tolerate uncertainty, to stay present with difficulty long enough to actually think requires a nervous system settled enough to do the slow, effortful work. That capacity isn't preserved automatically when you offload the tedium. It's built through duration.
The Air France 447 example makes this point better than the essay acknowledges. Those pilots didn't just lose a skill. They lost the embodied capacity to respond under pressure. That's not cognitive atrophy. It's somatic atrophy.
The deeper problem: "cognitive sovereignty" assumes we can contest AI outputs. But contesting requires staying with information long enough to evaluate it. That capacity is being eroded not by offloading itself, but by the architecture of constant interruption that surrounds it. Platforms profit from keeping loops open. Completion ends the session.
You can't choose contemplation from a body that can't settle. The question isn't just what to offload. It's whether the systems we're building leave us with the capacity to deliberate at all.
I explored these themes in "You Are Not Distracted. You Are Unfinished."
https://yauguru.substack.com/p/you-are-not-distracted-you-are-unfinished
The pepper metaphor is sharp. When something becomes everywhere, it stops being special. Where this piece is strongest is its insistence on cognitive sovereignty. The ability to say: I understand enough to disagree. I can exit. I can choose differently. Without that, intelligence becomes ventriloquism.
AI can be useful, protective and transparent; it can enlighten as opposed to dumbing down, enrich as opposed to taking… We re-filter the news, giving it the temporal awareness it should have, from the BBC and the like… no news story is independent of context… https://www.cogniosynthesisportal.uk/
This essay resonates deeply with work I've been doing on how AI might genuinely help humans become wiser. What you're describing here—the distinction between offloading that preserves deliberative capacity and offloading that outsources judgement itself—maps precisely onto what the British systems thinker Geoffrey Vickers called an "appreciative system."
Vickers saw that human institutions don't primarily pursue goals; they maintain relationships. An appreciative system has three components: extracting impressions from what's happening, processing those impressions through accumulated judgement, and returning the processed understanding to action. Your "second patient" is building exactly this—using AI to extract information, processing it through her own reading and reflection, and bringing that processed understanding to the doctor's consultation.
Your "first patient" hasn't built an appreciative system at all. She's outsourced extraction, processing, and decision, with no human appreciation in the loop.
This framing sharpens the argument. The question isn't primarily which cognitive functions to offload. It's whether you're building an appreciative system or merely extracting outputs. The appreciative system requires all three components—and when any is missing or entirely outsourced, you don't have appreciation. You have information without wisdom.
Recent research by Solé, Levin and colleagues adds a sobering dimension: they identify "humanbots"—dysregulated human–AI hybrids where feedback loops amplify errors rather than correcting them. Your Air France 447 example illustrates this precisely. The pilots had become humanbots with respect to hand-flying; their regulatory feedback had atrophied through disuse.
Thank you for this—it's rare to find someone thinking carefully about the structure of wise AI use rather than just listing dos and don'ts.
Good article - parallel thinking expanded
Key phrases
“Contemplation is not passive. It is the most intense activity the mind can perform. It means thinking about the highest things: the structure of reality, the nature of the good, and the order of the cosmos.”
The fascinating goal and activity (KDOT #1, #5, #6 - Cognitive Thermodynamics, Cognitive Mappings, Thought wheel ideas)
“Automation makes us safer while making rare emergencies devastating.”
The critical caution. Also consider N.N. Taleb's "Black Swan" and "Antifragile" ideas. The first rule of tech is that it works until it doesn't. Everything is a tradeoff.
“Offload mechanical cognition: the tedious, repetitive operations that don’t require judgment. Preserve core deliberative capacities: the ability to evaluate, choose, and reason through hard cases. Make sure evaluating AI outputs requires active engagement. Expand higher-level intellectual activity. And preserve cognitive sovereignty: maintain the ability to contest what the AI tells you, to understand how it reached its conclusions, and to exit when you need to.”
“We are designing intelligence environments for one another.”
Describes the goal of good thought spaces in different words. Parallels KDOT #7, #9 - Thought Space Expectations
We must foster environments where children learn that their "voice" is not just their ability to produce fluent output, but their ability to inhabit a conviction. Education should be the process of finding one's harmonic resonance... and that is alignment between character and capability.
This text cuts cleanly because it names the real danger without theatrics... not that tools make us stupid, but that misplaced offloading hollows out the very muscles we will one day need to save ourselves.
The Air France example is brutal for a reason. It shows that competence isn’t binary, present or absent; it atrophies silently, politely, while everything seems fine. Automation didn’t kill those pilots. Disuse did. The tragedy is not error, but unrehearsed judgment under pressure.
“Permissible atrophy” is therefore the right phrase, and the uncomfortable one. Every system we design quietly decides what kind of humans it expects us to remain. Offload the mechanical, yes, but if we offload evaluation, contestation, and responsibility, we are not assisted... we are rehearsed into obedience. Fluency without authorship. Confidence without grounding. Parrots, as you say, dressed as thinkers.
The sharpest line is almost throwaway: these choices become the formal and informal rules that govern how we think. That is the real battlefield. Not AI versus humans, but which human capacities we treat as disposable...
Pepper becoming cheap was not the problem. Pepper losing its bite sure is. Intelligence embedded everywhere but owned nowhere produces comfort, not clarity. And comfort has never been the same thing as safety.
How we embed intelligence will determine who we become... or, more precisely, what we still know how to do when the autopilot drops out and the sky goes quiet.
I liked the point about tolerating some atrophy, but being deliberate about why.
But I found the emphasis on contemplative paths as THE higher order pleasure a bit… paternalistic?
> We have freed people from drudgery before. Some wasted the freedom. Others turned toward higher things.
I just don't think everyone wants to do knowledge work, even if they could. My hope is that rather than AI enabling people to do a specific activity that is pre-specified as 'good', AI helps people uncover their true values. That is, AI helps people with 'valueception'. And I'm less confident than you that everyone inherently values deep thinking, but it's an open question.
This is a good essay about how AI could make individuals stupider through the atrophy of their intellectual capacities. It also strikes a hopeful note about how AI could make every individual happier and more productive, and how the cumulative effects of these, let's say, intellectually better-off people will enhance society.

One of my thoughts is that it takes a certain kind of person to want to do the extra work. And most people are not that kind of person. The activity of the mind described in this article, the kind that arises while reading or thinking, is something I enjoy very much, and it's why I read so much and almost never watch anything in the video category. It's not that I don't like those, or that I think there isn't a lot of high-value, high-quality content out there. There is. But I'd rather be reading, and as long as I can read, I'd rather do that, saving the other things for some time when perhaps I won't be able to read.

But we all know for a fact that most people aren't like that. Most people, even intellectuals or academics or people in my book club, default to sitting down in front of a screen: sports, which is pretty much mindless unless you're way into the strategy, or movies; I'm constantly getting recommendations for those from these people. And people who don't have an educational interest, or didn't go to college, or went to college and then never read a book, are never interested in the hard intellectual task. Some of those people are craftspeople, and make things with their hands or play musical instruments, but again, I think those classes of people are a dying breed.

Why is there such a problem in high school and college with students using AI to write their papers? It's because they don't want to do the intellectual work. They'd rather be having fun, watching TV, being with their friends. So I think this article, although theoretically hopeful, is pretty much bullshit: most people are going to get dumber, and a few people are going to have more fun climbing to higher heights. I hope you will think about this when you next decide whether to buy a book and read it to your grandchildren, or instead to just sit down and watch the Patriots in the Super Bowl. OK, the Patriots in the Super Bowl, that's justified for anybody; I might even do it, although I bet I won't. Now, if it were the Steelers, I'd for sure watch it.
What a great perspective, and I love the historical references. The autopilot example is especially apt. I thought about this topic as well, from the perspective of those who offload cognition to the benefit of themselves and society versus those who offload cognition to their own detriment (as well as society's). This creates "winners," "losers," and an intelligence wealth gap. We've seen this with social media (successful creators are winners, but algorithm-addicted consumers are losers). https://davidarmano.substack.com/p/intelligent-wealth-vs-cognitive-debt
I feel like we are starting to calm down from the fever.
https://paulruth.substack.com/p/lets-stop-pretending-that-ai-is-new?r=atg1
I offered a different but equally clear view of how AI can be useful. We should not be scared, because people make the choices.
https://paulruth.substack.com/p/lets-stop-pretending-that-ai-is-new?r=atg1
I was on board until this point:
“Consider someone who has experienced the satisfaction of a nice-tasting meal and the satisfaction of solving a difficult problem. If she is honest, she will rank the second higher. These provide a deeper satisfaction.
This is Mill’s ‘competent judges’ test. Ask anyone who has genuinely experienced both kinds of pleasure which they would give up. They will sacrifice bodily pleasures first. The fool may be content. But Socrates, even when dissatisfied, has something the fool will never have.”
If you swap out “a nice-tasting meal” for “best sex of your life” and “difficult problem” with “daily Sudoku on challenging mode”, the honest person would rank the former higher, and it provides a deeper (pun intended 😅) satisfaction.
In general, if the argument for the conclusion that cognitive offloading to AI can be okay relies on the premise that the so-called higher pleasures are always preferable to and better than the lower ones, then I would prefer to look for a different argument. That premise is just so deeply implausible, and probably rooted in views about the relationship between humans and (other) animals that are so alien to me that they seem ridiculous.