Great piece, Brendan, thanks!
thank you Zena!!
Brilliant essay, Brendan. Your argument about autonomy versus agency — and the constitutional drift from person to system — immediately brought to mind
Plato's Myth of Er in Republic Book X.
The soul that chose first had lived virtuously in an orderly society, but "by habit without philosophy" (619c-d). He'd never had to develop discernment because his environment had always done the discerning for him. So when confronted with the full buffet of lives, he grabbed the greatest tyranny without examining it — not even noticing it contained the fate of devouring his own children. He is your Choice Engine user after a decade of delegation: welfare maximized at every decision point, evaluative muscles never exercised, and when the moment finally demands independent judgment, he reaches for a faculty that isn't there.
Then there's Odysseus, who chose last. Forged by Circe's temptations, the Sirens, Calypso's offer of immortality — years of painful, consequence-bearing choosing. He picks the quiet life of a private man that everyone else had passed over. Low welfare by external metrics. But the most autonomous choice in the myth, because it came from a soul that actually knew itself.
Plato's point, which I think deepens yours considerably: the capacity for self-rule isn't an information state that AI can replicate or supplement. It's a developmental achievement that requires precisely the struggle optimization seeks to eliminate. The first soul's society had done for him exactly what Sunstein's liberal Choice Engine promises to do for us — and it produced a man perfectly free in every formal sense who chose slavery the moment the architecture was removed.
Plato saw this coming twenty-four hundred years ago.
— Chester H. Sunde, Psy.D., author of Platonomy: Ancient Wisdom for the Modern Self (Archway, 2025)
Love that: "Plato's point, which I think deepens yours considerably: the capacity for self-rule isn't an information state that AI can replicate or supplement. It's a developmental achievement that requires precisely the struggle optimization seeks to eliminate."
Loved this essay. I think it pairs well with Joan Didion's On Self-Respect. Also, Auden's The Unknown Citizen.
I have been using lots of AI tools lately at work, and realized this afternoon that I was (unknowingly) delegating a strategic decision to one of them, because there was friction around making a mistake, and talking with an LLM is a good way to postpone facing the uncertainty of the outcome. These tools are so powerful, and it's easy to be self-deceptive about our relationship with them. This piece is going to make me think for a while. Thanks!
Love it—thank you for both recs
One of the most nuanced takes on the dangers of outsourcing the ability to choose. Thank you Brendan for this! Helped me introspect some of my own views and choices.
Glad to be following Cosmos
A fine articulation of the emerging gospel of the Claude Boys.
“In Claude we trust”
You make the case for what I have done more efficiently: continue to ignore him (Sunstein). I credit the clarity of my thinking to never having read a word he has written.
Absolutely incredible essay that brilliantly articulates the true cost of cognitive offloading. Thank you!
@Brendan McCord, this is one of the best essays I've read on AI and autonomy. You name what most AI criticism misses: the damage isn't in the bad outputs. It's in the good ones. The ones that work so well we stop doing the work ourselves.
I'm a therapist, and I see this in the room. The capacity you describe, what Mill calls "discriminative feeling," is not an abstraction. It lives in the body. We build it by sitting with uncertainty long enough for judgment to form. By tolerating the discomfort of not knowing what we want. By staying in the friction rather than reaching for the clean answer.
AI offers relief at the exact moment development requires staying. The output improves. The capacity shrinks. You call this "constitutional drift." In clinical terms, it is the industrialization of avoidance.
I recently wrote about this from the somatic side in The Splitting Machine: AI and the Failure of Integration
https://yauguru.substack.com/p/the-splitting-machine-ai-and-the?r=217mr3
Fascinating. Love the piece!
Fascinating, but I am not convinced. Behavioural economics arose from the work of Thaler and Sunstein, which in turn stemmed from the work of Kahneman and Tversky.
We are not Homo economicus but irrational beings, and as Nudge showed, we can be helped to make better decisions without being turned into automatons.
Your case is well and cogently argued as I would expect; you just take it too far in my view.
Community Intelligence is a better term for AI. We are all biased by our communities. Mindfully choosing the settings of your AI is certainly as important as how you relate with your peopled community.
Great piece, and really thought-provoking! I had not realized the crucial agency/autonomy distinction before.
It seems to me that the silver lining here is that one can deliberately choose which parts of life to increase one's autonomy in, and delegate the other parts to an agent. Although this can be a slippery slope, and I won't be surprised if we end up with an "intellectual obesity" epidemic next.
Brilliant insight and depth of discernment. Many thanks for your timely piece and work in this field. Much appreciated.
Free will is what defines humanity, as well as our universe. There's an alternative to misusing AI and being ruled by technocracy; it's called direct digital democracy, where each person votes directly on each issue, possible for the first time in history! A complete opposite of what you described, and a completely different outcome.
Very interesting drop. What would Hayek make of his writings being used to argue for surrendering autonomy in exchange for better decisions? I think The Fatal Conceit would argue against such usage. He leans into the mystery of human behavior: people making decisions that do not benefit themselves but do benefit the generations that succeed them, linking it to monotheism. While he shrouds what he is referring to, I think he would approve of Richard Dawkins's sentiment about being a "Cultural Christian." Looking at the Lord's Prayer through a Girardian lens, making mistakes is what makes us human. Some mistakes turn into cultural gains and others into tragedies, but they all make us human. Without our frailties, we will be the egg that becomes rotten because it never hatches.
This is what it feels like when your salience is steered: account-specific salience algorithms.