Authors vs. Characters: The New Class Divide
Will AI sort humanity into two kinds of people?
Two children are looking at screens.
One has an infinite iPad: videos, feeds, colors, and recommendations carefully designed to ask nothing of her other than her attention. The other has an AI tutor: patient, demanding, adaptive, and often hard work. It asks her what she thinks and why one answer is better than another.
It’s the same rectangle and the same general class of technology, but it is doing opposite things to each child. That is the divide I care about: how AI, deployed two ways, can form two different people.
We adults are being sorted as well. The radiologist who reads each scan themselves before checking the model versus the one who just defers to the model. The citizen who asks AI to steelman the candidate they dislike and argues with what they find versus the one who skims the AI summary, nods, and votes.
From the outside, we’ll hardly notice the difference between these classes of people. The person outsourcing judgment may even look better. Faster, more fluent, more productive. More agentic, if you like the word. But from the inside, something is being hollowed out, and they are the last to know.
I call these two types of people authors and characters. Some people will think with these systems. Through others, the systems will think.
I’ve been trying to work out where exactly the line falls and what might push people across it. Here are three places I keep getting stuck.
1. Does AI extend thought, or preempt it?
The mathematician Alfred North Whitehead wrote that civilization advances “by extending the number of important operations which we can perform without thinking about them.”
Writing extended memory, numerals extended computation, and maps extended navigation. In each of these cases and many more, humans externalize an operation and free up capacity for whatever sits above it. The externalized capacity tends to atrophy, but historically, that trade has been worth it.
The natural question is whether AI continues this pattern or hits a stopping point. Is deliberation the last layer, or is there something above it?
And is there a particular danger in offloading the capacity that decides what else can be offloaded? The driver who uses cruise control on the highway but not in city traffic is using judgment to decide where judgment can be delegated. Many delegations work this way: a choice that does the work of later choices. But when the offloaded capacity is also the one that governs offloading, the loop closes, and no judgment is left to evaluate the system from outside it.
I don’t know whether deliberation is one clean faculty perched atop the rest, or even whether it’s one thing at all. It seems to include attention, imagination, comparison, inhibition, and the ability to give and receive reasons. And prior tools have shaped all of them: writing changed memory and reflection, just as scientific instruments changed what counted as evidence. So if I say AI is different because it offloads deliberation, a critic can ask whether I have just defined deliberation as the sacred remainder: whatever earlier tools had not yet touched.
What if the better question is not what AI offloads but where it sits? AI becomes dangerous when it occupies the first-mover position in thought: proposing the questions, framing the options, and drafting our answers, leaving the person to merely react.
By first mover I don’t mean first in time. Human thought is messier than that. I mean first in the order of practical dependence: the system supplies the structure, and the person’s reasoning unfolds within it. The person is still reasoning, still pushing the button, but doing so from within a structure the system supplied.
This is hard to see. To you, the system’s outputs do not feel like advice arriving from somewhere else. Instead, they feel like your own next thought, only earlier and more clearly articulated. You inherit a position without knowing you have inherited it. There is nothing to push back against, because nothing seems to have been pushed. AI now sits in the position of an inner voice.
AI does not have to be deployed as a first mover, but often it is. This is what AI becomes when it ships as the default consumer assistant: friction-minimized, personalized, always on, and eager to assist. Different design choices would produce something different.
2. When does help become tutelage?
Parents choose before children can choose, teachers frame a subject before students can judge the frame, and traditions hand us our values before we can inspect them. They all go first. This is the authority version of the first-mover problem.
We do not call this domination. We call it being raised, being educated, or being cared for. Some forms of going first are how self-rule gets built in the first place.
When AI is the first mover, does it build self-rule or wear it down?
The strongest version of the objection is that authority is sometimes legitimate because it helps me act on reasons that already apply to me, better than I could on my own. A doctor catches symptoms I would miss; a lawyer spots loopholes in a contract I would have thought fine. If that is right, the fact that AI goes first is not enough to make it illegitimate.
I need the pilot to land the plane, not to educate me in aviation. That kind of authority — episodic and outcome-directed — is fine in many situations. Formative authority over the kinds of people I am able to become is different. When an authority repeatedly mediates the reasons by which I govern myself (what counts as a good question, what answers are reasonable, and what risks are worth taking), legitimacy cannot be exhausted by immediate correctness. It has to preserve my future capacity to reason for myself.
Perhaps the test is whether the authority remains answerable. By answerable, I mean it can be questioned, corrected, outgrown, reinterpreted, or, at the limit, left. A parent remains answerable to her child, who eventually becomes a peer. A doctor remains answerable to her patient. A living tradition remains answerable through interpretation, reform, internal argument, and ultimately exit.
The default consumer-assistant version of AI as first mover is not answerable. It can be queried, but querying is not the same as answerability. It does not mature into a peer, accept correction as a participant in a shared practice, or belong to a living tradition that can be reinterpreted from within.
A person may endorse dependence at every step, with no coercion required. But because every endorsement is shaped, the kind of endorsement matters. The shaping is legitimate when it strengthens future capacity to revise. It is degrading when it secures the person’s present endorsement by weakening that capacity.
Capacity has another dimension worth naming: standing. This is the position in a practice that lets you refuse, challenge, teach, repair, or help set the standards. Some capacities matter not only because AI might fail or become unavailable, but because they confer this kind of standing. The radiologist who can still read scans stands differently in relation to the institution than the one who cannot. The same is true of citizens whose political judgment is mediated by systems they cannot understand, contest, or refuse — their legal right to participate persists while their capacity to use it decays.
Most theories of authority ask whether a relationship is legitimate at a given moment. But AI is a trajectory problem: at each step, the help offered looks reasonable to the user, with the harm appearing only over time. An early snapshot looks indistinguishable from a late one. The difference is the trajectory, and trajectories are hard to see from inside the house that they have built.
Alexis de Tocqueville introduced the idea of the tutelary power: an authority that does not tyrannize but “compresses, enervates, extinguishes, and stupefies.” It’s the dystopian version of being all watched over by machines of loving grace. In the Tocquevillian singularity, better and more personalized AI systems make us more dependent on better and more personalized AI systems. The loop tightens until there’s no standpoint left to push back from.
3. Is convenience destiny?
Most arguments about AI assume convenience is destiny, but the historical record doesn’t bear this out. After all, why do marathons exist in a world with cars?
To be clear, most difficulty should be minimized: bureaucratic friction, status games, needless scarcity, and administrative maze-work. Much labor, be it intellectual or physical, does not make us better or stronger or wiser. But some difficulty is formative.
How do we keep this kind alive when easier options are on the table? What cultural counterforces keep the path of least resistance from becoming the only option available to us? Some have worked, like anti-smoking attitudes that won after sixty years of institutional work. Slow Food didn’t defeat fast food, but it made slow eating a mark of taste rather than backwardness, proof that a convenience culture can produce a counterculture with teeth. The school-phone movement is doing this for adolescents through voluntary association at the school level, rather than asking each parent to resist alone.
Television, sugar, fast food, the smartphone — most other conveniences reorganized daily life without serious opposition. Most counterforces fail.
Successful cases seem to share a few traits: visible victims, concrete alternatives, status reversal, thick institutions, and early timing before convenience hardens into infrastructure. Status reversal matters because it makes the harder thing look prestigious instead of backward. Thick institutions like schools, religious communities, and professions matter because they can change the default environment for everyone inside at once.
AI counterforces fit all of these traits badly. The victims are diffuse, losing capacities too slowly to notice. The alternative practices are not yet legible to most users, and status considerations still favor convenience and speed over slow formation. Many institutions are either abdicating their role or hardening in ways that push beneficial AI use underground. The infrastructure is hardening fast.
If convenience is destiny, the diagnosis is the whole story and the prescriptions are decoration. If it is not, the central question becomes which cultures preserve formative practices and how.
The line AI is drawing will not run only between groups. It will run through each of us, through every place where judgment is practiced, delegated, strengthened, or surrendered.
I think that line is real, but I do not yet know how to draw it cleanly. Help me find it.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund AI prototypes, and host seminars with institutions like Oxford, the Aspen Institute, and the Liberty Fund.