Is algorithmic mediation always bad for autonomy?
We make technology, and technology makes us back
Life is a navigation problem. There are too many places to go and too many paths to take, too many things to try and too many skills to learn. For many of us, choice can be paralyzing. The ancients meditated on what sort of life we ought to live, but today we wonder how best to spend our time when faced with an endless supply of ideas and experiences. One way we solve the problem is by asking for help. We speak to our friends about the music they like and browse lists on the internet about the best new books.
We get recommendations that help us determine how best to spend our time. But recommendations are no longer the preserve of humans; in fact, the vast majority of them – at least as it relates to media or travel – are already provided by machines. What movie to watch or what route to take are questions answered by AI systems designed to keep you coming back for more. These systems remind us that technology pervades our lives, that we make decisions under conditions that we don’t always choose.
On one level, this process is essential for the work of civilization. Alfred North Whitehead famously argued that society moves forward by “extending the number of important operations which we can perform without thinking about them.” When I make coffee in the morning I don’t think about grinding beans with a mortar and pestle; I just press a button and wait for the machine to do its work.
We can think about technology as the mechanism by which humans externalize their capacities. In the 1967 book The Myth of the Machine, the American historian Lewis Mumford argues that technology can “selectively organize and consciously direct both the internal and external agents of the mind.” We might say that technology is both instrumental (tools we use to achieve ends) and prosthetic (things we use to manifest our abilities in the world). But it also shapes the character of cognition, transforming our capacities as they relate to the technological environment that we inhabit.
Writing shifted the burden of recall from the rhythms of oral tradition to marks in clay or ink. The abacus relieved the mind from the strain of calculation, altering sequences of thought into patterns of beads that can be physically manipulated. Later, the compass and the map stretched human orientation across oceans to allow travellers to project themselves into spaces they could not directly perceive. Technology extends human power into matter so that action and thought can be carried further than the body or mind would allow, while unlocking new forms of cognitive activity that were previously out of reach.
For Plato, technology can be read as a form of craft knowledge, a discipline that shaped both product and practitioner. The cobbler’s téchnē produced shoes, but it also cultivated habits of judgment about fit, durability, and beauty; the navigator’s téchnē guided ships, but it also demanded an attunement to winds, stars, and currents. Here technology is something like a training ground, a set of practices that form the character of those who wield them. Technology externalizes human capacity, but it also bends those faculties back towards us by fostering new dispositions and habits.
Put differently: we make technology and technology makes us back.
Making and breaking autonomy
Autonomy can be understood as the cultivated capacity to deliberate well about how to live, to revise one’s understanding through experience, and to act on one's own judgment within a community that recognizes this same capacity in others. Humans cultivate autonomy through practice and defend it against erosion, which requires the freedom to make choices and the discipline to reflect on their consequences.
Technologies free attention for higher tasks while modifying the limits within which choices are made. Autonomy is bolstered when external aids strengthen the conditions for self-rule, but it begins to wither when those same aids diminish the faculties they are meant to support. Every new tool is in some sense both liberating and constraining, expanding our reach while altering the kind of selves we are able to become. They benefit us when they maximise our capacity for deliberation, understanding, and judgement, and they hamper us when they stymie these same faculties.
We might say that technology mediates our interactions according to a gradient of autonomy-preserving (or autonomy-degrading) forms:
Technology that augments us. Technologies that augment us operate by extending human faculties while leaving intact the basic work of judgment. They amplify what we can already do, stretching the reach of our bodies and minds beyond their natural limits.
Technology that configures us. Technologies that configure us arrange our field of action. They shape the choices available, the order in which they appear, and the ease with which one path can be taken over another.
Technology that simulates us. Technologies that simulate us begin to imitate the processes of judgment we once reserved for ourselves. They generate candidate answers, options, or arguments that resemble the products of human deliberation.
Technology that replaces us. Technologies that replace us close the distance between simulation and action. They no longer propose options for us to consider but execute decisions on our behalf, carrying out tasks in a way that sidelines human deliberation.
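The gradient above can be sketched as a simple data structure, purely for illustration. The example assignments below are my own, keyed to each technology’s dominant pattern only; as noted, real technologies mix several patterns at once:

```python
from enum import Enum

class Mediation(Enum):
    """The four dominant patterns of algorithmic mediation."""
    AUGMENTS = 1     # extends faculties; judgment stays with the human
    CONFIGURES = 2   # arranges the field of action and the order of choices
    SIMULATES = 3    # generates candidate judgments for the human to select
    REPLACES = 4     # executes decisions directly, sidelining deliberation

# Illustrative examples, classified by their dominant pattern.
examples = {
    "telescope": Mediation.AUGMENTS,
    "autoplay queue": Mediation.CONFIGURES,
    "recommender system": Mediation.SIMULATES,
    "autonomous moderation agent": Mediation.REPLACES,
}

for tech, pattern in examples.items():
    print(f"{tech}: {pattern.name.lower()}")
```

The ordering of the enum values reflects the gradient: the higher the value, the more of the work of judgment the technology takes over.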
These categories describe dominant patterns, not exclusive kinds. All technologies augment and configure us: the wheel augments locomotion while influencing settlement patterns and trade routes; the clock augments coordination while regimenting daily life. Some, like the abacus, may augment our mathematical proficiency, configure economic exchange, and simulate working memory. A human may still have to ratify the result, but even these technologies can dampen autonomy if the operator chooses to continually outsource cognitive labor.
Replacement is a different beast. Early flight instruments augmented the pilot’s senses and configured cockpit routines; later systems simulated decision rules by offering candidate corrections for altitude or heading. But once autopilot was installed, those corrections were carried out automatically, with the pilot relegated to a supervisory role. Recommendation engines may simulate taste, but autonomous content moderation agents go further by making and executing decisions at speeds and scales that escape direct oversight.
But pilots still know how to fly, and human operators still set the rules for content moderation models and review decisions on request. In this context, the faculty isn’t fully degraded so much as reorganized into new layers: writing rules, setting boundaries, reviewing edge cases (and we realise the benefit in safer air travel and content moderation at scale). Autonomy contracts at the moment of execution but reasserts itself in design and oversight. It persists in a new and different form.
But is this enough for true autonomy?
Returning to our definition, we could say that the problem with replacement is that it interrupts the conditions under which autonomy is cultivated. To deliberate well, one has to practice deliberation; to revise one’s understanding, one has to encounter feedback from experience; to act on one’s judgment, one has to be allowed to make judgments in the first place. Replacement is troubling because it risks producing agents who retain the status of autonomous individuals but lack the habits that autonomy requires.
This is where technologies that replace autonomy depart from those that simulate it. Unlike replacement, the act of simulation does not displace judgment but reframes what it means to judge. When options arrive already filtered, scored, or composed, our role becomes one of selection rather than origination. Simulation still lets us act on our own judgment — the choice is ours — but it weakens the practice of deliberation. We don’t wrestle with problems from first principles. Instead, we are handed plausible answers that shape the frame of thought before we begin. Feedback from experience is still present, but it is feedback on our selection among simulations rather than on the process of generating judgments ourselves.
Systems thinking and thinking systems
One way to think about whether a technology is autonomy-preserving is to compare it to other structures that mediate human activity. After all, every one of us lives inside systems. Law channels disputes into courts, education organises people through schools and curricula, and markets coordinate activity through prices. Each system creates an environment where some decisions are easier, others harder, and some ruled out altogether.
Max Weber’s account of bureaucracy articulates how systems of rules and procedures both enable and constrain human action. A bureaucracy strips away arbitrariness by offering predictability and fairness, but it also narrows the range of acceptable behavior in a manner that makes it harder to act outside its prescribed channels. It draws into focus the double edge of systematization: autonomy is enabled in the sense that individuals can plan and act with security, but diminished in that those actions must follow the logic of the organization.
If we apply this idea to our present moment, we might say that AI systems that augment and configure judgement are better placed to preserve human autonomy than those that simulate or (in the most troubling instance) replace it. That said, configuration itself is a double-edged sword: in some cases limits on our choices are needed to avoid choice paralysis and prevent the most harmful outcomes, but some directed efforts at structuring from a third party (e.g., autoplay or default permissive privacy settings) risk infringing on autonomy.
But which is AI? Artificial intelligence is a technology that takes the shape of its container, a kind of thing that exists in a bewildering variety of contexts and configurations. The most commonly used is the recommender system, which underwrites the five hours a day that people spend on social media, streaming video, and music platforms globally. Even assuming that only about two-thirds of this time is genuinely steered by recommendation engines, that tells us that roughly one-fifth of our waking life is already mediated by algorithms. And this excludes other domains where AI configures our behavior, from navigation systems and online shopping to the automated curation of news feeds and adverts.
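The back-of-the-envelope figure above can be checked directly. Assuming roughly five hours of daily platform use, two-thirds of it genuinely steered by recommenders, and about sixteen waking hours in a day (all of these are the essay’s stated or assumed inputs, not measurements), the mediated share of waking life does come out near one-fifth:

```python
platform_hours = 5.0          # daily time on social media, streaming, and music
steered_fraction = 2 / 3      # share of that time driven by recommendation engines
waking_hours = 16.0           # rough length of a waking day

mediated_share = (platform_hours * steered_fraction) / waking_hours
print(f"{mediated_share:.0%} of waking life")  # prints "21% of waking life"
```

At roughly 21 percent, “one-fifth of our waking life” is a fair rounding.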
Recommender systems simulate us by modelling our tastes and predicting what we might want before we know it ourselves. They generate candidate options — be they songs, films or products — that resemble the kinds of choices we would make had we the time or patience to think for ourselves. At the same time, these systems configure us by shaping the environment in which decisions are taken. The catalogue of media is vast, so we are necessarily provided with a handful of options that obey the system’s logic of retention. Autonomy is shaped first by the simulations of what we might like, and second by the architectures that govern how and when those simulations appear.
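The two-step dynamic described here, simulation of taste followed by configuration of its presentation, can be illustrated with a toy sketch. The item names and scores below are invented stand-ins for what trained models would output, but the structure is the point: first predict what the user would choose, then re-rank by what keeps them engaged:

```python
# Toy catalogue: (title, predicted_preference, predicted_retention).
catalogue = [
    ("documentary", 0.9, 0.3),
    ("thriller series", 0.7, 0.9),
    ("film classic", 0.8, 0.4),
    ("reality show", 0.5, 0.8),
]

# Step 1 (simulation): shortlist items resembling what the user would pick.
shortlist = [item for item in catalogue if item[1] >= 0.6]

# Step 2 (configuration): order the feed by the platform's retention logic,
# not by the user's predicted preference.
feed = sorted(shortlist, key=lambda item: item[2], reverse=True)

print([title for title, _, _ in feed])
```

Note the inversion: the user’s top predicted preference (the documentary) appears last, because the ordering follows retention rather than taste. That is configuration operating on top of simulation.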
Large language models are more ambiguous than recommender systems because their primary effect seems to be to augment rather than constrain. They can expand the reach of a researcher by drafting summaries, or stretch the range of a programmer by generating code snippets. In these moments the technology amplifies human faculties, taking what we can already do and carrying it further without fixing the outcome in advance. Yet the same systems also simulate us, producing judgments, arguments, and suggestions that imitate our own reasoning. We might even say these outputs shade into replacement, depending on how closely users scrutinize the final output.
These types of systems can in principle recommend anything you can think of. Songs, products, and films are all in play, but so too are recommendations about how to think, how to respond, and how to live. The saving grace is that users tend to ask language models for these recommendations directly, rather than receiving suggestions automatically, as with other recommender systems, without ever consciously deliberating about the kind of recommendation they want.
But as models grow more capable, or become agents that can act directly in the world, the systems become better placed to challenge human autonomy. An AI model that not only drafts a contract but sends it, or not only writes code but deploys it, has little need for human deliberation (except insofar as it manifests in the oversight of these systems). Today, large models are augmentative in theory but often simulative in practice. Tomorrow, they may threaten the conditions under which autonomy is cultivated by making decisions with limited human input.
Algorithmic futures
Freedom is finite because we inhabit systems that guide our lives. That is as true for us as it was for the ancients. Aristotle understood freedom as the capacity to live well through the exercise of phronēsis, the practical wisdom cultivated by experience. To deliberate wisely, one must encounter situations, weigh alternatives, and learn from the consequences of one’s choices. Autonomy, in this sense, grows as habits of reflection are strengthened and withers when they are not. Technologies that augment or configure us can support this process by extending our capacities or freeing us from thinking about the minutiae of everyday life. But when technologies begin to simulate or replace us, they risk undermining the very practices through which phronēsis is formed.
Even Greek philosophers were mediated by technology: Aristotle’s Peripatetic school was dependent on the scrolls, maps, and instruments that configured inquiry, giving shape to what could be known and transmitted. Philosophies of reason and the good life existed within the architectures of inscription and classification that made this mode of thought possible. Try as we might, there is no escape from the systems that shape how we live in the world.
But AI represents a different challenge, with each move towards models that do the thinking for us threatening to chip away at the foundations of autonomy. The path ahead involves recognizing its potential for structuring, simulating, and replacing our capacities, while reflecting on the types of things we want to build. This is not a question of whether mediation itself is bad; we have, after all, always lived within structures that shape our choices. Rather, the risk is that AI extends the powers of mediation into the realm of deliberation itself.
Deliberation is the slow, difficult work of becoming the kind of person who knows how they should live, a job that only one person is qualified for: you.
Cosmos Institute is the Academy for Philosopher-Builders, with programs, grants, events, and fellowships for those building AI for human flourishing.