The nineteenth century was a difficult time to be a woman. Education was limited, the professions largely closed, and political rights nonexistent. Under England’s common law doctrine of coverture, a woman’s property and earnings belonged to her husband. Single women could own property and make contracts, but once a woman married she forfeited those rights. In legal terms she was “covered” by her spouse.
It was an unjust settlement, one made particularly galling by the social climate of the day. Liberal constitutions spread across Europe, and slavery was abolished across the British Empire in 1833. Men marched under banners of progress, but women were told to stay obedient; factories and railways promised speed and abundance, yet half the adult population was bound by laws that treated them like children.
John Stuart Mill was one of those who pried open the contradiction. In close partnership with Harriet Taylor, Mill pushed for women’s right to property, independent earnings, and access to education. In their 1832 essay On Marriage the pair wrote that surrendering a woman’s legal identity and property amounted to a form of “domestic slavery,” while in the 1851 essay The Enfranchisement of Women they explicitly called for votes for women.
Between 1865 and 1868 Mill used his position in Parliament to introduce amendments to improve the political lot of women, most famously in 1867 when he proposed that “man” be replaced with “person” in the Reform Bill. He failed, but the historic attempt is generally recognized as the first time women’s suffrage was formally debated in the House of Commons. Yet it wasn’t a total loss. The debate forced opponents to publicly defend their positions, which exposed their reasoning to scrutiny and put pressure on arguments about “natural” incapacity.
Two years later, he argued in The Subjection of Women that “The legal subordination of one sex to the other is wrong in itself, and now one of the chief hindrances to human improvement; and that it ought to be replaced by a principle of perfect equality, admitting no power or privilege on the one side, nor disability on the other.” The book sold well and quickly reached readers across Europe and America, but critics accused Mill of being under Taylor's spell and of pushing abstract philosophy unsuited to practical life. It took another fifty years for women's suffrage to be codified in law.
Contest and contrast
The suffrage debate reminds us that humans are fallible creatures. Without an ongoing process of contestation, errors may stay in place indefinitely. It is fortunate that minority opinions, even unpopular ones like Mill’s first-wave feminism, can eventually overturn dominant views. But growth requires conditions where judgments can be tested. If some people are denied basic liberties, then assertions about their nature rest on tainted evidence, just as every view shielded from criticism sits on shaky foundations.
Today, we too often forget that truth flows from challenge. Life is certainly much simpler when we agree with each other, but it’s also more prone to illusory thinking. Proponents of measures to bolster online safety argue that Mill’s ideas are out of date for a world with five billion internet users and AI-powered misinformation campaigns, that the harm of falsehoods now outweighs the benefit of contest. They doubt whether the deliberative reasoning Mill’s truth-seeking requires is even possible given our information-dense online lives.
But this gets Mill backwards. The denser the information environment, the more essential the clash of views becomes. If judgment is scarce, all the more reason to cultivate it. A contestation process, properly designed, actually conserves rather than depletes cognitive resources by helping people develop better judgment over time. Insulating ourselves from our peers denies us the challenge needed to change our beliefs for the better.
An AI system is not “truth-seeking” because it serves us an answer, whether the response is correct or not. The search for truth is personal and provisional, a negotiation each of us must perform for ourselves. But it’s also a fundamentally social process, one that requires institutional arrangements that facilitate productive disagreement. As Mill’s own writing on the position of women shows, humans make mistakes and need to course-correct. To do that we must hear views that don’t mesh with our own, reckon with our reasoning, and become wiser for it.
Few would argue against the importance of “truth-seeking” for human flourishing, but what exactly does it mean to seek truth? Is it the process of aligning thought with reality? Or is it creating and testing hypotheses, a process that sees conjectures survive until better alternatives are found? Maybe truth-seeking is recognizing what works in practice over the long run or building internally consistent webs of beliefs that grow stronger through mutual support. Mill might say that truth-seeking is the creation of conditions under which our errors can be exposed and corrected. It is less a destination than a discipline, a willingness to test our convictions against evidence, against argument, and against lives different from our own.
As he famously put it: “it is owing to a quality of the human mind, the source of everything respectable in man either as an intellectual or as a moral being, namely, that his errors are corrigible. He is capable of rectifying his mistakes, by discussion and experience. Not by experience alone. There must be discussion, to show how experience is to be interpreted.”
Mill wrote those words in On Liberty, his timeless meditation on the limits of authority and the claims of individual freedom. In Chapter Two, he introduces what some thinkers refer to as Mill’s Trident. The Trident holds that (a) if an opinion is true, silencing it robs us of truth; (b) if an opinion is partly true, silencing it deprives us of knowledge we lacked; and (c) if an opinion is false, silencing it still harms us because answering objections strengthens understanding.
Implicit in this framing is the idea that suppressing an opinion assumes truth is settled and safe in the hands of authority, while allowing contest assumes truth is provisional and discovered through argument. It's the same belief that animated Mill’s work on the rights of women: that truth requires removing systematic barriers to both voice and experience.
If you hold a belief, it feels obvious and self-evident to you. Your reasons line up, your evidence seems solid, and your instincts tell you it must be right. The trouble is that you’re also the worst person to spot a mistake in your reasoning. When you construct an argument for something you already believe, you unconsciously select the evidence that fits, overlook what doesn’t, and frame your case in ways that make sense to you but might not to others. While we retain the capacity for self-correction, we systematically struggle to see our own blind spots.
That isn’t to say that challenge is inherently good. False beliefs can become more coherent, better defended, and more persuasive after being put to the question. And some ideas spread precisely by suppressing criticism and doubt. That’s clearly worrisome if the belief is harmful, but false beliefs tend to collapse over a long enough time horizon. They can survive for a while – sometimes for centuries – but their weaknesses accumulate as new evidence and arguments pile up. Truth, by contrast, has a habit of surviving contact with skepticism. Newton’s mechanics didn’t collapse when tested; they were refined. The germ theory of disease didn’t vanish when attacked; it gained credibility because it answered its critics.
But survival alone isn’t the whole story. A belief can be broadly true and still be held in a shallow way if it has never faced criticism. Contest helps you tease out which reasons are essential to your position by showing you where the claim applies and where it breaks (Newtonian mechanics remains superb at everyday scales, while relativity shows where it breaks down). True understanding often emerges when we face counter-arguments from someone who strongly believes we are mistaken, a person whose passion brings with it reasons that persuade in practice. In these moments we are forced to see our position from the outside, to reflect on our assumptions and come to terms with criticism.
Truth-seeking machines
Truth-seeking is personal. It’s a process by which individuals test their convictions and form their own beliefs. But it’s also a social phenomenon enacted through the clash of perspectives, the exchange of reasons, and the friction of experiences that no one person can generate. Seen this way, truth-seeking is a dynamic order fed by the ready circulation of challenge and reply. The health of that ecosystem turns on how widely and how freely those exchanges can flow.
To assess whether an AI is truth-seeking, we ought to ask whether it nourishes or constricts this ecosystem. One path is to build systems that freeze truth into doctrine, an oracle-like authority whose answers are beyond scrutiny. Such systems change the terrain of inquiry by reshaping what questions are surfaced. The other path is to build systems that welcome a clash of opinions, where knowledge grows through contestation, perspectives sharpen one another, and users remain active participants in the search for truth.
An attraction of the former, the centralized model, is that it promises stability. In a world awash with noise, the idea of a single system that sorts fact from falsehood has obvious appeal. It offers clarity where there is confusion, authority where there is doubt, and speed where deliberation can feel slow. For governments worried about the vitality of the information environment, or platforms anxious about liability, an oracle-like AI looks like an easy way to provide epistemic security.
We can already see what this model looks like in practice. AI systems trained for content moderation are designed to sort speech into the permissible and the impermissible, often on opaque grounds that users cannot interrogate. They scan posts, flag those that violate content policies, and remove them from circulation. If a user appeals, the decision moves up the chain to a human moderator who checks the flagged content against a written policy. The rub is that key questions – what counts as harm, which claims are debatable, which lines of inquiry are too risky – are settled in advance by a policy written elsewhere. Users may dispute a verdict, but they cannot contest the boundaries in which a decision is made.
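To make the structure concrete, here is a minimal sketch of that moderation loop in Python. Everything in it is hypothetical: the rule table and the function names (check_policy, appeal) are invented for illustration, not any platform’s real API. The detail to notice is where the policy lives: outside the loop, written in advance, untouched by appeals.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    removed: bool
    rule: str  # which policy clause was applied

# The policy is authored elsewhere, before any post is seen.
# These rules and keywords are invented for illustration.
POLICY = {
    "medical_misinfo": lambda post: "miracle cure" in post.lower(),
    "election_misinfo": lambda post: "stolen ballots" in post.lower(),
}

def check_policy(post: str) -> Verdict:
    """Sort a post into the permissible or the impermissible."""
    for rule, matches in POLICY.items():
        if matches(post):
            return Verdict(removed=True, rule=rule)
    return Verdict(removed=False, rule="none")

def appeal(post: str) -> Verdict:
    """A human reviewer re-checks the post against the same written
    policy. The verdict can change; the boundaries cannot."""
    return check_policy(post)
```

Nothing in the appeal path can modify POLICY: users can contest a verdict, never the rules that produced it, which is exactly the closure described above.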
The decentralized model, exemplified by X’s Community Notes feature, offers an alternative. Here, instead of a central authority deciding what counts as misinformation, users collectively add context that appears underneath contentious posts. Notes rise or fall in visibility depending on whether people from different viewpoints rate them as helpful. The idea is neat in principle, but in practice notes can take time to appear, don’t cover all posts, and may never surface at all if no cross-viewpoint consensus is reached. It may also be the case that, in finding “common ground,” this kind of solution consistently marginalizes more outlandish positions.
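The trade-off is easier to see in miniature. The sketch below is not X’s production algorithm (the real, open-sourced system fits a matrix-factorization model over rater histories); it is a deliberately simplified stand-in that scores a note by its least supportive viewpoint cluster, so a note only one side likes never surfaces.

```python
from collections import defaultdict

def note_score(ratings: list[tuple[str, bool]]) -> float:
    """ratings: (viewpoint_cluster, rated_helpful) pairs.
    Score a note by its *least* supportive cluster, so that
    cross-viewpoint agreement is required before it becomes visible."""
    by_cluster: dict[str, list[bool]] = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < 2:
        return 0.0  # no cross-viewpoint ratings yet: the note stays hidden
    return min(sum(votes) / len(votes) for votes in by_cluster.values())

# Helpful to both clusters, though contested in one: score 0.5.
print(note_score([("a", True), ("a", True), ("b", True), ("b", False)]))

# Loved by one cluster, unrated by any other: score 0.0, never shown.
print(note_score([("a", True), ("a", True)]))
```

The same mechanism that filters out one-sided notes produces the failure mode flagged above: if the clusters never converge, the score stays at zero and the post sits unannotated.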
Criticisms of the decentralized model mirror the objections to Mill’s own case for open contest. The most forceful is that the harm of falsehoods can outweigh the benefit of contest. Misinformation spreads faster than corrections, and in domains like health or elections the damage can be immediate or irreversible. Why tolerate the circulation of bogus medical advice or claims of stolen ballots when lives or democratic trust are on the line?
Mill heard versions of this in his own time: that liberty of discussion endangered social order, that error corrupted the uneducated, that some questions were too dangerous to debate. His answer was that the harm was real but the cure was worse: falsehoods become harder to dislodge without exposure and rebuttal. A claim that circulates in the open can be met, tested, and discredited; one driven underground becomes harder to track and easier to mythologize. In algorithmic environments, however, engagement optimization means false claims can circulate in the open while remaining immune to effective rebuttal.
That raises the stakes of Mill’s wager. These systems amplify the speed and reach of falsehood, which makes harms harder to contain. But they also expand the reach of correction by allowing rebuttals to travel further and faster than humans could manage. The question is whether we design these systems to dampen contest – removing disputed claims before they circulate – or to stage it in ways that surface stronger counterarguments. In the first case, AI becomes a mechanism of closure, narrowing the range of what can be said. In the second, it can become a mechanism of exposure that draws disagreements into the open and gives users the tools to navigate them.
Another vein of criticism suggests that Mill assumes a level playing field. Even at the time of writing, critics argued that his vision of open contest was naive because the majority of people lacked the education to participate meaningfully. Much like today, skeptics worried that liberty of discussion would empower demagogues to exploit the ignorance of the many while leaving the terms of debate set by the few. Then, detractors argued that the uneducated needed protection from dangerous ideas; now, the concern is that ordinary users will be misled by misinformation seen on social media platforms. In both cases the instinct is to remove material from view for the user’s own good.
The problem is that protecting people from arguments makes them less able to refute bad ideas when they inevitably resurface. Mill believed exposure sharpens reason, but others say that this account underestimates the limited nature of human attention and discipline. In his time, the charge was that most people lacked the education to weigh arguments properly, while today we fret that we simply don’t have the cognitive bandwidth in a world rich in information.
Contemporary commentators are correct that humans have only a narrow capacity for judgment, and that engagement-driven algorithms seem to systematically reward inflammatory content over careful reasoning. But that is exactly why this faculty must be exercised, and why we need institutional arrangements that sustain Mill’s process of contestation. Shielding us from difficult or misleading arguments may feel noble, but doing so weakens our capacity for discernment. Only by grappling with claims that stretch, unsettle, or even mislead us do we sharpen the skills required to change our minds.
By some estimates, AI already mediates a fifth of our waking hours. The systems we build will have the power to transmit arguments and to decide which ones are ever heard. They can be designed to harden truth into doctrine, presenting answers as final and beyond scrutiny. Or they can be built to keep disagreements visible, letting contest play out in ways that sharpen judgment. AI for truth-seeking ought to broaden the range of views in circulation and give people the tools to test them. It would make friction easier to encounter, not easier to avoid.
None of this guarantees that error vanishes or that judgment becomes effortless. It means only that the conditions for self-correction remain intact. Then as now, the response is the same: your curiosity and judgment remain the most powerful force for truth. AI doesn’t seek truth, people do.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.