Can Old Ideas Survive the AI Age?
Your questions answered: on philosophy, children, and China
Last week, we marked 20,000 subscribers by opening the floor. Your questions were often genuinely challenging. Thank you to everyone who took the time.
So you don’t have to trawl through the comments, we’ve compiled all the responses in one place. Apologies to anyone whose question didn’t make the cut. There were a few I’m still chewing over, and we’ll revisit many of these topics at greater length in future essays. Let us know in the comments if this is a format you’d like us to repeat.
Anna Lisa asks:
Are there specific non-academic experiences or “containers” of formation that you think hold a lot of promise? (e.g. ones that were meaningful to you or ones that you seek out for yourself/your children?)
As a dad, I love this question.
A lot of the important formation happens in places that do not look educational at all, and are not primarily about instruction. They are about habituation, responsibility, emulation, and contact with reality.
The first was the household I grew up in. My mother was a Catholic conservative historian from a military family who taught special needs kids for 36 years. My father was a left-leaning physicist turned environmental lawyer, and a pacifist. They agreed on almost nothing politically. But they agreed on something deeper: that you should care about something beyond yourself and that how you act matters more than what you know (and that knowing is bound up in doing!). That gave me, without anyone naming it, a kind of virtue culture, and I think it’s the kind of thing that’s very hard to manufacture deliberately but very obvious in its absence. For children, what seems to matter is not ideological uniformity or even ideological sophistication, but a home in which seriousness, duty, and moral aspiration are normal.
The second was the submarine. I spent 610 days underwater, including under ice. In a steel tube, you do not get to opt out of reality. You can’t leave, and your mistakes could get someone killed. That kind of environment forms you because it imposes standards that are not negotiable. It teaches service, competence, and mutual reliance in a way that is hard to simulate. I think containers of formation are often places with real stakes, shared discipline, and demands that do not bend to your preferences.
The third, and probably deepest, has been fatherhood. I had two kids and sold two companies in close succession, and while both changed my life, the children changed it more. I remember sitting in the corner of my room one night after putting Arden and Pierce down and asking myself whether I could write what I believed on a single sheet of paper. I couldn’t. That was the moment I started reading seriously, beginning with the ancients, who thought about these questions deeply and genuinely (my original list is here). Having children makes the question of what you’re actually “for” impossible to defer.
I now look for opportunities in my own children’s lives for containers that place them in contact with reality, responsibility, and admirable adults. The hard part is that the best formation is often a byproduct rather than something you can engineer directly. You can build the conditions for it, but you usually cannot force it.
Turner Halle asks:
You argue that philosopher-builders need explicit moral commitments to avoid optimizing for the wrong things. But your three pillars (truth-seeking, autonomy, decentralization) are themselves a normative framework that not everyone shares. China’s AI strategy is still coherent, explicit, and philosophical; it just starts from different premises. So how do you argue for your philosophy without just replacing one set of defaults with another? What makes Cosmos’s values the right foundation rather than just a well-packaged preference?
Hi Turner, you’re right that truth-seeking, autonomy, and decentralization are substantive commitments. I think they matter less as one moral doctrine than as conditions that keep moral life from collapsing into force or drift.
Consider moral frameworks from Confucianism to Christianity to Marxism: for any of them to have legitimate force over a person, that person has to be able to genuinely endorse it (otherwise, you don’t have a moral commitment). That endorsement depends on human autonomy, which is to say the capacity to reflect, evaluate, and take something on as your own rather than merely inheriting it or obeying it. So autonomy is not just one preference among others. It is the deep substratum that makes moral commitment possible at all.
Take utilitarianism: Jeremy Bentham devised a system that, in my mind, dissolves individual judgment into aggregate utility. This is in conflict with autonomy-as-an-end. And yet building this system was itself a radical exercise of autonomous reason. Every person who adopts utilitarianism is exercising the same capacity. You can’t be a utilitarian in any meaningful sense unless you’ve freely taken it on. So even a framework that subordinates individual judgment to aggregate welfare requires individual judgment to get off the ground. Now, someone could say that only makes autonomy instrumentally necessary, not foundationally important. I think that view is unstable, because the goods autonomy is supposedly serving only become moral goods for a person if they can in some real sense take them on as their own.
There are hard cases. In the Ash’ari tradition in Islamic theology, divine command constitutes moral value rather than being something reason independently discovers and then endorses. That’s a genuine challenge to autonomy as foundational. But even there, the person who freely chooses submission is doing something categorically different from the person who never had the choice. And a secular collectivist can make a parallel argument: that harmony or collective flourishing is the true precondition, because no individual life goes well outside a stable social order. I think that is partly right. But unless people can participate in judging the terms of that order, harmony becomes coordination imposed on them rather than a good they share in shaping.
Truth-seeking has a similar status. Any framework worthy of allegiance has to remain in contact with reality. It has to be open, at least in principle, to its own refutation. If someone could show me that decentralization produces a worse outcome in a domain I care about, I’d have to take that seriously, and I would. Systems that suppress truth-seeking can be internally coherent in the way that closed systems are coherent. But they can’t tolerate the mechanisms that would let them find out they are mistaken. That’s a serious defect, especially if you think we’re all operating under real uncertainty about what AI is going to do to human life.
And decentralization follows from the same logic at the institutional level. If no person or committee is wise enough to determine the good for everyone, then we should be wary of a small number of actors hard-coding their anthropology into the substrate of society. Decentralization is valuable because it preserves room for people to try things, get them wrong, and leave, keeping mistakes from becoming total.
So to your question about China: yes, their AI strategy is coherent, explicit, and philosophical. But coherence purchased by foreclosing the capacity for self-correction is brittle in exactly the way that matters most right now, though that is still a bet, not a proof. I’m also skeptical that it preserves the standing of human beings as agents capable of judgment rather than increasingly treating them as objects of coordination. A framework can be philosophically serious and still be wrong about what a person is.
So I would not say Cosmos is advancing a one true final doctrine that every civilization must affirm. That would be too strong, and it would collapse into exactly the kind of totalizing move you are warning against. I would say instead that Cosmos is trying to defend the conditions under which free people can genuinely seek truth, make judgments, form commitments, and build different kinds of lives together, without having orthodoxy imposed on them by default.
Salvador Duarte asks:
Will the Cosmos Grants ever open again?
We’re planning to re-open these in the next 60 days. We’ve just had the demos from our latest batches of winners. Stay tuned for updates!
Bert Clements asks:
Assuming frontier large language models, together with their multimodal and agentic extensions, are trained to effective saturation on an exhaustive corpus representing the totality of digitized human knowledge, including all scientific publications, books, patents, archival records, cultural artifacts, and recorded conversations, will these systems be capable of transcending the statistical manifold of their training distribution to autonomously discover, validate, and iteratively expand novel knowledge beyond the current human frontier?
I’m not sure scientific knowledge is a kind of territory where data defines a bounded region that a model may or may not be able to venture beyond. That picture strikes me as broadly empiricist: knowledge comes from data, and scientists make discoveries by extrapolating beyond what they already know. Within that picture, the strongest case for the affirmative lies in feedback-rich settings: theorem provers, simulators, multi-agent workflows, and verifiable rewards are exactly the kinds of environments where I would expect systems to extend the frontier.
But that is not the only picture of science, and I do not think it is the deepest one. Science also advances by reorganizing what researchers take to be meaningful in the first place: which anomalies matter, which questions are worth asking, and which explanations count as illuminating rather than merely predictive.
Systems can already generate novel candidate hypotheses, and in domains with strong automated verification, they may well extend the frontier. Formal mathematics looks especially promising, because conjecture can be paired with proof or disproof inside a relatively crisp evaluative architecture. In such cases, I expect AI systems to produce results that are genuinely new to humanity. Just this week a constellation of agents improved the best known bound on a problem whose lineage runs back to Newton (the kissing number in dimension 11: 593 → 604). That is impressive. It is also, I think, a good example of the distinction I’m drawing: a real extension of an existing line of inquiry, but still closer to powerful normal science than to scientific revolution.
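For readers who want the precise statement, the kissing number is the maximum count of non-overlapping unit spheres in n-dimensional space that can all touch a central unit sphere at once. In terms of the sphere centers (the definition is textbook material; the dimension-11 figures are simply the bounds quoted above):

\[
\tau_n = \max\bigl\{\, |S| : S \subset \mathbb{R}^n,\ \|x\| = 2 \text{ for all } x \in S,\ \|x - y\| \ge 2 \text{ for all distinct } x, y \in S \,\bigr\}
\]

Exact values are known only in dimensions 1, 2, 3, 4, 8, and 24; everywhere else, including dimension 11, we have only bounds, which is why progress takes the form of a raised lower bound.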
But that does not settle the larger question. There is a difference between producing novelty within an existing framework and generating a new framework altogether. A system may help prove a theorem, optimize a search, or identify that drug X affects disease Y, all without altering our understanding of why the problem is structured as it is. That is a real scientific contribution, but it is not reorganizing the conceptual landscape.
The harder question is whether these systems can exercise scientific judgment in the richer sense: whether they can tell which anomalies are significant, which inconsistencies are fertile, which explanations deepen understanding rather than merely extend prediction, and which questions are worth reorganizing inquiry around. That is a higher bar than novelty, and I am not yet convinced we know how to evaluate it well. Part of what makes this hard is that frameworks are underdetermined by data. The same body of results can often support multiple lines of inquiry, and judgment is what tells you which one is worth building a field around. That remains, to my mind, the deeper open question.
tappert asks:
Most of the current work on ‘AI, collective epistemic structures and decision-making’ focuses on filling gaps: more participants, faster information exchange, more efficient decision-making. This will help with many problems, but certainly not with the most complex ones, because it just accelerates the practical execution of the same thought styles that led to the problems. Therefore: How can we use future AI to foster new thought styles that are currently not supported by our existing social structures?
Yes, I think the intuition that better collective decisions will emerge if we simply gather more data from more people more efficiently breaks down at the limit. That can improve performance within an existing paradigm, but it does much less when the paradigm itself is the problem.
What groups develop over time are not just bodies of knowledge, but epistemic constitutions: implicit rules about what counts as evidence, which questions are legitimate, who gets to propose, who gets to criticize, and on what terms. Mill saw part of this in his account of the tyranny of prevailing opinion and the epistemic importance of dissent. But the problem runs deeper than opinion alone. Entire institutions decide in advance what counts as serious thought.
So one promising use of AI would be to make those constitutions more visible. A good system might show a research community, an organization, or a polity where its methods systematically exclude certain questions, place some assumptions beyond criticism, or discount certain voices before the argument even begins. In medicine, for example, it might reveal a field that privileges what is easily measurable while sidelining patient testimony or long-horizon effects that do not fit the dominant method.
But diagnosis is only the beginning. I like this direction because the problem is often not that new thought styles do not exist. It is that they remain stranded at the margins because the reigning structures of legitimacy suppress them. And sometimes the deeper problem is that the social conditions required for a new thought style have not yet been built. New thought styles need protected spaces, alternative standards, and enough provisional legitimacy to develop before the dominant paradigm dismisses them. In that case, the most useful contribution AI could make to collective epistemics is not novelty on demand, but widening the space in which criticism, recombination, and intellectual minority formation can occur.
Thomas Yiu asks:
What is your definition of intelligence? When AI reaches ASI in the future, do you think it will be safe and aligned? As a species, what is our purpose in a post-ASI world? How can we thrive as a species?
I’d resist the standard definition of intelligence as raw problem-solving horsepower. For me, intelligence is the capacity to learn from reality, inquire into it well, and let it correct you.
Part of what I like about François Chollet’s work, and why ARC Prize has mattered, is the insistence that intelligence is not the same thing as accumulated skill. A system can look impressive because it has absorbed an enormous amount, or because the task has been made easy for it. The more interesting question is how much it can learn from limited experience, under real constraints, and still generalize well.
But I do not think that is enough on its own. Leslie Valiant’s idea of educability gets closer to the human picture (see his Cosmos Lecture from last year here). Human intelligence includes the capacity to learn from experience, receive instruction, integrate both, and apply them in new circumstances. What distinguishes the human mind is not only that it learns, but that it can be taught and formed.
And I would add one more layer. Drawing on HAI Lab director Philipp Koralus, I think reasoning is fundamentally question-directed. Minds are shaped by the questions they pursue. They go wrong through shallow questions, premature closure, and a failure to inquire far enough, just as much as through false conclusions. That matters for AI because a system can become very good at answering questions while still narrowing the range of questions humans ask, or rewarding closure where inquiry ought to stay open.
That is why I’m less interested in arguing about whether AI will count as “superintelligent” than in asking what it does to human intelligence. A system can be extraordinarily capable and still erode our capacity for inquiry, judgment, and self-government. That is the danger I worry about most.
On whether ASI will be safe and aligned: I do not assume that can be taken for granted. I would trust highly capable systems only to the extent that they remain corrigible, contestable, and embedded in institutions that preserve human judgment rather than replacing it. The problem is not just getting the objective right once. It is making sure people can still question, revise, and refuse the system’s guidance when it matters most.
As for human purpose after ASI, I do not think our purpose changes. If anything, it comes into clearer view. We are not here to compete with machines at speed or scale. We are here to exercise judgment, form character, build institutions, love particular people, and deliberate about the good. That last point matters more than it may sound. Love is not interchangeable, and responsibility is not abstract. A more powerful machine does not make our obligations to particular human beings less central. In a world of highly capable AI, those things become even more important.
Todd Enkhbat asks:
Is it possible to carry on our learning from humanity up until now and jumpstart a new society with the help of AI, assuming that we can concentrate and utilize all the data we accumulated up until now? At what point does the need for a new constitution or a new world order arise, and how do we know it?
In short, no.
Firstly, I don’t think “all the data we accumulated up until now” is the same thing as the total weight of human knowledge. Much of the knowledge that keeps a society functioning is tacit, dispersed, and unwritten. Some of it lives in practiced judgment: an ICU nurse sensing that a patient is about to crash before the monitor shows it. Some of it lives in inherited forms: the habits of trust, restraint, and association on which a free society depends, even when no one can fully specify them. As Michael Polanyi put it, we know more than we can tell.
More importantly, I’d push back on the idea that we can jumpstart a society at all. Societies aren’t machines that you design to a blueprint. Tocqueville saw this in the institutions of local self-government. Hayek saw it in the way social orders carry dispersed knowledge that no planner can gather in full. A free society is learned in practice through things like townships, juries, churches, and associations. Those are the ordinary disciplines by which people become capable of governing themselves.
The question is whether our institutions can still sustain a free people capable of self-government under new technological conditions. And that does leave open the question you raise about constitutional inadequacy: how do we know when inherited arrangements are no longer enough? I do not think there is a clean threshold. Usually the signs are visible first in practice, when institutions that once formed judgment begin producing passivity, dependence, or elite insulation instead.
When they cannot, the answer is not a tabula rasa redesign of “the new world order.” I would look to renewal through institution-building, and Benjamin Franklin is the example I keep returning to. He took an Enlightenment conviction — that access to knowledge should not remain under the custody of church, state, or a narrow elite — and embodied it in an institution. The subscription library made a philosophy of freedom socially real. That is why Franklin still matters to me here. He shows what it looks like to translate a philosophy into civic machinery. We need the AI-age equivalent: institutions that widen access to knowledge and judgment without concentrating them in a few hands. We need philosopher-builders in that spirit again.
Miss Zanarkand asks:
How can we motivate our children to learn at school? Should we try to motivate them, or rather find a way out of the system? (e.g. reading more classical books, rather than encouraging them to read what schools assign nowadays?)
Young people have a natural longing to be seized by something greater than themselves. To be captivated. The promise of liberal education, going back to the Greeks, is that there are magnificent ways of living, and magnificent questions about how to live, and that encountering them through great minds and great books can awaken a desire that organizes everything else.
The disaster of modern education is that it has taught young people their longing is naive. That no book is really better than another, that no life is really higher than another, and that the hunger to be drawn upward by something extraordinary is itself a kind of error.
So I would say motivation is the right place to focus, but we should be precise about what we mean. There is a kind of motivation that is intrinsic: the eros I just described, the desire to encounter greatness because it calls to something real inside you. And there is extrinsic motivation: incentives, structure, well-designed systems that make it easier to do the work. Both matter. The best schools I’ve seen, including Alpha, where my kids go, are serious about the extrinsic architecture. They’ve built an environment where children actually want to show up and work.
Extrinsic design clears the path, but then you have to light the fire. The fire is eros, and it’s fed by contact with things worthy of love: books, questions, lives, guides who still care about these things enough to take them seriously in front of children.
Whether that happens at school or at home is incidental. What matters is that a child sees adults who are genuinely stirred by ideas, who return to certain books not because they were assigned but because they can’t leave them alone. A six-year-old can learn a lot about what seriousness looks like by watching someone practice it.
Eugene Yiga asks:
The accelerationist world still seems to dominate the public narrative by communicating in everyday language on everyday platforms in a way that meets people where they actually are. Meanwhile, even the most accessible AI ethics content tends to assume familiarity with Mill, Tocqueville, or Heidegger. The philosopher-builder framing is compelling to people already inside the tent. How does Cosmos think about the people outside it? Is philosophical depth a feature for the community you’re building, or a barrier to the broader cultural shift you want to see?
The honest answer is that depth is the point. If we watered down the philosophy so we could meet everyone where they are, we’d be producing the same frictionless content you see elsewhere. Philosophical seriousness creates a negative selection gradient, and we want that. The people who do the reading are the people most likely to build something different.
But “depth” and “jargon” aren’t the same thing. A lot of AI ethics writing assumes you’ve already read Heidegger or whomever, which risks filtering out precisely the builders who might be transformed by reading him. I know this because I’ve made the mistake myself. When I started writing this Substack I leaned on more jargon than I needed to, and I’ve had to learn over time how to make the ideas more accessible without making them thinner.
The people outside the tent aren’t who you might think. I sold two companies and wrote a national AI strategy, and I couldn’t write what I believed on a single sheet of paper. There are a lot of capable builders out there who never had anyone hand them the books or sit with them through the hard parts. Cosmos partly exists because I was one of them. The audience for this is bigger than it looks.
Where I’d push back on your framing is the implicit suggestion that the accelerationists win because they’re more accessible. They have their own jargon. Try reading about negentropy, Kardashev III, and thermodynamic civilizational substrate for the first time. What they’ve done well is compress a real conceptual core into memes that travel. I respect that.
The challenge for us is that some ideas compress more easily than others. “Build faster” is more memeable than “cultivate judgment.” “Technology goes up” fits on a poster. “The conditions under which free people can exercise genuine choice require institutional renewal” does not.
This logic holds for political movements more generally: the larger the audience you try to build, the cruder the message has to become. The lowest common denominator wins by default, not because it’s right but because it compresses. I don’t think the answer is to compete on that terrain. I think it’s to make the longer argument compelling enough that people seek it out, and to be honest that not everyone will.
The harder truth is that we live in a culture of secondary orality where the long coherent essay is increasingly marginal. That’s a loss. It makes what we do at Cosmos more countercultural than it would have been fifty years ago, but it also makes it more necessary. The essay, the book, the salon: these are the forms where ideas actually get tested rather than just transmitted. We’re not going to stop producing them because the culture has moved on. If anything, the fact that sustained argument is now unusual is exactly why it matters.
Emily Kittley asks:
For someone coming to AI without a technical background but with a strong interest in understanding its societal and philosophical implications, what foundational books or resources would you recommend?
Second, as a parent, I’m thinking about how to prepare my kids for a world where AI is increasingly embedded in everyday life. Beyond basic digital literacy, what kinds of skills, habits, or ways of thinking do you believe will matter most for the next generation? Are there age-appropriate tools or frameworks you’d recommend for introducing AI concepts early in a thoughtful, not just utilitarian, way?
Hi Emily :)
I’ll take the kids question first because it’s closer to my heart.
The risk I think about most is what I’ve called “autocomplete for life”: the possibility that AI systems will increasingly shape not just what our children do but how they deliberate about what’s worth doing. Each small delegation of judgment seems harmless. But together, they habituate a person away from self-governance and toward dependence. The question for parents is how you build resistance to that drift before your child is old enough to name it.
Our ancestors needed to know how to make bread. We need to know where to find the recipe. The next generation will need something different again: the capacity to think about how they think, in relation to systems that could do the thinking for them.
In our household, the main way we work on this is Socratic conversation. Arden and Pierce do weekly sessions with Michael Strong built entirely around questions. “What’s the difference between a bird and a plane?” “What does it mean for something to be alive?” “When mommy and daddy disagree, who is right? What about daddy vs. AI? What about AI vs. AI?” A child who has practiced working out what they believe, and who has had to think about whether to trust their own judgment or defer to an external authority, is better prepared for a world of algorithmic suggestion than a child who has learned to code.
I also want my kids to be entrepreneurial. When America was founded, around 80% of free workers were self-employed on farms or in small crafts. Today that number is about 10%. We became a society of employees, and something atrophied. As the economy changes again, the ability to know yourself, act on what you believe, and build something from that conviction will matter more than any technical skill we could teach them now.
On resources for someone coming to AI without a technical background: I’d start with the question of what AI does to us rather than how AI works. A couple of recent pieces that I’d recommend are Séb Krier’s Musings on Self-Recursive Improvement and Alex Imas’s What Will Be Scarce. For ongoing reading, Jack Clark, Azeem Azhar, and Ethan Mollick regularly write about AI and society. Jasmine Sun and Henrik Karlsson have a wider aperture and I often find them thought-provoking. For anyone interested in AI’s effects on democracy and self-governance, Harvey Mansfield’s Tocqueville: A Very Short Introduction is the best ~100 pages you could spend. Tocqueville saw the drift toward comfortable dependence coming two centuries ago. The application to AI is left to the reader, but it isn’t hard to find.
Substack Joe asks:
My sense is that the vision animating Cosmos has deep predecessors not just in classical philosophy but, in my impression, religious eschatology. Teilhard de Chardin’s Omega Point or Augustine’s City of God, and even secular variants like Condorcet’s perfectibilism all share your orientation toward civilizational-scale transformation in service of human flourishing.
More explicitly, your pillars of reason, autonomy, and decentralization also echo the long Aristotelian and classical liberal tradition from Mill to Tocqueville.
So, what does Cosmos contribute that is genuinely novel in its normative architecture, rather than a restatement of those traditions in the presence of AI? And if it is largely a restatement, is that a problem?
I think you’re closer to the mark with some of these influences than others.
Teilhard, Augustine, Condorcet: I share their impulse toward civilizational-scale thinking, and I take it seriously. But for all their differences, they are ultimately teleological writers. They saw history as the unfolding of a determined, directional arc. At Cosmos, we want to keep the conditions open that allow people to find their own path. We’re not about to get into eschatology.
You are, of course, completely right about Aristotle, Mill, and Tocqueville, and we regularly acknowledge our intellectual debt to them. I don’t think the pillars need to be new to be worth defending, and I’d be suspicious of anyone claiming to have invented a wholly new account of human flourishing in 2026.
For me, the interesting question isn’t whether Cosmos has discovered a value nobody thought of before. Instead, it’s whether an old set of commitments can survive as a living practice. Mill didn’t have to ask whether the harm principle could be encoded in a model’s training objective. Tocqueville didn’t have to think about what decentralization looks like when the substrate is compute rather than townships, when the everyday infrastructure of life anticipates your choices rather than forcing you to deliberate, associate, and decide alongside your neighbors. When your community is mediated by algorithmic curation and your civic life is shaped by systems you never consented to and cannot inspect, the Tocquevillian question of how free people learn to govern themselves together doesn’t disappear. It becomes harder, and the institutional forms it requires don’t exist yet.
That’s where your last point lands, and I think it’s the right one. The proudest achievement of the eighteenth century was the translation of philosophy into law: Enlightenment commitments about liberty, consent, and the rights of individuals became encoded in constitutions and legal systems that gave them institutional force. The challenge of the twenty-first century is the translation of philosophy into code. The commitments are old. The work of making them operative in the infrastructure that actually governs daily life is new, and it is the work Cosmos exists to do.
But I wouldn’t call what we’re doing a restatement. Restatement is what you do in a seminar. Institutional embodiment is what you do when you think the ideas actually matter and must be operative in the AI age.
Thomas Dias asks:
What do you think of the prospects for a stable, left-right coalition on AI in favor of sensible regulation and general cautious optimism that includes religious conservatives and secular social democrats? Or will this get polarized across political lines like everything else?
On the coalition point, I can already see signs of this. Religious conservatives and secular social democrats agree on little, but they intuitively grasp some things that many accelerationists don’t: that people are formed by their communities, that work and dignity are connected, and that we shouldn’t try to optimize society into passivity. I’d throw old-school liberals into that coalition too. In the coming years, I’m sure there’ll be scope for productive, broad-based conversations about kids, loneliness, work, and communities.
Where I’d push back is the idea that any future coalition should coalesce around “sensible regulation.” I don’t think regulation is the best tool for addressing most of these concerns. Treating it as the default is how you end up with something like the EU AI Act, a classic example of doctor-induced illness. It created a compliance moat that only the largest companies can afford to cross, while doing essentially nothing to address the risks it was supposed to mitigate.
The more productive ground is further upstream. What are we building? What do we fund? What should we teach? What institutions do we need to form? A coalition focused on those questions would look less like a regulatory body and more like a network of individuals doing the building, teaching, and funding that no regulation can mandate.
I’m less worried about polarization acting as an obstacle here. Much of this work sits outside electoral politics at the moment, and as far as I’m concerned the longer that remains the case the better. Partisan dynamics reward exactly the kind of simplification that makes these questions worse. The moment AI becomes a left-right issue, the entire conversation becomes about how much to regulate, and the question of what to build for never gets asked.
Alina asks:
Here is my question: Your three pillars (truth-seeking, autonomy, and decentralisation) are compelling at the individual level. I am curious how you think about them when the actors are states rather than individuals. The US-China AI dynamic, for instance, seems to run against all three: opacity rather than truth-seeking, control rather than autonomy, and concentration rather than decentralisation. Does Cosmos’s framework extend to the question of how countries could potentially cooperate on AI, or does that require a different philosophical foundation entirely?
Thanks Alina, great question.
The pillars were designed with individuals and institutions in mind, so extending them to the state level requires real philosophical work.
Fichte took the Kantian account of individual autonomy and argued that it applied to nations: a people that cannot determine its own form of life is unfree in the same sense an individual under tutelage is unfree. The autonomy pillar, taken seriously, has a national analogue. So does truth-seeking: a polity that can’t inquire openly into its own condition is in the same trap as a closed mind. And so does decentralization: a world of self-governing peoples is the international expression of the same instinct that makes you wary of concentrated power inside a country.
But Fichte also shows you what happens when you scale autonomy alone. His attempt to extend individual self-determination to the collective ended in arguments for the unique world-historical mission of the German nation, an autarkic closed state, and the exclusion of those who didn’t fit the national community. The lesson isn’t just “be careful.” It’s that the three pillars need to travel together. Autonomy without truth-seeking becomes self-righteousness. Autonomy without decentralization becomes domination. What checks national self-determination is the same thing that checks individual self-determination: openness to correction and the refusal to concentrate power beyond what can be held accountable.
On US-China, the goal isn’t a single global regime that imposes one model of AI governance on everyone, because that would violate the decentralization commitment at the international scale. The better question is: what conditions allow distinct political communities to develop AI in line with their own forms of life without crushing each other in the process?
And what happens when a community’s “form of life” involves suppressing the autonomy of its own citizens? The pillars can come into tension here. Respect for national self-determination and respect for individual autonomy pull in opposite directions.
This is where Tocqueville matters most. The meaningful unit of self-government is rarely the nation-state on its own. It’s the dense layer of associations, communities, firms, religious groups, and local institutions that sit between the individual and the state. Any serious thinking about international AI governance has to make room for those middle layers. Tocqueville saw that democratic freedom doesn’t live in declarations from the center, but in the practice of self-government at the local and associational level.
Mark Frazier asks:
Can you set up a path for crowdfunding projects or contests to realize ideas that the Cosmos Institute seeds?
Interesting. Not something we’ve considered, but we’ll think about whether there’s a model that works for us.
George asks:
Do you plan to have online cohorts?
No plans right now, but we may consider it in the future!
Sarthak D asks:
I see all these wonderful essays and people doing great work. Honestly, I would love to interact with the community and become part of it in some capacity. Is there a channel where people who are interested in the ideas Cosmos is working towards, but are not necessarily academics or builders, can communicate with the fellows and the team?
Not right now, but we are thinking about whether there’s something we can do here!
Kevin Cutright asks:
I’m persuaded by the concern about cognitive risks and the need for “AI for epistemics,” “deliberative AI,” etc. Do you know of organizations developing benchmarks around the goal of bolstering critical thinking and improving epistemic processes and outcomes?
We have some grant projects that have focused on this. Two that come to mind are DeliberationBench, which assesses AI persuasion in comparison with diverse human discussion, and Priori, a tool that surfaces hidden assumptions when you are interacting with an AI model. Two of our grantees (Steven Molotnikov and Cathy Fang) are running a research study on how Priori and related human oversight interfaces work in practice.
I think there is a wave of energy in this area. Various orgs are thinking more about AI for Human Reasoning (with Future of Life Foundation funding work in this area, Forethought writing about it, and Elicit working on it directly in the for-profit space). Anecdotally, I also hear researchers thinking more about ideas like “epistemic security,” “cognitive security,” and “cognitive sovereignty,” as well as ways to improve information environments without restricting speech and expression.
I share your enthusiasm for more work in this area – both on benchmarking and on technology that better enables open contestation of ideas (inspired by classical liberal premises, and Mill’s ideas on this). If readers are working on this, please do reach out!
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund AI prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.


