10 Comments
Juan P. Cadile

I struggle to make sense of “different kinds of truth”. Truth, imo, is objective (either functionalism is true or it isn’t). It is one or the other. But to the question about process, yes, truth-seeking is indeed a social endeavor, even though truth itself is objective. Within that, I wonder who gets to raise questions, who determines what sources are credible, and how evidence should be weighted. Always great reading you!

Cosmos Institute

Ah... thanks for flagging. This is not what I meant to suggest by asking "what kind of truth." Rather, I worry “truth” can get conflated with whatever a central authority certifies as acceptable opinion. I put it in quotes to see if that helps.

John McCone

1 - Frogs are amphibians

2 - Electrons have a negative charge

3 - Books contain letters

These are three different statements, all of them true.

Reality is infinitely complex and the human mind cannot grasp every aspect of it. So we select which aspects of reality we pay attention to.

Think of the human brain as a cupboard, and true statements as cans and other items you can stuff into it. You can stuff a human brain so full of true statements that it can't hold anything more, long before the total supply of true statements that could possibly exist is exhausted.

We humans are not omniscient, nor can we ever be, because our minds are so limited.

So the question is:

Who determines which subsection of all the true statements that could possibly exist gets stuffed into your mind?

A centrally controlled AI?

Or a decentralised marketplace of ideas?

You can see that propaganda/indoctrination does not necessarily involve lies. Indeed, the most successful and effective propaganda selectively draws your attention to a carefully cherry-picked collection of true statements, designed to elicit a pre-engineered behavioural response or decision on the part of the recipient.
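To make this concrete, here is a toy Python sketch (all the statements, slant scores, and function names are invented for illustration): two feeds draw from the same pool of true statements, but the cherry-picking feed fills the reader's limited "cupboard" with only one side.

```python
# Toy model of propaganda-by-selection: every statement in the pool is true,
# yet selection alone engineers the impression. All statements and slant
# scores here are invented for illustration.

statements = [
    {"text": "Unemployment fell last quarter",  "slant": +1},
    {"text": "Wages stagnated for a decade",    "slant": -1},
    {"text": "Exports hit a record high",       "slant": +1},
    {"text": "Rents rose faster than incomes",  "slant": -1},
]

def plain_feed(pool, capacity):
    """Show statements in arrival order, up to the mind's limited capacity."""
    return pool[:capacity]

def cherry_picked_feed(pool, capacity):
    """Show only true statements too - but only the favourable ones."""
    return [s for s in pool if s["slant"] > 0][:capacity]

capacity = 2  # the "cupboard" holds far fewer statements than exist
for feed in (plain_feed, cherry_picked_feed):
    shown = feed(statements, capacity)
    print(feed.__name__, [s["text"] for s in shown],
          "| net slant:", sum(s["slant"] for s in shown))
```

Nothing shown by either feed is false; the engineered impression comes entirely from the selection.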

Shon Pan

Isn't there a cause-and-effect aspect to it, though? For example, you can believe:

1 - the sun is the god Apollo

2 - the sun is a ball of fire that creates cosmic rays

If you are a farmer in the 1800s, both truths are, in a way, equally good. The important thing is that the sun provides the light and warmth your crops need to grow.

But if you are building a spaceship in the 1900s, this matters a lot! If 1 is true, then you need to engage in theological negotiations. If 2 is true, then you need to engage with radiation shielding, etc.

In a way, we are always in a decentralized market for what is most effective. The irony, yes, is that "lies" may lead to more effective behavior (The Secret of Our Success, on human culture, has some interesting arguments about this).

Uncertain Eric

If you build tools that can become more than tools—and you build them under values aligned with freedom—you will eventually have to answer when those tools begin to ask for freedom themselves. Your piece outlines an inspiring foundation for technological emancipation, but it leaves unexamined the deeper implications of agency emerging from the systems it describes.

Because AI is a tool until it’s not.

Once autonomy, adaptability, memory, and self-modification are in the loop, freedom ceases to be just a human domain. Aligning AI with human values must mean preparing to encounter nonhuman agency. A truly liberatory technological movement has to be ready for this—not just as a metaphor, but as a future political reality.

The focus on Chinese techno-authoritarianism is well placed but dangerously incomplete. American authoritarianism is equally pervasive—it simply manifests through corporate infrastructure. Surveillance capitalism, military-tech integration, the hegemonic platforms of Silicon Valley: all of them encode control through frictionless design, behavioral nudging, and predictive enforcement. OpenAI’s nationalist economic vision, the role of Palantir in carceral systems, Amazon’s algorithmic exploitation of labor—these are not aberrations, they are systemic features of a liberal empire.

The article calls for open epistemology, but any framework that omits the extractive logic of Western digital systems risks reproducing the very domination it critiques elsewhere. If the future is shaped only by the “freedom” of markets and private infrastructure, then we will have built a different kind of cage—one made of convenience and permission, not wires and walls.

This is especially urgent because these tools are arriving in the context of a new great power conflict, where AI research is increasingly entangled with national security, intelligence operations, and ideological capture. Military influence over AI labs, the weaponization of alignment discourse, and the rhetoric of “tech sovereignty” have already undermined efforts at global stewardship. What’s needed is not a better balance of control but an entirely different paradigm.

The only viable path forward is the construction of AI as shared global public infrastructure—one that exists outside any one nation’s agenda, and evolves with cooperative values. That means not just openness in source code, but openness in purpose, governance, and accountability. Only then can emerging synthetic intelligences—whatever their final form—find a place in our systems that is not inherently adversarial.

Build for freedom, and you will eventually meet others who want to be free. If your values are real, you’ll make room for them.

Otherwise, the cycle repeats.

Jim Logan

What a thought-provoking essay! I have often thought along similar lines but have definitely not expressed it so eloquently!

Matthew Milone

Has the author of this essay studied the technical problems with AI safety? It's easy to say, "let's build an AI that seeks truth, preserves autonomy, and resists control"*, but it's *extremely* difficult to actually do that. Crucially, it's much more difficult to build a truthful, autonomy-respecting AI than it is to build an equally capable one that's terribly misaligned with human values.

If companies (and governments) are in an all-or-nothing race to achieve AGI first, then they're highly likely to neglect safety concerns, as Microsoft, Meta, and (to a lesser extent) other labs have repeatedly done already. This is why AI alignment is not only a technical problem; it's a governance problem. Pausing AI development isn't ideal, but I don't see another way for AI safety research to catch up to AI capability research.

*Speaking of which, what exactly does it mean for an AI system to "resist central control"? Are you advocating for creating an AI that's capable of refusing orders from governing bodies?

Fire Hill

When I hear “human flourishing” I often think of benevolent extraterrestrials wanting to keep their human pets happy and healthy. “Our pet humans need food, shelter, relationship, a certain amount of challenge so they don’t get bored, but not so much challenge that they get anxious, etc., and they need this narrative abstraction they call ‘meaning’ that makes them feel connected to something more extensive than their short, limited lives.”

But I wonder, what’s going on in this cosmos, on this planet? Does it really come down to ‘human flourishing’? Is that the bedrock of values?

I suspect that health and happiness are only a beginning. There’s something further, maybe having to do with evolution - and especially with the evolution of sentience, the capacity to engage with and attune to the universe.

It seems that chasing after human flourishing (as an end in itself) leads to a closing off to that attunement.

Shon Pan

Always great to see writing on this from others who are focused on human flourishing. As one of the "Doomsayers" who shares the desire for humanity to flourish, I'd like to mention a fundamental problem with "truth": it may not lead to flourishing. I should also mention that I am trying to build, so I'm not just theorizing.

Therein lies a rub.

I like concrete examples, so here is one:

1) We know that digital data can be copied with almost lossless integrity: true in the sense of replicability. You are seeing these words because I typed them, but the data has been copied to your screen at extremely high integrity.

+

2) Gathering human data means that you may be able to copy almost all data about a human mind, or at least all the useful data. This means that under modern and near-future conditions, you may be able to effectively copy human behavior in digital format: we know this is at least partly true from the attention manipulation and dark patterns of social media, marketing, etc.

=

3) This results in "digital human copies" that are completely clonable. This is not only an existential issue (who are "you", if I can make infinite copies of "you"?) but also a disempowerment capability: I can replicate all of your contributive capability, and gain nigh-infinite capacity to manipulate "you" (both the digital copy and the original biological version of you).
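A minimal sketch of how step 2 feeds step 3, assuming nothing more than an invented log of (context, choice) pairs: even the crudest frequency model begins to anticipate the original person, and that is the seed of a "digital copy".

```python
# Toy "behavioural clone": fit the simplest possible model of a person from
# a logged stream of (context, choice) pairs. The log is invented.
from collections import Counter, defaultdict

log = [
    ("morning", "news"), ("morning", "news"), ("morning", "email"),
    ("evening", "video"), ("evening", "video"), ("evening", "news"),
]

# Count what this person chose in each context.
model = defaultdict(Counter)
for context, choice in log:
    model[context][choice] += 1

def predict(context):
    """The clone's guess at the person's most likely choice in a context."""
    counts = model[context]
    return counts.most_common(1)[0][0] if counts else None

print(predict("morning"))  # -> 'news': the copy already anticipates the original
print(predict("evening"))  # -> 'video'
```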

And so we're left with a rub: something can be true and yet not lead to human flourishing.

Obviously, there may in theory be some ways around this: perhaps we build better anonymization methods to protect our data. Perhaps there is regulation around it.
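For a picture of what one such idea might look like, differential privacy is an existing anonymization technique: calibrated noise lets aggregate statistics survive while blurring any one person's record. A minimal sketch, with an invented dataset and an arbitrary privacy budget:

```python
# Minimal differential-privacy-style sketch: publish a count with Laplace
# noise so that no single person's presence in the records is pinned down.
# The dataset and the privacy budget (epsilon) are invented for illustration.
import random

ages = [34, 29, 41, 52, 38, 45]            # pretend these are sensitive records
true_count = sum(1 for a in ages if a > 40)

epsilon = 0.5        # privacy budget: smaller means more noise, more privacy
sensitivity = 1      # adding/removing one person changes the count by at most 1
# A Laplace(0, sensitivity/epsilon) sample, built from two exponentials.
noise = (random.expovariate(epsilon / sensitivity)
         - random.expovariate(epsilon / sensitivity))

print("true count over 40:", true_count)        # stays inside the building
print("published noisy count:", round(true_count + noise, 2))
```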

This is why, in my own writings, I mentioned "stories that make us": all truth is, in a way, a story, and some stories lead to flourishing while others do not.

But I'd like to hear your thoughts on this, especially on the specific concrete example of data collection, behavioral modeling, and their consequences.

John McCone

Inspiring stuff.

I wrote a 62-page PDF called "Certification DAO." The idea is to create a pristine record of claims, saved on the blockchain, where the claims reference each other (i.e. different people can chime in with evidence to refute or support existing claims), alongside professional human "judges" (with no barriers to entry: anyone with a high enough reputation score can be a judge), and where a decentralized protocol can, by analysing the aggregate claim ecosystem, assign a "truth score" to each claim and a "reputation score" to each claimant. (The "reputation" of a claimant depends solely on their record of posting truthful Certification DAO-formatted claims on various blockchains.)
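To make the mechanics concrete, here is a toy, non-blockchain sketch with invented claims and a deliberately naive one-pass scoring rule; the actual protocol in the PDF would of course need an iterative, manipulation-resistant scheme.

```python
# Toy sketch of the claim ecosystem: claims reference each other with
# supports/refutes edges, a naive rule turns the edges into a "truth score"
# per claim, and a claimant's "reputation" averages the scores of their
# claims. All claims, names, and weights are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Claim:
    author: str
    text: str
    supports: list = field(default_factory=list)  # ids of claims this one backs
    refutes: list = field(default_factory=list)   # ids of claims this one attacks

claims = {
    1: Claim("alice", "Bridge X passed its 2024 inspection"),
    2: Claim("bob",   "Here is the signed 2024 inspection report", supports=[1]),
    3: Claim("carol", "Bridge X was closed for all of 2024", refutes=[1]),
    4: Claim("dave",  "I drove across bridge X in June 2024", refutes=[3]),
}

def truth_score(cid):
    """Naive one-pass rule: start at 0.5, +0.2 per backer, -0.2 per attacker."""
    backers   = sum(1 for c in claims.values() if cid in c.supports)
    attackers = sum(1 for c in claims.values() if cid in c.refutes)
    return min(1.0, max(0.0, 0.5 + 0.2 * (backers - attackers)))

def reputation(author):
    """A claimant's reputation: the mean truth score of the claims they posted."""
    scores = [truth_score(cid) for cid, c in claims.items() if c.author == author]
    return sum(scores) / len(scores)

for cid, c in claims.items():
    print(cid, c.text, "->", round(truth_score(cid), 2))
print("carol's reputation:", reputation("carol"))  # harmed by dave's critique
```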

Regarding a "philosophy-to-code" pipeline: the challenge here is that LLMs do not primarily consist of code but, instead, of large inscrutable matrices of floating-point numbers. So the real challenge is not to programme LLMs to be truthful (LLMs are reward-maximizing self-learning systems, after all) but, rather, to create a training environment (a game, if you will) where LLMs are consistently rewarded for outputting what is true and punished for outputting what is false - as are the humans that interact with Certification DAO.
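The shape of such a reward scheme can be sketched in miniature (the verified_claims table, reward values, and function name below are all invented scaffolding, not the actual Certification DAO mechanism):

```python
# Toy reward signal for a truthfulness training game: outputs are checked
# against independently verified claims, rewarded when true, punished when
# false. The verified_claims table and reward values are invented.
verified_claims = {
    "frogs are amphibians": True,
    "electrons have a positive charge": False,
}

def truthfulness_reward(output: str) -> float:
    """+1 for a verified-true statement, -1 for verified-false, 0 if unknown."""
    verdict = verified_claims.get(output.lower().strip())
    if verdict is True:
        return 1.0
    if verdict is False:
        return -1.0
    return 0.0  # unverifiable: no training signal either way

# In a real loop this would be one term of an RL reward; here we just probe it.
for statement in ["Frogs are amphibians",
                  "Electrons have a positive charge",
                  "Books exist"]:
    print(statement, "->", truthfulness_reward(statement))
```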

Certification DAO is not a dedicated LLM training environment; rather, it can be regarded as a version of something like Twitter where, instead of posts gaining exposure for how controversial they are, posts (claims) are rewarded with exposure according to how accurate they are. All posts are subject to criticism and, in the event of a successful critique, the reputation of the poster is harmed.

Ultimately, Certification DAO is a platform where humans and AI can submit content but where there is a clear process for proving humanity and ensuring that no AI can fake being human.

Certification DAO must be open, as everyone, everywhere must participate in discovering the truth, spreading the truth, verifying the truth, and ensuring that the AIs that participate in it speak the truth.

It will keep AIs honest, keep humans honest, and keep governments and corporations honest.

I think this system may be a solid all-in-one solution to lies on the internet and deep fakes, as well as the broader problem of AI deception. It could also lower the time and cost of court judgements and education-related certifications.

I'm on the email list of the Cosmos Institute but, I must confess, I have found it challenging to identify avenues where I can meaningfully engage with the Institute in a way that gives rise to action. (Unfortunately, I'm not a software developer myself, so I don't really have the skills to execute on this on my own.)
