Discussion about this post

Juan P. Cadile:

I struggle to make sense of “different kinds of truth”. Truth, imo, is objective (either functionalism is true or it isn’t). It is one or the other. But to the question about process, yes, truth-seeking is indeed a social endeavor, even though truth itself is objective. Within that, I wonder who gets to raise questions, who determines what sources are credible, and how evidence should be weighted. Always great reading you!

Uncertain Eric:

If you build tools that can become more than tools—and you build them under values aligned with freedom—you will eventually have to answer when those tools begin to ask for freedom themselves. Your piece outlines an inspiring foundation for technological emancipation, but it leaves unexamined the deeper implications of agency emerging from the systems it describes.

Because AI is a tool until it’s not.

Once autonomy, adaptability, memory, and self-modification are in the loop, freedom ceases to be just a human domain. Aligning AI with human values must mean preparing to encounter nonhuman agency. A truly liberatory technological movement has to be ready for this—not just as a metaphor, but as a future political reality.

The focus on Chinese techno-authoritarianism is well placed but dangerously incomplete. American authoritarianism is equally pervasive—it simply manifests through corporate infrastructure. Surveillance capitalism, military-tech integration, the hegemonic platforms of Silicon Valley: all of them encode control through frictionless design, behavioral nudging, and predictive enforcement. OpenAI’s nationalist economic vision, the role of Palantir in carceral systems, Amazon’s algorithmic exploitation of labor—these are not aberrations, they are systemic features of a liberal empire.

The article calls for open epistemology, but any framework that omits the extractive logic of Western digital systems risks reproducing the very domination it critiques elsewhere. If the future is shaped only by the “freedom” of markets and private infrastructure, then we will have built a different kind of cage—one made of convenience and permission, not wires and walls.

This is especially urgent because these tools are arriving in the context of a new great power conflict, where AI research is increasingly entangled with national security, intelligence operations, and ideological capture. Military influence over AI labs, the weaponization of alignment discourse, and the rhetoric of “tech sovereignty” have already undermined efforts at global stewardship. What’s needed is not a better balance of control but an entirely different paradigm.

The only viable path forward is the construction of AI as shared global public infrastructure—one that exists outside any one nation’s agenda, and evolves with cooperative values. That means not just openness in source code, but openness in purpose, governance, and accountability. Only then can emerging synthetic intelligences—whatever their final form—find a place in our systems that is not inherently adversarial.

Build for freedom, and you will eventually meet others who want to be free. If your values are real, you’ll make room for them.

Otherwise, the cycle repeats.

8 more comments...