26 Comments
QuasiAntipodean:

What you’ve written resonates deeply. I’m one of the people who walked away from systems that optimized institutions or profit over purpose and instead built one grounded in restoration, discernment, and generational renewal.

We don’t need more clever tools, we need more whole humans.

That’s what I work on. Happy to share what I’ve learned if the tribe is gathering.

Tashi Nyima:

We don’t need more smart people, we need more compassionate people.

Alexander Humpert:

Couldn’t agree more! It’s so seductive to build just for building’s sake.

I’m hopeful that AI will lower the barriers that have divided the social sciences from STEM and create a new generation of builders who put people first, and not just from a “good UX/UI” perspective.

Hollis Robbins (@Anecdotal):

Absolutely! My view is that we need to start encouraging a building (rather than a measuring, correcting, or policing) imagination early. We have the tools to envision better with AI. Imagine new worlds! https://hollisrobbinsanecdotal.substack.com/p/toward-science-fiction-education

一休兒:

I had a similar vision recently while writing https://yixiuer.substack.com/p/next-gen-compute-platform. As Albert Einstein said, “Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.”

Jesse Parent:

Yes, I am working specifically on this sort of destination awareness in some of my projects: "Every builder’s first duty is philosophical: to decide what they should build for. This duty has largely been forgotten."

We need to enable, teach, and empower people to do this. Time is of the essence: some aspects of our lives are rushing ahead while others stand still or appear to go backwards. How can we develop thinkers, doers, and builders within this context? That's a major challenge of our shared moment.

Alexander Humpert:

Timely and well-articulated piece.

As AI lowers the barriers to entry, we’ll likely see more people from the humanities shaping the design of our technological system, hopefully embedding Enlightenment ideals in the Humboldtian sense: technologies that minimize coercion and protect the freedom to inquire and create.

At the same time, we’re witnessing the rise of a new elite class in big tech… consolidating power and often promoting ideologies fundamentally at odds with democratic principles (Curtis Yarvin comes to mind).

The tension between these two trajectories will define much of what’s to come!

Being Jolly:

Thank you!

Thomas F. Webber:

I agree with this sentiment. It feels like what they are trying to do with Khan Academy. It makes me wonder what ideologies get embedded in design when the step of mindfulness you're advocating for is missing. Are there ways to spot this engineered heedlessness?

I want to build for meta-analysis and polycentric considerations, to help minimize risk of biased logic.

Jigar Sompura:

The invisible hand of self-interest cannot be ignored. Yet, this philosopher-builder appears to challenge it. I believe that the notion of business owners prioritizing human flourishing above their own self-interest is an idealistic belief.

An adaptive and emergent approach is always in effect; it operates continuously, whether in the realm of planned order (taxis) or spontaneous order (cosmos).

Jigar Sompura:

Democracy and republican systems have arisen less from the vision of individual founders like Benjamin Franklin and more from a dynamic process driven by technological change, journalistic scrutiny, and the continual adjustment of self-interest within institutions. Their emergence reflects society’s adaptive evolution, rather than the design or influence of any one person.

Melon Usk - e/uto:

Awesome! Yep, we modeled the ultimate futures for over three years. It turns out it's possible with just one assumption: non-forcing. People don't like being forced, so they'll make all of those things optional.

We're pretty sure the best ethical ultimate future will have the direct democratic simulated multiverse as an optional video game. We shared a lot about how it looks, with photos and videos: the mechanics of being there, its ethicalized computational physics, how to start profitably building it (half a dozen unicorn startup ideas), everything really.

stereomono:

Where this progress leads, people will find out later.

stereomono:

Interesting: does a bird weaving its nest also depend on a philosophy of nest-building in the most unsuitable places, or is that something else? Actually, we should ask the birds by running a wide-reaching sociological survey, so they can answer all the specialists' questions; the specialists would then hand their findings to a virtual machine, and it would probably put the final period on the report.

Emanuel Piza:

I will build for an egoless society, one that allows people to travel, live, and contribute to their environment without relying on traditional means (such as family, accumulated wealth, a CV...).

When society doesn't require those things to function, it becomes pointless to pursue them.

Michael von Prollius:

Thank you for the dedicated statement!

Reminds me of Edward R. Murrow.

TV presenter Edward R. Murrow is, or used to be, famous in the US as an embodiment of integrity. In the 1950s, he fought courageously for freedom of expression and for the unforced compulsion of the better argument.

With his sophisticated style and rhetoric, the reporter, who rose to fame during World War II with his reporting from England, took on the influential Senator Joseph McCarthy, who led a nationwide witch hunt against alleged communists and dissidents.

Murrow achieved the unexpected and brought McCarthy to his knees. At the same time, he was unable to prevent the decline of television.

In George Clooney's “Good Night, and Good Luck,” David Strathairn gives a convincing performance as the exceptional journalist Murrow. The film powerfully conveys the importance of ideas, information, and convictions. Murrow always believed that the audience was not as simple as people claimed and did not just want to be distracted and entertained. Television could teach, enlighten, and inspire. “But it can do so only to the extent that humans are determined to use it to those ends. Otherwise, it is merely wires and lights in a box,” Murrow affirms in the final scene.

Manolo Remiddi:

Thank you for this framework. As a practitioner building a symbiotic AI partner in a live experiment, our work confirms your thesis and suggests the bridge from philosophy to code is reason guided by a deeper resonance.

It is the veteran practitioner's intuition, a felt sense of alignment forged over years of deep craft, that allows us to navigate the profound complexity where pure logic falls short.

Tyler Corderman:

Incredible, thank you.

Rickie Elizabeth:

I’ve been waiting for more people to cover the relationship between technical frameworks and their implicit moral claims. I was glad to see you lay it out clearly; it was an interesting read.

Your line “Moral clarity creates market opportunities” is compelling, although I’d argue the inverse is more common—market logic getting retrofitted/repackaged as moral reasoning. While builders with no philosophy beyond metrics and trends are definitely a part of the problem, the more subtle yet no less impactful risk is builders whose assumptions are embedded in systems and treated as if they’re apolitical or self-evident merely because they present as such.

Such a framing lets designers treat autonomy as a usability problem, and engagement/consensus as if it were evidence of truth. People mistakenly interpret optimization as if it’s intentional design, even when there’s no clear aim behind it.

So, we know values are present, but the big question is what kind of world these systems are subtly training people to accept, and whether we’re building tools that reinforce passivity or tools that actually help sharpen people’s capacity to think, choose, and dissent. That’s where the philosophical work really begins. But the much harder question is whether anyone in power actually wants that.
