Discussion about this post

Paul Gibbons

I love how you use Frankfurt’s distinction between what we want and what we want to want to cut through the lazy “just quit social media” narratives. The insight about modern tech sustaining anticipation, not satisfaction, is sharp — it surfaces the cognitive tax we don’t talk about nearly enough.

Your argument that autonomy isn’t simply strength of will but a matter of structuring the environment so the good choice becomes the easy choice really resonates. It brings together philosophy, psychology, and technology in a clean way.

One thing I’d add: this crisis of desire isn’t just personal—it’s institutional. We build systems that assume choice is free, but then engineer them for retention, scalability and profit. The result is a mismatch between the second-order self (the person I want to be) and the first-order appetite (the click, the scroll).

Looking forward to hearing where you take this next—especially how we might redesign the architecture of habit, not just treat the symptom. Well done bro!

blake harper

OOO snap. I was scared when I read the title because I thought this would be similar to the "Harry Frankfurt meets algorithmic autonomy" essay that I pitched to Cosmos a few months back. Thankfully, it is not, and it's still a great essay, with plenty more one could say. This is the gist of what I pitched, in case you know of anyone else who has explored these ideas, or have reading to recommend.

Because algo-driven recommendation systems are behavioral, they are blind to desires that don't show up in our behavior. So they're blind to second-order desires (insofar as those don't show up in our behavior). And if you agree that second-order desires are very closely related to our aspirational identities, then it really is FALSE that social media companies "know you better than you know yourself," as they sometimes claim.

There are a few tech/design solutions to this problem. First, most algo-driven rec systems have feedback mechanisms like "see more/see less" or "don't recommend content like this." Those help on the margin in cases where, e.g., the recovering alcoholic keeps getting recommended party-related content because they can't help but dwell on it, even though they'd rather not. They can tell the algo, "hey, yeah, no, enough of that, bud."
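
(To make that concrete, here's a toy sketch of how a "see less" override could work mechanically: a per-user penalty layered on top of whatever the engagement model infers. Everything here, the FeedbackAwareRanker name, the 0.5 penalty, is invented for illustration, not how any real platform does it.)

```python
# Toy sketch: an explicit "see less" flag that down-weights a topic in
# scoring, so it can override what the behavioral model inferred from
# clicks and dwell time.
from collections import defaultdict

class FeedbackAwareRanker:
    def __init__(self, penalty: float = 0.5):
        self.penalty = penalty            # multiplier per flagged topic (made up)
        self.muted = defaultdict(set)     # user_id -> topics the user flagged

    def see_less(self, user_id: str, topic: str) -> None:
        """The user tells the algo: enough of that, bud."""
        self.muted[user_id].add(topic)

    def score(self, user_id: str, item: dict, behavioral_score: float) -> float:
        # behavioral_score is what engagement signals suggest the user "wants";
        # the explicit flag expresses what they want to want.
        if item["topic"] in self.muted[user_id]:
            return behavioral_score * self.penalty
        return behavioral_score

ranker = FeedbackAwareRanker()
ranker.see_less("u1", "party")
print(ranker.score("u1", {"topic": "party"}, 0.9))     # 0.45, despite heavy dwelling
print(ranker.score("u1", {"topic": "recovery"}, 0.4))  # 0.4, unchanged
```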

But the more radical innovation would be if we could tell the algo up front (and throughout), "Hey, here's who I am, here's what I'm interested in, here's what I want to see," even if those interests would never have shown up in our behavior (and so would never have been inferred by the algo). With LLMs, we're tantalizingly close to that as a technological possibility. If we could unlock it, recommendation systems could understand our aspirational identities, not just who we happen to be when our guard is down and we're lazily scrolling.
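
(Again, purely a sketch of the mechanics: take a free-text statement of who the user wants to be and blend its similarity to each candidate with the behavioral score. The embed() below is a toy bag-of-words stand-in for a real LLM/embedding model, and the alpha blend is an assumption for illustration.)

```python
# Toy sketch: re-rank behavioral candidates against a stated aspirational
# profile. embed() is a bag-of-words stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rerank(profile: str, candidates: list[dict], alpha: float = 0.6) -> list[dict]:
    """Blend stated-identity similarity with the behavioral score.
    alpha weights the aspirational self over revealed behavior (made up)."""
    p = embed(profile)
    def blended(c: dict) -> float:
        return alpha * cosine(p, embed(c["title"])) + (1 - alpha) * c["behavioral_score"]
    return sorted(candidates, key=blended, reverse=True)

profile = "sober living long form philosophy woodworking"
candidates = [
    {"title": "epic party fails compilation", "behavioral_score": 0.9},
    {"title": "philosophy of sober living",   "behavioral_score": 0.2},
]
# The aspirational item wins despite a much weaker behavioral score.
print([c["title"] for c in rerank(profile, candidates)])
```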
