Same Radio, Different Citizens
On the Economics of Human Formation
This post is co-authored by Brendan McCord (Cosmos Institute) and Philipp Koralus (University of Oxford).
Radio arrived in the early twentieth century as pure possibility. Electromagnetic waves carried human voices across continents. Within decades, three versions of the same technology had helped produce three different kinds of citizen.
In the 1930s and 1940s, the British Broadcasting Corporation (BBC) built programming that assumed its listeners wanted to become more than they were, with challenging content and genuine debate, designed to stretch rather than soothe. The funding model made this possible: residents paid a license fee for all BBC programming regardless of what they listened to. No one had to maximize time-on-station. They could pursue something else.
American commercial radio faced a different pressure: advertising. Audiences were larger when programming demanded less. The market cleared at entertainment. What sold ads was what held attention, and what held attention was not what elevated the listeners.
Soviet radio under Stalin did not bother with the pretense of serving listeners at all. The technology became an instrument of state narrative designed to bluntly manufacture the appearance of consensus and compress the space for independent thought.
The same transmitters and waveforms propagating at the speed of light formed the technical basis in all three cases, but divergent selection pressures yielded divergent equilibria. The funding models were de facto governance regimes, entangled with the political order, shaping what could be said and what could be heard.
Looking back now at the introduction of radio, it is easy to see that the question of whether “the radio” is conducive to the good, or whether it advances human autonomy, is hard to address meaningfully without considering funding models. Yet whenever new technology is deployed, public discourse tends to fall back on discussing the technology in itself, without reference to its economic implementation. For example, people are largely happy to discuss the ills and benefits of social media, and even legislate on that basis, as if social media could be evaluated in a test tube for carcinogens in the way we might evaluate cigarettes.
Technology only defines a possibility space: the affordances, constraints, default pathways, and scaling properties that structure our interaction with the world around us. Institutional choices and funding models determine what position in that space becomes reality. Perhaps more importantly, they determine what position can remain in equilibrium over time. An institutional aim to pursue a particularly virtuous part of technological possibility space cannot be maintained if the funding model forces the institution, over time, to either change the aim or die.
Consider modern recommender systems. The possibility space is vast: systems that surface what users would endorse on reflection, systems that maximize time-on-site, or systems that optimize for learning or serendipity or connection. But the economics of advertising-funded platforms select for a narrow band of that space: the band where engagement can be monetized regardless of whether it tracks anything users actually value. The narrowing isn’t unique to ads; ads are just the clearest case where the reward proxy is orthogonal to reflective value.
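To make the narrowing concrete, here is a minimal, purely illustrative sketch in Python. The item fields, predicted scores, and weights are hypothetical, not a description of any real platform; the point is only that the same ranking machinery lands at different points in the possibility space depending on which reward proxy the economics select.

```python
# Illustrative only: two reward proxies applied to the same ranking pipeline.
# All fields and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_watch_minutes: float      # proxy correlated with monetizable attention
    predicted_reflective_rating: float  # what the user would endorse on reflection, 0-1


def engagement_score(item: Item) -> float:
    # Advertising-funded equilibrium: rank by expected attention capture.
    return item.predicted_watch_minutes


def reflective_score(item: Item) -> float:
    # Another point in the possibility space: rank by reflective endorsement,
    # lightly discounted by the time the item demands.
    return item.predicted_reflective_rating - 0.01 * item.predicted_watch_minutes


catalog = [
    Item("Outrage compilation", predicted_watch_minutes=42.0, predicted_reflective_rating=0.2),
    Item("Long-form tutorial", predicted_watch_minutes=25.0, predicted_reflective_rating=0.9),
    Item("Friend's update", predicted_watch_minutes=3.0, predicted_reflective_rating=0.8),
]


def rank(scorer):
    return [item.title for item in sorted(catalog, key=scorer, reverse=True)]


print("Engagement-selected feed:", rank(engagement_score))
print("Reflection-selected feed:", rank(reflective_score))
```

Swapping the scorer is technically trivial; what is hard is sustaining an institution whose revenue does not depend on the first feed.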
Billions of minds are routed by recommender systems daily. The unexplored regions of the surrounding possibility space are not technologically inaccessible, but they are economically unselected. We think of economics here broadly: all inputs required for a system to carry on. Even a non-profit social media site endowed in perpetuity or supported by the state would still need to capture enough attention to remain “social” in any meaningful sense. The distinctions we’re drawing are orthogonal to narrow critiques of capital.
What Institutions Compute
The cognitive scientist David Marr proposed a simple framework for how to analyze an information processing system, regardless of whether it is a human brain or an AI system. He argued that there are three families of questions that need to be answered to have a full understanding of such a system, which he thought of as three levels of analysis.
Aim: What is the system trying to compute? What problem is it solving?
Mechanism: What procedures does it use? What’s the algorithm?
Substrate: What physical substrate implements those procedures? What’s the hardware?
Marr observed that each of these questions has some independence. What you are trying to compute does not fully determine your algorithm or your hardware, and so on. Because of this, we can study facts about aims, mechanisms, and substrates somewhat separately. Yet, Marr also observed that how we answer one family of questions constrains how we get to answer the others. You cannot simply declare that a system has some aim. The aim must be achievable by some procedure, and the procedure must be realizable in the substrate that is available. If the substrate of your system does not support any mechanism that could achieve your favorite aim, that aim can’t be part of the right explanation of what the system does.
This framework transfers to institutions. A company and its tech products are also information-processing systems that can be understood through three analogous families of questions:
Aim: What is the organization trying to achieve? What problem is it solving?
Mechanism: What procedures does it use? What are the routines, metrics, and product mechanics?
Substrate: What implements those procedures? What is the shape of structures like revenue, ownership, incentives, user participation, survival pressures?
As in Marr’s case, we get constraints across levels of questions: the lower levels shape what’s possible at the higher ones. A founder might genuinely aim for VC-funded virtue-in-a-box. But if the company is to survive over time, the avowed aim is unlikely to remain what best explains its behavior. The substrate constrains what aims are stable. The substrate does not uniquely determine aims, but it sets the boundary of the viable.
Consider a game studio that sets out to build something meaningful. Let’s say the aim is “earned mastery.” What earned mastery requires is not just skill at the game, but the formation that comes from genuine challenge: learning to lose, to persist, to improve through practice. The mechanisms of the game serve that aim, with matchmaking, difficulty curves, and progression that rewards effort. The economics are aligned. Players pay once, and the only path to power is getting better. Aim, mechanism, substrate, all pointing the same direction.
Then growth becomes the mandate. Loot boxes convert well. Players want to feel powerful now, not after fifty hours. As a result, mechanics emerge that let money substitute for skill.
No one at the game studio explicitly decided to abandon the mission. Many probably saw where things were heading. But each choice was locally rational. What got measured became what mattered; what mattered became what survived; what survived became the real aim. The specter of Goodhart’s law looms large and the feedback loop doesn’t care about the pitch deck. It optimizes for what it can see. By the end, “earned mastery” appears in the mission statement and nowhere else. The aim became a ghost. Not abandoned, just no longer part of the best explanation of the company as a system, or the best model for decisions within it.
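The drift has the shape of a proxy-optimization loop, and a toy simulation makes the dynamic visible. Everything here is hypothetical (the “monetization pressure” knob, the proxy metric, the functional forms); it is only a sketch of Goodhart’s law: a loop that hill-climbs on what it can measure keeps reporting progress while the unmeasured aim first peaks and then decays.

```python
# Illustrative Goodhart's-law sketch: all quantities are hypothetical.
import random

random.seed(0)


def true_value(pressure: float) -> float:
    # The unmeasured aim ("earned mastery"): improves with some monetization,
    # then falls as pay-to-win mechanics crowd out genuine challenge.
    return pressure - pressure ** 2


def proxy_metric(pressure: float) -> float:
    # What the dashboard sees: revenue-weighted engagement plus noise,
    # which keeps rising long after the aim has started to decay.
    return pressure + 0.1 * random.random()


pressure = 0.1  # degree of monetization pressure in the product
for quarter in range(12):
    candidate = pressure + 0.1            # a locally rational tweak
    if proxy_metric(candidate) > proxy_metric(pressure):
        pressure = candidate              # the loop keeps what it can see improving
    print(f"Q{quarter + 1}: proxy={proxy_metric(pressure):.2f}, "
          f"unmeasured aim={true_value(pressure):.2f}")
```

The loop never represents the aim at all; only the proxy appears in its objective, which is why the remedy discussed below is structural rather than a matter of individual resolve.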
A distinction matters here. Engagement that tracks value (e.g., mastery, truth, community, or genuine satisfaction) is what good products should generate. The problem is engagement engineered through compulsive behavior, asymmetry, and the exploitation of human weakness. In the worst case, we substitute habit for growth, money for skill, manufactured urgency for actual importance.
Returning to radio, we might say that the BBC avoided structural drift in its heyday because license fees created no pressure to hold attention for advertisers. The economics supported the stated mission rather than corroding it. American commercial radio found other priorities because advertising revenue required capturing attention and capturing attention rewarded different mechanisms than cultivating it. Those mechanisms eventually became the point. This is not a case for license fees as such—the BBC subsequently developed other pathologies, and any funding model creates its own selection pressures. The point is the structural logic, not the specific policy.
The framework we described is analytically neutral. It tells you how aims, mechanisms, and economics can be articulated independently yet alerts you to how they constrain each other in real systems, particularly as they persist over time. It doesn’t tell you what aims are worth pursuing. You could use it to help design effective propaganda as easily as effective education.
Every information environment is a training regime: it determines what people practice noticing, what they practice ignoring, and therefore what they become capable of judging. The institution that shapes attention is in turn shaping the citizen.
Descriptive frameworks are useful if they help us understand systems, and more useful when they help us articulate what systems we want. When we say the BBC “cultivated” and commercial radio “captured,” we are claiming that some outcomes are better than others. Specifically, preserving and developing human judgment matters because judgment is the foundation of autonomy. Systems that cultivate judgment expand what a person can do and be. Systems that atrophy judgment reduce people to consumers of impulses, executing preferences they never made their own and optimizing for ends they never deliberately chose.
If we are going to reason about institutional design at all, we need to be explicit about what we are designing for. That is a prior question, and ignoring it just means your values operate without scrutiny. For our part, we take human autonomy as our prior commitment.
From this perspective, the question for any technology that touches human judgment is not “is this technology good or bad?” The question is: in what institutional arrangements is the technology implemented, and do those arrangements create structural pressure toward or away from the cultivation of human judgment?
In practice, this means asking:
What aim does the economic structure sustain over time? The mission statement or the aim that survives contact with the feedback loops?
What mechanisms does that aim require? Do the proposed mechanisms deliver your aim only in a vacuum, or could they deliver it stably given the economics you are prepared to implement?
What do those mechanisms do to the humans who use them? The effect you designed for, or the effect at equilibrium?
Many companies cannot answer these questions honestly. The mission statement says one thing; the incentive structure produces another. The founders may believe in the stated aim; the economics select for something else.
If autonomy is the commitment, we can ask what design criteria it entails. One way to proceed is to articulate those criteria as constraints or tests. We suggest two tests below as minimal starting points:
1. The Transparent Choice Test: Could this product survive in a suitably nearby world of users who fully understand how the product works and could easily select an alternative?
A product fails this test if its economics depend on the gap between informed choice and actual behavior: it only works because users don’t fully understand it or can’t easily leave. Note that the test brackets market power. Whether it is easy to switch in the actual world is separable from whether people would switch if they could, in a suitably nearby world. A virtuous product might still be a monopoly; a monopoly might still make a good product. Whether monopolies are bad for other reasons is a separate question.
2. The Candid Aim Test: Is the stated aim key to the best explanation for how the system actually behaves and how its parts are put together?
If the stated aim is real, reference to it should make the system’s behavior intelligible and more predictable. It’s a red flag when a different aim explains more, particularly when that aim is not just orthogonal to the stated aim but contrary to it. Consider a supermarket with a cash register that makes subtle addition errors at the threshold of detection, always in the store’s favor. If this has been going on for years, “fair dealing” stops being a plausible aim of the store. Or consider a recommender system that reliably foments polarization. At some point, “building community” explains less than “maximizing clicks.”
Neither the Transparent Choice Test nor the Candid Aim Test is passable long-term if the economics pull against the aim.
Structure, Not Will
Structural drift cannot be resisted at the level of individual will. A founder with integrity, operating within a misconfigured stack, will be selected against by the institutional arrangements that structure their team, product, and market. If you have spent years inside one of these companies advocating for users over engagement and eventually burned out, it wasn’t a failure of will. The economics select for certain equilibria and not others. You can resist that gradient for a while, but you cannot reverse it through conviction alone.
The intervention must be structural, not only personal. This is the work of what we call philosopher-builders: configuring the stack itself.
A skeptic will ask: if a firm adopts constraints that reduce short-term competitiveness, won’t the market select it out? Isn’t anything outside of profit maximization founder self-indulgence? But this assumes profit is already defined before we’ve chosen what game we’re playing. The time horizon, the measurement regime, and what gets externalized are not givens. The builder—the philosopher-builder—chooses them.
Some commitment devices reduce competitiveness while others become the competitive advantage. Substack’s bet on writer ownership creates lock-in through loyalty rather than switching costs. The Bloomberg terminal succeeds not despite its commitment to decision-quality over engagement, but because of it. Patagonia’s environmental constraints built a brand that commands premium pricing.
Not every structure survives in every market. Some commitment devices only work with certain capital structures, or in markets where trust is differentially valuable, or with customers who can recognize quality. That’s precisely why governance, capital philosophy, and culture matter. The economics of the endeavor are not given. They are chosen—by founders, by investors, by the people who set the constraints within which everyone else optimizes.
This reframes what it means to work on technology that matters.
The question is not whether you’re an engineer or an executive. The question is whether you’re operating at the level of institutional design, going all the way from formulating aims to economic implementation, or only at the level of features. Both matter, but the institutional level in this broad sense is prior. It determines what the feature level can achieve.
An engineer who designs a brilliant recommendation system but ignores the incentive structure it operates within has ceded the most important decisions to someone else. An engineer who understands how funding models shape optimization targets, how metrics become attractors, and how governance locks in or erodes mission is operating at the level where outcomes are actually determined.
Every investment thesis contains an implicit theory of what is valuable. Every term sheet is a design document for what will be optimized. Every board seat carries influence over whether that design survives. These decisions ripple outward into mechanisms, and mechanisms ripple outward into the lives of everyone who uses what gets built.
The question is whether this influence gets exercised deliberately or by default.
Default means optimizing for what’s measurable, what’s familiar, what’s already worked. Default means engagement metrics and growth curves and the patterns that produced returns last cycle. Default means structural drift toward whatever the economics select for, regardless of stated intentions.
Deliberate means asking the three questions. It means applying the Transparent Choice Test and the Candid Aim Test. It means designing economic structures that can sustain aims worth pursuing, and locking those structures in before the drift begins.
At the present stage of AI development, this framework is not an abstract lens but a live description of the game being played. Frontier systems are expensive to build and are increasingly financed by capital invested on roughly the thesis that the technology will ultimately be applicable to almost every aspect of the economy. In that sense the capital is not buying a product so much as underwriting a general-purpose substrate and hoping to later discover the dominant interfaces through which it captures value. That option-like thesis makes the “aim” unusually plastic: the same underlying model can become tutor, co-worker, companion, bureaucrat, sales engine, surveillance layer, or propaganda instrument. Which of those becomes stable is a property of the institutions, financing models, and culture that wrap it.
Because the return story is “almost everything,” the equilibrium remains underdetermined. Selection pressure is already operating, but often through indirect channels: control of distribution, accumulation of compute, narratives that justify scale, the ability to externalize risk, and the power to set default contracts and defaults-of-use.
Different capital structures will stabilize different aims. Usage-based pricing tends to reward systems that are instrumentally useful; subscription models, more than attention-economy models, reward systems people would still choose under conditions of autonomy; enterprise procurement tends to reward auditability and legible governance; state funding tends to reward control.
The degree of freedom is real precisely because the equilibrium has not yet fully formed. Decisions made at this pivotal moment by technologists, founders, and investors can still dramatically shape outcomes we care about.
The attention economy shows what happens when the substrate hardens and aims collapse into a single attractor. If revenue is selling predictable attention to advertisers, then maximizing attention capture becomes the stable aim, engagement proxies become the mechanism, and surveillance plus lock-in becomes the substrate that keeps the loop from breaking. The equilibrium is narrow and self-reinforcing, and the possibility space remains technologically available but economically unselected.
Frontier AI is not yet locked into that attractor, yet it could inherit it quickly if general-purpose models are primarily deployed as next-generation attention routers. If autonomy is the commitment, this is exactly the moment when financing, ownership, and governance choices matter most: before the underdetermined equilibria congeal into infrastructure and the aim becomes a ghost again.
What got built around radio in the twentieth century shaped what kind of citizens could emerge. What gets built around AI now will do the same, at greater scale and speed.
The window is still open. Which citizens are you building for?
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.