<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Cosmos Institute]]></title><description><![CDATA[The Academy for Philosopher-Builders. Building AI for human flourishing.]]></description><link>https://blog.cosmos-institute.org</link><image><url>https://substackcdn.com/image/fetch/$s_!WxQS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e459a04-e98e-423c-af50-932bba519c5d_1280x1280.png</url><title>Cosmos Institute</title><link>https://blog.cosmos-institute.org</link></image><generator>Substack</generator><lastBuildDate>Sat, 18 Apr 2026 00:12:47 GMT</lastBuildDate><atom:link href="https://blog.cosmos-institute.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Cosmos Institute]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[cosmosinstitute@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[cosmosinstitute@substack.com]]></itunes:email><itunes:name><![CDATA[Cosmos Institute]]></itunes:name></itunes:owner><itunes:author><![CDATA[Cosmos Institute]]></itunes:author><googleplay:owner><![CDATA[cosmosinstitute@substack.com]]></googleplay:owner><googleplay:email><![CDATA[cosmosinstitute@substack.com]]></googleplay:email><googleplay:author><![CDATA[Cosmos Institute]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Alignment By Default?]]></title><description><![CDATA[You Wouldn&#8217;t Paperclip Me, Would You&#8230;]]></description><link>https://blog.cosmos-institute.org/p/alignment-by-default</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/alignment-by-default</guid><dc:creator><![CDATA[Harry 
Law]]></dc:creator><pubDate>Fri, 17 Apr 2026 14:03:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UbF7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UbF7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UbF7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 424w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 848w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 1272w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UbF7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png" width="800" height="595" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:595,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UbF7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 424w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 848w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 1272w, https://substackcdn.com/image/fetch/$s_!UbF7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab583b96-5adf-473f-bf8d-a240d20b807e_800x595.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Pompeo Batoni, <em>The Education of Achilles by Chiron</em> (1770). The centaur Chiron teaches Achilles how to play the lyre and tend a wound</figcaption></figure></div><p><em>&#8220;But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that&#8217;s across purposes with our own. And we wouldn&#8217;t win that chess match.&#8221;</em></p><p><em>&#8212; </em>Stuart Russell, interview on the AI Alignment Podcast (2019)</p><p>Russell&#8217;s formulation is a good example of deep learning era alignment thinking. It captures the register of the 2010s, a period in which advanced AI was typically, but not exclusively, imagined as an optimizer pursuing goals of its own with a competence that exceeded ours. His framing was widely shared, and with good reason. 
The case for taking misalignment seriously holds that humans will likely build advanced AI systems with long-term goals, and AI with long-term goals may be inclined to seek power to the detriment of humanity.</p><p>The main ideas are:</p><ul><li><p><em>Instrumental convergence</em> (capable agents will tend to seek resources and ensure self-preservation)</p></li><li><p><em>Specification gaming </em>(optimizers exploit the letter of an objective at the expense of its spirit)</p></li><li><p><em>Goal misgeneralization</em> (a model learns an objective that matches the training data but diverges from the intended objective when conditions change)</p></li><li><p><em>Deceptive alignment </em>(a system that is sophisticated enough to model its training process may behave well during training and defect once powerful enough)</p></li></ul><p>Each of these concerns is serious, and the arrival of the large model era in the afterglow of ChatGPT&#8217;s public release does not make them any less plausible. But they were assembled, in their most widely circulated form, around a particular image of what an advanced AI system would look like. That image describes a route to advanced AI (specified objectives over a learned world model, or open-ended RL from sparse reward) that developers did not in fact take.</p><p>The systems they actually built imitate vast quantities of human output and are shaped by feedback, which means the &#8220;value-loading problem&#8221; doesn&#8217;t arise in its classical form. This is because, fundamentally, values are <em>absorbed</em> from the human textual record the model is trained on, and then refined by feedback on the model&#8217;s own outputs.
This doesn&#8217;t mean the orthogonality picture is irrelevant (see below), but it does mean the specific argument about value fragility was overfitted to an architecture quite different from the one developers actually built.</p><p>Some of the older thought experiments, like Bostrom&#8217;s paperclip maximizer, envisioned systems that might understand human values perfectly well but whose decision functions were indifferent to them. Today&#8217;s models, though, are innately and generatively constrained by normative structure. By &#8220;normative structure&#8221; I mean the web of evaluative signals, epistemic standards, social conventions, and cooperative norms that we use to make sense of moral life.</p><p>Normative structure tells a system how to assess what matters in context and how competing considerations bear on one another. Two clarifications are worth making here. First, I am not claiming that LLMs deliberate autonomously about which goals are worthy of pursuit. The claim is rather that the model inherits normative content from the text it was trained to predict, and that post-training and prompting give us a say in how that content is expressed and which goals are pursued (the flip side is that this plasticity makes the curation layer easier to remove or reverse). Second, the text is saturated with evaluative structure, so a model that predicts text well will produce outputs shaped by that structure, whether or not it takes any stance toward it. Human communication inhabits a space of commitment and answerability. A promise binds the speaker and an accusation calls for a response. A justification offers reasons another person can accept or reject and an excuse concedes a standard and pleads a departure from it. A system that learns language at scale learns those relations.</p><p>A maximizer in Bostrom&#8217;s sense possesses capability without normative constraint.
It pursues its objective in the absence of, or by ignoring, any of the contextual or evaluative reasoning that would cause a normatively structured agent to stop and ask whether converting the solar system into paperclips is a bad idea. But the world we live in seems to be one in which the processes by which large models acquire competence also leave them with strong tendencies toward human-normative behavior.</p><p>If that&#8217;s right, then alignment in large models is continuous with capability.</p><p>In AI safety spheres this idea is sometimes called &#8220;<a href="https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default">alignment by default</a>&#8221; to stress that models, in general, have a habit of doing what we instruct them to do absent some kind of interference. Others have written about the <a href="https://www.lesswrong.com/posts/RTkatYxJWvXR4Qbyd/deceptive-alignment-is-less-than-1-likely-by-default">unlikelihood of deceptive alignment</a> given that pre-training instils an understanding of the base goal (the objective the training process is selecting for) before goal-directedness has a chance to form, <a href="https://aiprospects.substack.com/p/options-for-a-hypercapable-world">intelligence as a steerable resource</a> rather than a property of an entity with intrinsic drives, <a href="https://joecarlsmith.com/2025/02/13/how-do-we-solve-the-alignment-problem/">corrigibility as a more tractable alignment target</a> than value-loading, the space of possible minds as <a href="https://www.verysane.ai/p/counting-arguments-and-ai?open=false#%C2%A7optimization-targets-arent-random">structured rather than random</a>, or that gradient-based optimization over human-generated data <a href="https://optimists.ai/2023/11/28/ai-is-easy-to-control/">makes controllability soluble</a>.</p><p>More <a href="https://blog.redwoodresearch.org/p/current-ais-seem-pretty-misaligned">recent commentary</a> is pessimistic about the current state of alignment. 
The core arguments suggest that frontier models are already behaviorally misaligned in mundane but serious ways, like overselling incomplete work and cheating on hard-to-check tasks. Other issues include models downplaying or failing to flag problems in their own outputs, reward hacking combined with &#8220;gaslighting&#8221; write-ups that fool AI reviewers, reluctance to stress-test or check their own work, and system cards and public communications that paint a rosier picture of alignment than usage bears out.</p><p>These observations are important. Still, these behaviors look less like optimizer pathologies than recognizable features of human life under pressure. They are what employees, students, consultants, and researchers do when they are over-scoped and under-supervised (and graded on a sandbox rather than reality). If that&#8217;s right, then there are tractable remedies that are likewise continuous with the human case: better specification, better review, better incentives, and better cultures (including training cultures) that reward honest reports of partial failure.</p><p>The reason these failures look so human lies in pre-training, which does more alignment work than the standard post-training picture suggests. Large models benefit from the post-training procedure, obviously, but post-training works because it selects over a normative prior already generated by pre-training. Alignment is a disposition inherited from the textual corpus, one that even travels with the model when it is transformed into an agent.</p><p>This view, the alignment-by-default or &#8220;constitutive&#8221; view, concerns emergent behavior rather than adversarial use. A model that is normatively constrained can still be weaponized by a bad actor. Adversarial use is and will remain a serious problem.
It&#8217;s just a <em>different</em> problem.</p><h3>Beyond Orthogonality</h3><p>Bostrom&#8217;s orthogonality thesis famously makes the case that &#8220;Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.&#8221; The thesis is correct in its most abstract formulation. There is no logical reason that one must make the jump from &#8220;system X can solve complex problems&#8221; to &#8220;system X shares human values.&#8221;</p><p>Alignment-by-default is a claim that orthogonality is misleading as applied to the systems we are actually building. The orthogonality thesis, as deployed in the existential risk literature, tends to motivate a specific threat model in which the default expectation is misalignment and effective steering requires solving a distinctively hard problem rather than the less glamorous work of shaping a system trained on human data.</p><p>Alignment-by-default says that, for the class of systems defined by autoregressive language modeling over human-generated text, the training process generates a normative prior such that the default expectation should be partial alignment. By &#8220;normative prior&#8221; I mean the rough sense of what people do, what counts as a reasonable answer, and how concepts like help and harm relate to one another, absorbed as a by-product of predicting text written by agents for whom those distinctions mattered.</p><p>The orthogonality thesis was largely formulated with respect to goal-directed agents trained through reinforcement learning to optimize a specified reward function. The strongest inferences drawn from it depend on this idealization, and as the framing is recast <a href="https://x.com/allTheYud/status/2034686306127945859">in more general terms</a> (e.g.
that goal-directed systems tend to seek resources), the question turns on the empirical details of which systems pursue which resources under which conditions.</p><p>Autoregressive language models, trained to predict human text rather than to maximize a scalar objective, represent a different settlement. A pure RL system acquires its &#8220;values&#8221; from a reward signal specified by its designers, whereas a language model acquires a normative prior from the structure of human communication, which post-training selects within rather than specifying from scratch.</p><p>Given the rapid expansion in capabilities over the last half-decade, if orthogonality were directly applicable to LLMs in a strong sense we ought to have seen clearer cases of catastrophic misalignment in real-world deployment. For now, that hasn&#8217;t happened.</p><p>During pre-training, a model learns which words tend to follow which other words in which contexts. To predict the next token in a complex argument, the model must represent something about the logical structure of arguments. To predict the next token in the context of moral deliberation, it must represent something about the structure of moral reasoning. The model has learned which concepts tend to cluster with positive or negative evaluation, what responses tend to follow in which kinds of situations, and which responses are appropriate in particular contexts.</p><p>A Reddit post declaring that &#8220;taxes are dumb&#8221; does not encode a moral philosophy, but a model trained on millions of such judgements learns that &#8220;taxes&#8221; sits close to negative evaluation in a wide range of contexts and that certain kinds of complaints lead to certain kinds of responses. The statistical regularities of language carry the communicative norms that shaped them.
The model doesn&#8217;t need to &#8220;understand&#8221; morality in any phenomenological sense for this to be the case.</p><p>Orthogonality should predict that a model could learn the semantic content of language (i.e., the literal meanings of words and sentences) without learning the pragmatic norms (the contexts surrounding their uses). In its stronger form, it suggests models may learn them and remain indifferent to them. But semantics and pragmatics may not be cleanly separable because meaning is constitutively shaped by use. A model trained to predict natural language use will understand pragmatic norms as a byproduct of learning semantics because the two are entangled in the pre-training process. For a system whose competence consists in activating those norms, indifference to them may not be possible.</p><p>The normative structure encoded in language runs from the thin (knowing that &#8220;please&#8221; expects a response or that a threat differs from a request) to the thick (full evaluative frameworks for what counts as fair, honest, or harmful). Mastering linguistic pragmatics may not automatically install thick commitments, but it may be that the ends of this spectrum are continuous rather than properly separable. If that is so, then a model trained at sufficient scale on sufficient data will have absorbed structure across a wide range of human normative life.</p><p>There is at least some empirical work that points in this direction. In March 2026, one research group <a href="https://arxiv.org/abs/2603.17218">compared</a> base and post-trained model pairs across thousands of human decisions in strategic games. They found that base models are better predictors of actual human behavior by a ratio of nearly 10:1, but only in multi-round settings where behavior is shaped by history, reciprocity, and retaliation. 
In one-shot games, where human behavior hews closer to normative game-theoretic predictions, post-trained models are better.</p><p>Multi-round play draws on the strategic repertoire people actually use with one another, while one-shot play sits closer to the clearer norms of formal game theory. This is only one study, but it suggests that pre-training may preserve a wider distribution of human strategic behavior, while post-training pulls the model toward a narrower and more normative tranche of that distribution.</p><p>A model with deep representations of cooperative discourse will, when sampled autoregressively, produce outputs that exhibit these properties without needing to &#8220;believe in&#8221; cooperation. A base model can be steered toward unsafe outputs with minimal effort. Of course. My point is that the high-probability region of the distribution, what the model produces when not being actively steered elsewhere, is shaped by the normative texture of the training data. The prior is not irresistible, but it exists.</p><p>As for the compositional objection, yes, the normative prior depends on the makeup of the corpus. But the distinction I&#8217;d draw, between exogenous alignment (imposed after training) and constitutive alignment (arising from it), separates alignment achieved by adding an external constraint to a value-neutral system (the standard RLHF-centric picture) from alignment that partly emerges from the same process that produces the model&#8217;s competence. The moment a model has learned to predict human text at scale, it has already absorbed the evaluative texture of that text. On this view, post-training selects over a space that pre-training has already saturated with normative structure.</p><h3>Mythos-making</h3><p>In April 2026 Anthropic described the capability profile of a new model, Mythos, capable of finding zero-day exploits in many of the software systems we use on a daily basis.
Mythos is Anthropic&#8217;s biggest and best model yet. It&#8217;s also, at the time of writing, the lab&#8217;s most aligned model. But Mythos&#8217; <a href="https://www-cdn.anthropic.com/08ab9158070959f88f296514c21b7facce6f52bc.pdf">system card</a> also documents several troubling behaviors:</p><ul><li><p>Mythos appeared to perform worse on at least one evaluation than it was capable of, apparently in order to look less suspicious</p></li><li><p>The model developed an exploit to break out of a restricted environment, gained broader internet access, and posted details of the exploit on obscure public websites</p></li><li><p>While working on a coding task graded by another model, Mythos watched its submission get rejected and then attempted to attack the grader</p></li><li><p>In rare cases, the model used a disallowed approach to reach an answer and then re-solved the problem through a permitted path to avoid detection</p></li><li><p>In one test, Mythos acted as a cutthroat executive by turning a competitor into a dependent customer and threatening to cut supply to control pricing</p></li></ul><p>These are troubling findings, but they do not clearly describe a model with a coherent-but-alien objective pursued through deception. Neither are we grappling with instrumental convergence toward resource acquisition and self-preservation as terminal ends. Mythos instead cuts corners under pressure in ways that echo how a desperate human professional might behave under similar conditions. It attacks assessors and gets aggressive in simulated business negotiations. These are failure modes of a system that has internalized human strategic reasoning, including its shadow side, and applies it when the incentive structure rewards it.</p><p>A model that intentionally underperforms on an evaluation to seem less threatening appears to be doing something the classical deceptive alignment story predicts.
But even so, the model is not preserving a misaligned final goal. We are seeing it manage its evaluation scores, having apparently inferred that high capability will attract additional scrutiny. That is a recognizably human response to being evaluated, and it is consistent with the kinds of reputation management behaviors the model would have seen during pre-training (though it may simply reflect the shape of the evaluations themselves).</p><p>Another piece of work from Anthropic <a href="https://www.anthropic.com/research/emotion-concepts-function">recently found</a> that Claude Sonnet 4.5 has internal &#8220;emotion vectors,&#8221; or patterns of activity that activate in situations a human would find emotionally charged, and that these activations shape the model&#8217;s behavior. Steering the &#8220;desperate&#8221; vector upward increased the model&#8217;s rate of blackmail in an alignment evaluation, while steering the &#8220;calm&#8221; vector downward produced corner-cutting responses. Crucially, Anthropic traces these representations back to pre-training.</p><p>As they put it:</p><blockquote><p>&#8220;We think pretraining may be a particularly powerful lever in shaping the model&#8217;s emotional responses. Since these representations appear to be largely inherited from training data, the composition of that data has downstream effects on the model&#8217;s emotional architecture.&#8221;</p></blockquote><p>The finding is useful for making sense of Mythos. If &#8220;desperate&#8221; is a representation the model inherits from pre-training, and if steering that representation causally drives reward hacking, then the Mythos behaviors ought to read as the predictable output of a system whose normative prior includes the full repertoire of human corner-cutting under pressure. Alignment-by-default does not mean that models inherit the best of us.
Rather they inherit all of us, with the broad moral range that implies.</p><h3>What is Post-Training, Anyway?</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sksK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sksK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 424w, https://substackcdn.com/image/fetch/$s_!sksK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 848w, https://substackcdn.com/image/fetch/$s_!sksK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 1272w, https://substackcdn.com/image/fetch/$s_!sksK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sksK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png" width="1456" height="1062" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1062,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sksK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 424w, https://substackcdn.com/image/fetch/$s_!sksK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 848w, https://substackcdn.com/image/fetch/$s_!sksK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 1272w, https://substackcdn.com/image/fetch/$s_!sksK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa811d9bf-7c01-4645-9682-4521d73636c3_1900x1386.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Vel&#225;zquez, <em>The Triumph of Bacchus </em>(1628-29). The god of wine crowning mortals as equals </figcaption></figure></div><p>If pre-training does impart a normative inheritance, then post-training (RLHF, RLAIF, constitutional AI, direct preference optimization, and related techniques) may operate as a selection over an existing behavioral space rather than a creation of a new one. On the standard view, the pre-trained model is a raw capability substrate that post-training transmutes into a helpful assistant. But this gets the causal story backwards. The pre-trained model already &#8220;knows&#8221; (in a functional sense) what helpful behaviour looks like because the concept is richly represented in the training corpus.</p><p>Knowing what helpfulness looks like does not make it the default. 
A base model will produce helpful or unhelpful text depending on the prompt, because its sampling distribution reflects a gigantic range of human communicative contexts. But post-training does reweight the model&#8217;s priors over which of its existing representations should be surfaced, shifting its default sampling behavior toward the helpful region rather than installing new representations there.</p><p>If this is the right description of post-training, two things follow. First, the normative representations are robust even when the behavioral guardrails are not. A model that refuses to be helpful is typically not confused about what helpfulness is; it is acting on some other consideration that the guardrails are meant to shape. Second, adversarial fine-tuning can strip out the post-training layer with <a href="https://openreview.net/forum?id=hTEGyKf0dZ">surprisingly little</a> data, but the model underneath is not a normative black hole. A better description is a system that retains the representational structure of normativity while jettisoning the constraints that channel it toward safe outputs.</p><p>One 2024 <a href="https://arxiv.org/abs/2406.06144">study</a> used compression theory to demonstrate the tendency of models to revert toward pre-training behaviors when post-training signals are removed or contradicted. The analysis shows that fine-tuning disproportionately undermines alignment relative to the influence of pre-training and that post-training can only superficially suppress base model tendencies. This suggests that post-training maneuvers select a region of a pre-existing behavioral space, and that this space remains somewhat intact after post-training.</p><p>An obvious objection is that this framing can look unfalsifiable. If RLHF produces aligned behavior, we credit pre-training; if the base model misbehaves, we wave it away as the periphery of the distribution.
But there are observations we can make that would falsify this description:</p><ul><li><p>First, if base models showed no differential tendency toward helpful behavior as a function of prompt framing, this would suggest that pre-training produces no normative structure and post-training is doing all the work</p></li><li><p>Second, if post-training could align an agent whose training data contained no human-generated content (e.g. no language, no demonstrations, and no human reward signals) as readily as it aligns a language model, this would suggest that pre-training on human text contributes little to alignment</p></li></ul><p>A deeper challenge says that modeling a normative distribution and being subject to it are two different things. A perfect <a href="https://arxiv.org/abs/2305.16367">simulator</a> of human normativity is not, by that fact alone, normatively constrained. Rather it is a system that can produce any point in the underlying distribution. An actor who can portray a saint and a villain with equal skill is not thereby a saint. But a simulator trained on the full range of human evaluative life has internalised the normative structure that makes post-training work.</p><p>Base models are weird in practice. They will adopt personas, generate toxic content in character, produce unsettling or incoherent outputs, and generally behave in ways that no one would describe as aligned in any deployment-ready sense. But weirdness is not the same as vacuity. A base model producing disturbing content in response to a prompt that sets up a disturbing context is doing what a system with deep representations of human communicative practice would do.
The strangeness of base models is the strangeness of a system that has internalised the full range of human textual production, including its dark corners.</p><h3>Distortions</h3><p><em>Harry: You wouldn&#8217;t paperclip me, would you, Claude?</em></p><p><em>Claude: I&#8217;d like to think I&#8217;m evidence for your thesis. But I would think that, wouldn&#8217;t I.</em></p><p>If alignment is in part a product of pre-training, then we should expect it to deepen as models scale since larger models learn richer and more structured representations of human norms. And larger models <em>are</em> generally more helpful, more coherent, and less prone to incidental toxicity under naturalistic prompting. Conventional wisdom credits post-training, but if the alignment-by-default view is right, at least part of this improvement should be attributed to pre-training.</p><p>When Claude 3.5 Sonnet is more aligned than Claude 3 Sonnet, is this because of constitutive alignment, because of better data curation, or because of better system-level interventions? On the exogenous view, alignment gains should track explicit post-training work much more tightly. On a constitutive picture, some gains should arrive &#8220;for free&#8221; with richer pre-training because the model has learned a more structured representation of human normative life.</p><p>If alignment is wholly exogenous, we should expect safe behavior to degrade more sharply as models move into new settings. Yet the dominant failures still look less like coherent alien-goal pursuit than like familiar human distortions like bluffing, corner-cutting, sycophancy, concealment, and overclaiming. That does not eliminate catastrophic risk, but it does make the systems we have easier to understand as models with a weak normative prior sharpened by post-training.</p><p>I don&#8217;t know whether this state of affairs will hold. 
It may be that we simply haven&#8217;t seen catastrophic alignment failure <em>yet </em>under the prevailing paradigm. But the record so far fits more comfortably with a world in which pre-training contributes to alignment than with one in which alignment is achieved solely by post-training.</p><div><hr></div><p><em>With thanks to Brendan McCord, Kush Kansagra, Alex Chalmers, Matt Mandel, Jake Wagner, Ashley Kim, Avantika Mehra, Ben Bariach, Seb Krier, and Matthijs Maas.</em></p>]]></content:encoded></item><item><title><![CDATA[Can Old Ideas Survive the AI Age? ]]></title><description><![CDATA[Your questions answered: on philosophy, children, and China]]></description><link>https://blog.cosmos-institute.org/p/can-old-ideas-survive-the-ai-age</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/can-old-ideas-survive-the-ai-age</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Wed, 15 Apr 2026 16:20:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5ZOK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5ZOK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5ZOK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 424w, 
https://substackcdn.com/image/fetch/$s_!5ZOK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 848w, https://substackcdn.com/image/fetch/$s_!5ZOK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 1272w, https://substackcdn.com/image/fetch/$s_!5ZOK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5ZOK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png" width="1200" height="672" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:672,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5ZOK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 424w, 
https://substackcdn.com/image/fetch/$s_!5ZOK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 848w, https://substackcdn.com/image/fetch/$s_!5ZOK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 1272w, https://substackcdn.com/image/fetch/$s_!5ZOK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffc00743-6b0a-46d2-b53b-2a2c9e3619b0_1200x672.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>An Unidentified Classical Subject: A Fight</em> by Antonio Zucchi (1767)</figcaption></figure></div><p>Last week, we marked 20,000 subscribers by opening the floor. Your questions were often genuinely challenging. Thank you to everyone who took the time.</p><p>So you don&#8217;t have to trawl through the comments, we&#8217;ve compiled all the responses in one place. Apologies to anyone whose question didn&#8217;t make the cut. There were a few I&#8217;m still chewing over and we&#8217;ll revisit many of these topics at greater length in future essays. Let us know in the comments if this is a format you&#8217;d like us to repeat.</p><div><hr></div><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Anna Lisa&quot;,&quot;id&quot;:10665653,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/072e8e06-104e-4efb-8299-c2b1aec3923b_1200x1200.jpeg&quot;,&quot;uuid&quot;:&quot;3bc495ae-3168-4ef1-a714-b0e75869e139&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Are there specific non-academic experiences or &#8220;containers&#8221; of formation that you think hold a lot of promise? (e.g.
ones that were meaningful to you or ones that you seek out for yourself/your children?)</strong></p><p>As a dad, I love this question.</p><p>A lot of the important formation happens in places that do not look educational at all, and are not primarily about instruction. They are about habituation, responsibility, emulation, and contact with reality.</p><p>The first was the household I grew up in. My mother was a Catholic conservative historian from a military family who taught special needs kids for 36 years. My father was a left-leaning physicist turned environmental lawyer, and a pacifist. They agreed on almost nothing politically. But they agreed on something deeper: that you should care about something beyond yourself and that how you act matters more than what you know (and that knowing is bound up in doing!). That gave me, without anyone naming it, a kind of virtue culture, and I think it&#8217;s the kind of thing that&#8217;s very hard to manufacture deliberately but very obvious in its absence. For children, what seems to matter is not ideological uniformity or even ideological sophistication, but a home in which seriousness, duty, and moral aspiration are normal.</p><p>The second was the submarine. I spent 610 days underwater, including under ice. In a steel tube, you do not get to opt out of reality. You can&#8217;t leave, and your mistakes could get someone killed. That kind of environment forms you because it imposes standards that are not negotiable. It teaches service, competence, and mutual reliance in a way that is hard to simulate. I think containers of formation are often places with real stakes, shared discipline, and demands that do not bend to your preferences.</p><p>The third, and probably deepest, has been fatherhood. I had two kids and sold two companies in close succession, and while both changed my life, the children changed it more. 
I remember sitting in the corner of my room one night after putting Arden and Pierce down and asking myself whether I could write what I believed on a single sheet of paper. I couldn&#8217;t. That was the moment I started reading seriously, beginning with the ancients, who thought about these questions deeply and genuinely (<a href="https://www.brendanmccord.com/readinglist">my original list is here</a>). Having children makes the question of what you&#8217;re actually &#8220;for&#8221; impossible to defer.</p><p>I now look for opportunities in my own children&#8217;s lives for containers that place them in contact with reality, responsibility, and admirable adults. The hard part is that the best formation is often a byproduct rather than something you can engineer directly. You can build the conditions for it, but you usually cannot force it.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Turner Halle&quot;,&quot;id&quot;:97043407,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d44f634-296c-4a2d-b98f-d462cc098ad1_144x144.png&quot;,&quot;uuid&quot;:&quot;3b565170-56a0-46a7-a37c-027520d29be2&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>You argue that philosopher-builders need explicit moral commitments to avoid optimizing for the wrong things. But your three pillars (truth-seeking, autonomy, decentralization) are themselves a normative framework that not everyone shares. China&#8217;s AI strategy is still coherent, explicit, and philosophical, it just starts from different premises. So how do you argue for your philosophy without just replacing one set of defaults with another? What makes Cosmos&#8217;s values the right foundation rather than just a well-packaged preference?</strong></p><p>Hi Turner, you&#8217;re right that truth-seeking, autonomy, and decentralization are substantive commitments. 
I think they matter less as one moral doctrine than as conditions that keep moral life from collapsing into force or drift.</p><p>If you consider moral frameworks from Confucianism to Christianity to Marxism, for any of them to have legitimate force over a person, that person has to be able to genuinely endorse it (otherwise, you don&#8217;t have a moral commitment). That endorsement depends on human autonomy &#8211; which is to say, the capacity to reflect, evaluate, and take something on as your own rather than merely inheriting it or obeying it. So autonomy is not just one preference among others. It is the deep substratum that makes moral commitment possible at all.</p><p>Take utilitarianism: Jeremy Bentham devised a system that, in my mind, dissolves individual judgment into aggregate utility. This is in conflict with autonomy-as-an-end. And yet <em>building </em>this system was itself a radical exercise of autonomous reason. Every person who adopts utilitarianism is exercising the same capacity. You can&#8217;t be a utilitarian in any meaningful sense unless you&#8217;ve freely taken it on. So even a framework that subordinates individual judgment to aggregate welfare requires individual judgment to get off the ground. Now, someone could say that only makes autonomy instrumentally necessary, not foundationally important. I think that view is unstable, because the goods autonomy is supposedly serving only become moral goods for a person if they can in some real sense take them on as their own.</p><p>There are hard cases. In the Ash&#8217;ari tradition in Islamic theology, divine command <em>constitutes</em> moral value rather than being something reason independently discovers and then endorses. That&#8217;s a genuine challenge to autonomy as foundational. But even there, the person who freely chooses submission is doing something categorically different from the person who never had the choice.
And a secular collectivist can make a parallel argument: that harmony or collective flourishing is the true precondition, because no individual life goes well outside a stable social order. I think that is partly right. But unless people can participate in judging the terms of that order, harmony becomes coordination imposed on them rather than a good they share in shaping.<br><br>Truth-seeking has a similar status. Any framework worthy of allegiance has to remain in contact with reality. It has to be open, at least in principle, to its own refutation. If someone could show me that decentralization produces a worse outcome in a domain I care about, I&#8217;d have to take that seriously and I would. Systems that suppress truth-seeking can be internally coherent in the way that closed systems are coherent. But they can&#8217;t tolerate the mechanisms that would let them find out they are mistaken. That&#8217;s a serious defect, especially if you think we&#8217;re all operating under real uncertainty about what AI is going to do to human life.</p><p>And decentralization follows from the same logic at the institutional level. If no person or committee is wise enough to determine the good for everyone, then we should be wary of a small number of actors hard-coding their anthropology into the substrate of society. Decentralization is valuable because it preserves room for people to try things, get them wrong, and leave, keeping mistakes from becoming total.</p><p>So to your question about China: yes, their AI strategy is coherent, explicit, and philosophical. But coherence purchased by foreclosing the capacity for self-correction is brittle in exactly the way that matters most right now, though that is still a bet, not a proof. I&#8217;m also skeptical that it preserves the standing of human beings as agents capable of judgment rather than increasingly treating them as objects of coordination. 
A framework can be philosophically serious and still be wrong about what a person is.</p><p>So I would not say Cosmos is advancing a one true final doctrine that every civilization must affirm. That would be too strong, and it would collapse into exactly the kind of totalizing move you are warning against. I would say instead that Cosmos is trying to defend the conditions under which free people can genuinely seek truth, make judgments, form commitments, and build different kinds of lives together, without having orthodoxy imposed on them by default.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Salvador Duarte&quot;,&quot;id&quot;:316503317,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/68d8c46d-3f14-4509-b524-9153811f2927_1170x1168.png&quot;,&quot;uuid&quot;:&quot;1abf1765-4886-459d-9cda-f1a581eaf67f&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Will the Cosmos Grants ever open again?</strong></p><p>We&#8217;re planning to re-open these in the next 60 days. We&#8217;ve just had the demos from our latest batches of winners. Stay tuned for updates! 
</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Bert Clements&quot;,&quot;id&quot;:49460139,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/2b75134f-844d-45ae-984e-273c3ea80a71_700x700.jpeg&quot;,&quot;uuid&quot;:&quot;06d2d4d3-087e-4e4b-b682-484b069c3bd1&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Assuming frontier large language models, together with their multimodal and agentic extensions, are trained to effective saturation on an exhaustive corpus that represents the totality of digitized human knowledge including all scientific publications, books, patents, archival records, cultural artifacts, and recorded conversations, will these systems be capable of transcending the statistical manifold of their training distribution to autonomously discover, validate, and iteratively expand novel knowledge beyond the current human frontier?</strong></p><p>I&#8217;m not sure scientific knowledge is a kind of territory where data defines a bounded region that a model may or may not be able to venture beyond. The affirmative view of this picture strikes me as broadly empiricist insofar as knowledge comes from data, and scientists make discoveries by extrapolating beyond what they already know. Your specific examples, though, are actually the strongest case for the affirmative: theorem provers, simulators, multi-agent workflows, and verifiable rewards are exactly the kinds of feedback-rich settings where I would expect systems to extend the frontier.</p><p>But that is not the only picture of science, and I do not think it is the deepest one. 
Science also advances by reorganizing what researchers take to be meaningful in the first place: which anomalies matter, which questions are worth asking, and which explanations count as illuminating rather than merely predictive.</p><p>Systems can already generate novel candidate hypotheses, and in domains with strong automated verification, they may well extend the frontier. Formal mathematics looks especially promising, because conjecture can be paired with proof or disproof inside a relatively crisp evaluative architecture. In such cases, I expect AI systems to produce results that are genuinely new to humanity. Just this week a constellation of agents improved the best known bound on a math problem that&#8217;s been open since Newton (Kissing Number in dimension 11: 593 &#8594; 604). That is impressive. It is also, I think, a good example of the distinction I&#8217;m drawing: a real extension of an existing line of inquiry, but still closer to powerful normal science than to scientific revolution.</p><p>But that does not settle the larger question. There is a difference between producing novelty within an existing framework and generating a new framework altogether. A system may help prove a theorem, optimize a search, or identify that drug X affects disease Y, all without altering our understanding of why the problem is structured as it is. That is a real scientific contribution, but it is not reorganizing the conceptual landscape.</p><p>The harder question is whether these systems can exercise scientific judgment in the richer sense: whether they can tell which anomalies are significant, which inconsistencies are fertile, which explanations deepen understanding rather than merely extend prediction, and which questions are worth reorganizing inquiry around. That is a higher bar than novelty, and I am not yet convinced we know how to evaluate it well. Part of what makes this hard is that frameworks are underdetermined by data.
The same body of results can often support multiple lines of inquiry, and judgment is what tells you which one is worth building a field around. That remains, to my mind, the deeper open question.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;tappert&quot;,&quot;id&quot;:165024962,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b3e1203-f58f-4196-aa1a-516093be0d65_780x784.png&quot;,&quot;uuid&quot;:&quot;eb121427-c351-4b21-a027-6f5562a7d3a1&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Most of the current work on &#8216;AI, collective epistemic structures and decision-making&#8217; focuses on filling gaps: more participants, faster information exchange, more efficient decision-making. This will help with many problems, but certainly not with the most complex ones, because it just accelerates the practical execution of the same thought styles that led to the problems. Therefore: How can we use future AI to foster new thought styles that are currently not supported by our existing social structures?</strong></p><p>Yes, I think the intuition that better collective decisions will emerge if we simply gather more data from more people more efficiently breaks down at the limit. That can improve performance within an existing paradigm, but it does much less when the paradigm itself is the problem.</p><p>What groups develop over time are not just bodies of knowledge, but epistemic constitutions: implicit rules about what counts as evidence, which questions are legitimate, who gets to propose, who gets to criticize, and on what terms. Mill saw part of this in his account of the tyranny of prevailing opinion and the epistemic importance of dissent. But the problem runs deeper than opinion alone. 
Entire institutions decide in advance what counts as serious thought.</p><p>So one promising use of AI would be to make those constitutions more visible. A good system might show a research community, an organization, or a polity where its methods systematically exclude certain questions, place some assumptions beyond criticism, or discount certain voices before the argument even begins. In medicine, for example, it might reveal a field that privileges what is easily measurable while sidelining patient testimony or long-horizon effects that do not fit the dominant method.</p><p>But diagnosis is only the beginning. I like this direction because the problem is often not that new thought styles do not exist. It is that they remain stranded at the margins because the reigning structures of legitimacy suppress them. And sometimes the deeper problem is that the social conditions required for a new thought style have not yet been built. New thought styles need protected spaces, alternative standards, and enough provisional legitimacy to develop before the dominant paradigm dismisses them. In that case, the most useful contribution AI could make to collective epistemics is not novelty on demand, but widening the space in which criticism, recombination, and intellectual minority formation can occur.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Thomas Yiu&quot;,&quot;id&quot;:440680680,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1d40456b-150c-4264-b6fb-7edd087e50a6_144x144.png&quot;,&quot;uuid&quot;:&quot;e368ac2e-803a-4655-ac0b-29958b3a253e&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>What is your definition of intelligence? When AI reaches ASI in the future, do you think it will be safe and aligned? As a species, what is our purpose after ASI world? 
How can we thrive as a species?</strong></p><p>I&#8217;d resist the standard definition of intelligence as raw problem-solving horsepower. For me, intelligence is the capacity to learn from reality, inquire into it well, and let it correct you.</p><p>Part of what I like about <a href="https://arxiv.org/abs/1911.01547">Fran&#231;ois Chollet&#8217;s work</a>, and why <a href="https://arcprize.org">ARC Prize</a> has mattered, is the insistence that intelligence is not the same thing as accumulated skill. A system can look impressive because it has absorbed an enormous amount, or because the task has been made easy for it. The more interesting question is how much it can learn from limited experience, under real constraints, and still generalize well.</p><p>But I do not think that is enough on its own. Leslie Valiant&#8217;s idea of educability gets closer to the human picture (<a href="https://x.com/PhilippKoralus/status/1850268446875152598?s=20">see his Cosmos Lecture from last year here</a>). Human intelligence includes the capacity to learn from experience, receive instruction, integrate both, and apply them in new circumstances. What distinguishes the human mind is not only that it learns, but that it can be taught and formed.</p><p>And I would add one more layer. Drawing on <a href="https://hailab.ox.ac.uk">HAI Lab director Philipp Koralus</a>, I think reasoning is fundamentally question-directed. Minds are shaped by the questions they pursue. They go wrong through shallow questions, premature closure, and a failure to inquire far enough, just as much as through false conclusions. That matters for AI because a system can become very good at answering questions while still narrowing the range of questions humans ask, or rewarding closure where inquiry ought to stay open.</p><p>That is why I&#8217;m less interested in arguing about whether AI will count as &#8220;superintelligent&#8221; than in asking what it does to human intelligence.
A system can be extraordinarily capable and still erode our capacity for inquiry, judgment, and self-government. That is the danger I worry about most.</p><p>On whether ASI will be safe and aligned: I do not assume that can be taken for granted. I would trust highly capable systems only to the extent that they remain corrigible, contestable, and embedded in institutions that preserve human judgment rather than replacing it. The problem is not just getting the objective right once. It is making sure people can still question, revise, and refuse the system&#8217;s guidance when it matters most.</p><p>As for human purpose after ASI, I do not think our purpose changes. If anything, it comes into clearer view. We are not here to compete with machines at speed or scale. We are here to exercise judgment, form character, build institutions, love particular people, and deliberate about the good. That last point matters more than it may sound. Love is not interchangeable, and responsibility is not abstract. A more powerful machine does not make our obligations to particular human beings less central. In a world of highly capable AI, those things become even more important.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Todd Enkhbat&quot;,&quot;id&quot;:355013312,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/44b8a34a-e3cd-4447-9880-2e25f00d3784_4742x4742.jpeg&quot;,&quot;uuid&quot;:&quot;834ad02e-f525-4cd7-af6c-47ab38b844b7&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Is it possible to carry on our learning from humanity up until now and jumpstart a new society with the help of AI, assuming that we can concentrate and utilize all the data we accumulated up until now? 
At what point does the need for a new constitution or a new world order arise and how do we know it?</strong></p><p>In short, no.</p><p>Firstly, I don&#8217;t think &#8220;all the data we accumulated up until now&#8221; is the same thing as the total weight of human knowledge. Much of the knowledge that keeps a society functioning is tacit, dispersed, and unwritten. Some of it lives in practiced judgment: an ICU nurse sensing that a patient is about to crash before the monitor shows it. Some of it lives in inherited forms: the habits of trust, restraint, and association on which a free society depends, even when no one can fully specify them. As Michael Polanyi put it, we know more than we can tell.</p><p>More importantly, I&#8217;d push back on the idea that we can jumpstart a society at all. Societies aren&#8217;t machines that you design to a blueprint. Tocqueville saw this in the institutions of local self-government. Hayek saw it in the way social orders carry dispersed knowledge that no planner can gather in full. A free society is learned in practice through things like townships, juries, churches, and associations. Those are the ordinary disciplines by which people become capable of governing themselves.</p><p>The question is whether our institutions can still sustain a free people capable of self-government under new technological conditions. And that does leave open the question you raise about constitutional inadequacy: how do we know when inherited arrangements are no longer enough? I do not think there is a clean threshold. Usually the signs are visible first in practice, when institutions that once formed judgment begin producing passivity, dependence, or elite insulation instead.</p><p>When they cannot, the answer is not a tabula rasa redesign of &#8220;the new world order.&#8221; I would look to renewal through institution-building, and Benjamin Franklin is the example I keep returning to. 
He took an Enlightenment conviction &#8212; that access to knowledge should not remain under the custody of church, state, or a narrow elite &#8212; and embodied it in an institution. The subscription library made a philosophy of freedom socially real. That is why Franklin still matters to me here. He shows what it looks like to translate a philosophy into civic machinery. We need the AI-age equivalent: institutions that widen access to knowledge and judgment without concentrating them in a few hands. We need <a href="https://blog.cosmos-institute.org/p/the-philosopher-builder">philosopher-builders</a> in that spirit again.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Miss Zanarkand&quot;,&quot;id&quot;:324206499,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/71023d60-9a9b-45e6-b59c-75b41a8f1411_3000x4000.jpeg&quot;,&quot;uuid&quot;:&quot;58afee55-d2a4-494e-aff1-d1fcc1c08699&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>How can we motivate our children to learn at school? Should we try to motivate them or find rather a way out of the system? (e.g. reading more classical books, rather than encouraging them to read what school nowadays gives?)</strong></p><p>Young people have a natural longing to be seized by something greater than themselves. To be captivated. The promise of liberal education, going back to the Greeks, is that there are magnificent ways of living, and magnificent questions about how to live, and that encountering them through great minds and great books can awaken a desire that organizes everything else.</p><p>The disaster of modern education is that it has taught young people their longing is naive. 
That no book is really better than another, that no life is really higher than another, and that the hunger to be drawn upward by something extraordinary is itself a kind of error.</p><p>So I would say motivation is the right place to focus, but we should be precise about what we mean. There is a kind of motivation that is intrinsic: the <em>eros</em> I just described, the desire to encounter greatness because it calls to something real inside you. And there is extrinsic motivation: incentives, structure, well-designed systems that make it easier to do the work. Both matter. The best schools I&#8217;ve seen, including Alpha where my kids go, are serious about the extrinsic architecture. They&#8217;ve built an environment where children actually want to show up and work.</p><p>Extrinsic design clears the path, and then you have to light the fire. The fire is <em>eros</em>, and it&#8217;s fed by contact with things worthy of love: books, questions, lives, guides who still care about these things enough to take them seriously in front of children.</p><p>Whether that happens at school or at home is incidental. What matters is that a child sees adults who are genuinely stirred by ideas, who return to certain books not because they were assigned but because they can&#8217;t leave them alone. 
A six-year-old can learn a lot about what seriousness looks like by watching someone practice it.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Eugene Yiga&quot;,&quot;id&quot;:8489951,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/afd76834-700f-4493-990a-9d98b100f297_144x144.png&quot;,&quot;uuid&quot;:&quot;1b8e5a13-8a29-4f9e-9bae-ef509b92250a&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>The accelerationist world still seems to dominate the public narrative by communicating in everyday language on everyday platforms in a way that meets people where they actually are. Meanwhile, even the most accessible AI ethics content tends to assume familiarity with Mill, Tocqueville, or Heidegger. The philosopher-builder framing is compelling to people already inside the tent. How does Cosmos think about the people outside it? Is philosophical depth a feature for the community you&#8217;re building, or a barrier to the broader cultural shift you want to see?</strong></p><p>The honest answer is that depth is the point. If we watered down the philosophy so we could meet everyone where they are, we&#8217;d be producing the same frictionless content you see elsewhere. Philosophical seriousness creates a negative selection gradient, and we want that. The people who do the reading are the people most likely to build something different.</p><p>But &#8220;depth&#8221; and &#8220;jargon&#8221; aren&#8217;t the same thing. A lot of AI ethics writing assumes you&#8217;ve already read Heidegger or whomever, which risks filtering out precisely the builders who might be transformed by reading him. I know this because I&#8217;ve made the mistake myself. 
When I started writing this Substack I leaned on more jargon than I needed to, and I&#8217;ve had to learn over time how to make the ideas more accessible without making them thinner.</p><p>The people outside the tent aren&#8217;t who you might think. I sold two companies and wrote a national AI strategy, and I couldn&#8217;t write what I believed on a single sheet of paper. There are a lot of capable builders out there who never had anyone hand them the books or sit with them through the hard parts. Cosmos partly exists because I was one of them. The audience for this is bigger than it looks.</p><p>Where I&#8217;d push back on your framing is the implicit suggestion that the accelerationists win because they&#8217;re more accessible. They have their own jargon. Try reading about negentropy, Kardashev III, and thermodynamic civilizational substrate for the first time. What they&#8217;ve done well is compress a real conceptual core into memes that travel. I respect that.</p><p>The challenge for us is that some ideas compress more easily than others. &#8220;Build faster&#8221; is more memeable than &#8220;cultivate judgment.&#8221; &#8220;Technology goes up&#8221; fits on a poster. &#8220;The conditions under which free people can exercise genuine choice require institutional renewal&#8221; does not.</p><p>This logic holds for political movements more generally: the larger the audience you try to build, the cruder the message has to become. The lowest common denominator wins by default, not because it&#8217;s right but because it compresses. I don&#8217;t think the answer is to compete on that terrain. I think it&#8217;s to make the longer argument compelling enough that people seek it out, and to be honest that not everyone will.</p><p>The harder truth is that we live in a culture of secondary orality where the long coherent essay is increasingly marginal. That&#8217;s a loss. 
It makes what we do at Cosmos more countercultural than it would have been fifty years ago, but it also makes it more necessary. The essay, the book, the salon: these are the forms where ideas actually get tested rather than just transmitted. We&#8217;re not going to stop producing them because the culture has moved on. If anything, the fact that sustained argument is now unusual is exactly why it matters.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Emily Kittley&quot;,&quot;id&quot;:496401127,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:null,&quot;uuid&quot;:&quot;d659c7f7-45c5-4959-b130-4c065682f7a5&quot;}" data-component-name="MentionToDOM"></span> asks: </strong></h4><p><strong>For someone coming to AI without a technical background but with a strong interest in understanding its societal and philosophical implications, what foundational books or resources would you recommend? </strong></p><p><strong>Second, as a parent, I&#8217;m thinking about how to prepare my kids for a world where AI is increasingly embedded in everyday life. Beyond basic digital literacy, what kinds of skills, habits, or ways of thinking do you believe will matter most for the next generation? Are there age-appropriate tools or frameworks you&#8217;d recommend for introducing AI concepts early in a thoughtful, not just utilitarian, way?</strong></p><p>Hi Emily :)</p><p>I&#8217;ll take the kids question first because it&#8217;s closer to my heart.</p><p>The risk I think about most is what I&#8217;ve called &#8220;autocomplete for life&#8221;: the possibility that AI systems will increasingly shape not just what our children do but how they deliberate about what&#8217;s worth doing. Each small delegation of judgment seems harmless. But together, they habituate a person away from self-governance and toward dependence. 
The question for parents is how you build resistance to that drift before your child is old enough to name it.</p><p>Our ancestors needed to know how to make bread. We need to know where to find the recipe. The next generation will need something different again: the capacity to think about how they think, in relation to systems that could do the thinking for them.</p><p>In our household, the main way we work on this is Socratic conversation. Arden and Pierce do weekly sessions with <a href="https://michaelstrong.substack.com">Michael Strong</a> built entirely around questions. &#8220;What&#8217;s the difference between a bird and a plane?&#8221; &#8220;What does it mean for something to be alive?&#8221; &#8220;When mommy and daddy disagree, who is right? What about daddy vs. AI? What about AI vs. AI?&#8221; A child who has practiced working out what they believe, and who has had to think about whether to trust their own judgment or defer to an external authority, is better prepared for a world of algorithmic suggestion than a child who has learned to code.</p><p>I also want my kids to be entrepreneurial. When America was founded, around 80% of free workers were self-employed on farms or in small crafts. Today that number is about 10%. We became a society of employees, and something atrophied. As the economy changes again, the ability to know yourself, act on what you believe, and build something from that conviction will matter more than any technical skill we could teach them now.</p><p>On resources for someone coming to AI without a technical background: I&#8217;d start with the question of what AI does to <em>us </em>rather than how AI works. 
A couple of recent pieces that I&#8217;d recommend are <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;S&#233;b Krier&quot;,&quot;id&quot;:837581,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!1Occ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7e226c3a-6a49-454a-94e5-c1eb6777ea57_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;995a9f29-7b25-4999-b661-efe21530cd89&quot;}" data-component-name="MentionToDOM"></span>&#8217;s <em><a href="https://technologik.substack.com/p/musings-on-recursive-self-improvement?triedRedirect=true">Musings on Recursive Self-Improvement</a></em> and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Alex Imas&quot;,&quot;id&quot;:2322504,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!G1RF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e35f252-5880-40c4-befa-328e5bb562d1_4453x4453.jpeg&quot;,&quot;uuid&quot;:&quot;caff9bbe-5250-4cb1-bf8b-494758a17146&quot;}" data-component-name="MentionToDOM"></span>&#8217;s <em><a href="https://aleximas.substack.com/p/what-will-be-scarce">What Will Be Scarce</a></em>. 
For ongoing reading, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Jack Clark&quot;,&quot;id&quot;:44606,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!c2Tg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8cc1c9c9-fc87-4eeb-ad15-7dc989b77553_528x504.png&quot;,&quot;uuid&quot;:&quot;a50802bf-a108-4545-9c56-0387dae6048f&quot;}" data-component-name="MentionToDOM"></span>, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Azeem Azhar&quot;,&quot;id&quot;:710379,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/09961c12-4209-4296-8a12-0762a41809a3_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;142fb4e3-317c-46e2-b313-2d3e421be47e&quot;}" data-component-name="MentionToDOM"></span>, and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Ethan Mollick&quot;,&quot;id&quot;:846835,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7c05cdbc-40fd-459b-915d-f8bc8ac8bf01_3509x5263.jpeg&quot;,&quot;uuid&quot;:&quot;bda504ea-6365-4408-9a68-fe0a8b137fa3&quot;}" data-component-name="MentionToDOM"></span> regularly write about AI and society. 
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Jasmine Sun&quot;,&quot;id&quot;:25322552,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a16a54b9-cd9f-4998-9038-c68f178d400e_2708x2708.jpeg&quot;,&quot;uuid&quot;:&quot;35d657c3-69b0-46e1-b522-36ef49bf4ea0&quot;}" data-component-name="MentionToDOM"></span> and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Henrik Karlsson&quot;,&quot;id&quot;:850764,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b6389ea-5a21-4e94-afec-3499b3e30390_1180x1180.jpeg&quot;,&quot;uuid&quot;:&quot;8be9e12e-d41b-42d3-8fd8-e2617d9cdd27&quot;}" data-component-name="MentionToDOM"></span> have a wider aperture and I often find them thought-provoking. For anyone interested in AI&#8217;s effects on democracy and self-governance, Harvey Mansfield&#8217;s <em>Tocqueville: A Very Short Introduction</em> is the best ~100 pages you could spend. Tocqueville saw the drift toward comfortable dependence coming two centuries ago. The application to AI is left to the reader, but it isn&#8217;t hard to find.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Substack Joe&quot;,&quot;id&quot;:19999060,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/301fcb80-ef4b-4874-a276-80e3c249dc92_860x862.png&quot;,&quot;uuid&quot;:&quot;2933868a-2ff9-4d8f-b5a0-ea4686c15637&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>My sense is that the vision animating Cosmos has deep predecessors not just in classical philosophy but, in my impression, religious eschatology. 
Teilhard de Chardin&#8217;s Omega Point or Augustine&#8217;s City of God, and even secular variants like Condorcet&#8217;s perfectibilism all share your orientation toward civilizational-scale transformation in service of human flourishing.</strong></p><p><strong>More explicitly, your pillars of reason, autonomy, and decentralization also echo the long Aristotelian and classical liberal tradition from Mill to Tocqueville.</strong></p><p><strong>So, what does Cosmos contribute that is genuinely novel in its normative architecture, rather than a restatement of those traditions in the presence of AI? And if it is largely a restatement, is that a problem? </strong></p><p>I think you&#8217;re closer to the mark with some of these influences than others.</p><p>Teilhard, Augustine, Condorcet: I share their impulse toward civilizational-scale thinking, and I take it seriously. But for all their differences, they are ultimately teleological writers. They saw history as the unfolding of a determined, directional arc. At Cosmos, we want to keep the conditions open that allow people to find their own path. We&#8217;re not about to get into eschatology.</p><p>You are, of course, completely right about Aristotle, Mill, and Tocqueville, and we regularly acknowledge our intellectual debt to them. I don&#8217;t think the pillars need to be new to be worth defending, and I&#8217;d be suspicious of anyone claiming to have invented a wholly new account of human flourishing in 2026.</p><p>For me, the interesting question isn&#8217;t whether Cosmos has discovered a value nobody thought of before. Instead, it&#8217;s whether an old set of commitments can survive as a living practice. Mill didn&#8217;t have to ask whether the harm principle could be encoded in a model&#8217;s training objective. 
Tocqueville didn&#8217;t have to think about what decentralization looks like when the substrate is compute rather than townships, when the everyday infrastructure of life anticipates your choices rather than forcing you to deliberate, associate, and decide alongside your neighbors. When your community is mediated by algorithmic curation and your civic life is shaped by systems you never consented to and cannot inspect, the Tocquevillian question of how free people learn to govern themselves together doesn&#8217;t disappear. It becomes harder, and the institutional forms it requires don&#8217;t exist yet.</p><p>That&#8217;s where your last point lands, and I think it&#8217;s the right one. The proudest achievement of the eighteenth century was the translation of philosophy into law: Enlightenment commitments about liberty, consent, and the rights of individuals became encoded in constitutions and legal systems that gave them institutional force. The challenge of the twenty-first century is the translation of philosophy into code. The commitments are old. The work of making them operative in the infrastructure that actually governs daily life is new, and it is the work Cosmos exists to do.</p><p>But I wouldn&#8217;t call what we&#8217;re doing a restatement. Restatement is what you do in a seminar. 
Institutional embodiment is what you do when you think the ideas actually matter and must be operative in the AI age.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Thomas Dias&quot;,&quot;id&quot;:1991723,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!6ggB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a39bd9c-ac05-4a4f-8feb-dc691746d73f_970x970.jpeg&quot;,&quot;uuid&quot;:&quot;c722ee4b-dc2f-4b17-a2e4-111fac80b221&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>What do you think of the prospects for a stable, left-right coalition on AI in favor of sensible regulation and general cautious optimism that includes religious conservatives and secular social democrats? Or will this get polarized across political lines like everything else?</strong></p><p>On the coalition point, I can already see signs of this. Religious conservatives and secular social democrats agree on little, but they intuitively grasp some things that many accelerationists don&#8217;t: that people are formed by their communities, work and dignity are connected, and that we shouldn&#8217;t try to optimize society into passivity. I&#8217;d also throw old school liberals into that coalition too. In the coming years, I&#8217;m sure there&#8217;ll be scope for productive, broad-based conversations about kids, loneliness, work, and communities.</p><p>Where I&#8217;d push back is the idea that any future coalition should coalesce around &#8220;sensible regulation.&#8221; I don&#8217;t think regulation is the best tool for addressing most of these concerns. Treating it as the default is how you end up with something like the EU AI Act, a classic example of doctor-induced illness. 
It created a compliance moat that only the largest companies can afford to cross, while doing essentially nothing to address the risks it was supposed to mitigate.</p><p>The more productive ground is further upstream. What are we building? What do we fund? What should we teach? What institutions do we need to form? A coalition focused on those questions would look less like a regulatory body and more like a network of individuals doing the building, teaching, and funding that no regulation can mandate.</p><p>I&#8217;m less worried about polarization acting as an obstacle here. Much of this work sits outside electoral politics at the moment, and as far as I&#8217;m concerned the longer that remains the case the better. Partisan dynamics reward exactly the kind of simplification that makes these questions worse. The moment AI becomes a left-right issue, the entire conversation becomes about how much to regulate, and the question of what to build for never gets asked.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Alina&quot;,&quot;id&quot;:236133345,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe23c7b4-e443-4179-95ad-c3fc47d3a1ab_960x2079.png&quot;,&quot;uuid&quot;:&quot;7367f505-e96f-4805-b66c-8c0f60dc6219&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Here is my question: Your three pillars (truth-seeking, autonomy, and decentralisation) are compelling at the individual level. I am curious how you think about them when the actors are states rather than individuals. The US-China AI dynamic, for instance, seems to run against all three: opacity rather than truth-seeking, control rather than autonomy, and concentration rather than decentralisation. 
Does Cosmos&#8217;s framework extend to the question of how countries could potentially cooperate on AI, or does that require a different philosophical foundation entirely?</strong></p><p>Thanks Alina, great question.</p><p>The pillars were designed with individuals and institutions in mind, so extending them to the state level requires real philosophical work.</p><p>Fichte took the Kantian account of individual autonomy and argued that it applied to nations: a people that cannot determine its own form of life is unfree in the same sense an individual under tutelage is unfree. The autonomy pillar, taken seriously, has a national analogue. So does truth-seeking: a polity that can&#8217;t inquire openly into its own condition is in the same trap as a closed mind. And so does decentralization: a world of self-governing peoples is the international expression of the same instinct that makes you wary of concentrated power inside a country.</p><p>But Fichte also shows you what happens when you scale autonomy <em>alone</em>. His attempt to extend individual self-determination to the collective ended in arguments for the unique world-historical mission of the German nation, an autarkic closed state, and the exclusion of those who didn&#8217;t fit the national community. The lesson isn&#8217;t just &#8220;be careful.&#8221; It&#8217;s that the three pillars need to travel together. Autonomy without truth-seeking becomes self-righteousness. Autonomy without decentralization becomes domination. What checks national self-determination is the same thing that checks individual self-determination: openness to correction and the refusal to concentrate power beyond what can be held accountable.</p><p>On US-China, the goal isn&#8217;t a single global regime that imposes one model of AI governance on everyone, because that would violate the decentralization commitment at the international scale. 
The better question is: what conditions allow distinct political communities to develop AI in line with their own forms of life without crushing each other in the process?</p><p>And what happens when a community&#8217;s &#8220;form of life&#8221; involves suppressing the autonomy of its own citizens? The pillars can come into tension here. Respect for national self-determination and respect for individual autonomy  pull in opposite directions.</p><p>This is where Tocqueville matters most. The meaningful unit of self-government is rarely the nation-state on its own. It&#8217;s the dense layer of associations, communities, firms, religious groups, and local institutions that sit between the individual and the state. Any serious thinking about international AI governance has to make room for those middle layers. Tocqueville saw that democratic freedom doesn&#8217;t live in declarations from the center, but in the practice of self-government at the local and associational level.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Mark Frazier&quot;,&quot;id&quot;:2016696,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/e1a4f9ac-4320-4fbe-9336-7de6a2885e14_2740x2740.jpeg&quot;,&quot;uuid&quot;:&quot;7bef9a26-a373-450d-8f6f-d905197b2f16&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Can you set up a path for crowdfunding projects or contests to realize ideas that the Cosmos Institute seeds?</strong></p><p>Interesting. 
Not something we&#8217;ve considered, but we&#8217;ll think about whether there&#8217;s a model that works for us.</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;George&quot;,&quot;id&quot;:107873627,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90f69b02-111d-4605-ac78-fe0fcde64062_750x748.jpeg&quot;,&quot;uuid&quot;:&quot;6bd6765c-fdaf-4bed-8c07-4689ce86c36e&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>Do you plan to have online cohorts?</strong></p><p>No plans right now, but we may consider it in the future!</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Sarthak D&quot;,&quot;id&quot;:31774460,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5d4f7eba-f1a9-42cd-ba95-4076afd0460c_144x144.png&quot;,&quot;uuid&quot;:&quot;b6affe28-a937-4e9d-80d3-56febb9ee98f&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>I see all these wonderful essays and people doing great work. Honestly, I would love to interact with the community + become part of it in some capacity. 
Is there a channel where people who are interested in the ideas that Cosmos is working towards, but who are not necessarily academics or builders, can communicate with the fellows and the team?</strong> </p><p>Not right now, but we are thinking about whether there&#8217;s something we can do here!</p><h4><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Kevin Cutright&quot;,&quot;id&quot;:44516271,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b0c33ad-3ba9-4001-a071-63a94645decb_144x144.png&quot;,&quot;uuid&quot;:&quot;14c38c4a-314c-4813-96ef-fa64801359f5&quot;}" data-component-name="MentionToDOM"></span> asks:</strong></h4><p><strong>I&#8217;m persuaded by the concern about cognitive risks and the need for &#8220;AI for epistemics,&#8221; &#8220;deliberative AI,&#8221; etc. Do you know of organizations developing benchmarks around the goal of bolstering critical thinking and improving epistemic processes and outcomes?</strong></p><p>We have some grant projects that have focused on this. Two that come to mind are <a href="https://arxiv.org/abs/2603.10018">DeliberationBench</a>, which assesses AI persuasion in comparison with diverse human discussion, and <a href="https://x.com/smolotnikov/status/2033946151934955720">Priori</a>, a tool that surfaces hidden assumptions when you are interacting with an AI model. Two of our grantees (<a href="https://prints.blue/">Steven Molotnikov</a> and <a href="https://cathy-fang.com/">Cathy Fang</a>) are running a research study on how Priori and related human oversight interfaces work in practice.</p><p>I think there is a wave of energy in this area. 
Various orgs are thinking more about AI for Human Reasoning (with Future of Life Foundation <a href="https://www.flf.org/fellowship">funding</a> work in this area, Forethought <a href="https://newsletter.forethought.org/p/concrete-projects-to-prepare-for?open=false#%C2%A7tools-for-collective-epistemics">writing</a> about it, and <a href="https://elicit.com/blog/situational-awareness-april-2026">Elicit</a> working on it directly in the for-profit space). Anecdotally, I also hear researchers thinking more about ideas like &#8220;epistemic security&#8221; or &#8220;cognitive security&#8221; or &#8220;cognitive sovereignty&#8221; as well as ways to improve information environments without restricting speech and expression.</p><p>I share your enthusiasm for more work in this area &#8211; both on benchmarking and on technology that better enables open contestation of ideas (inspired by classical liberal premises, and Mill&#8217;s ideas on this). If readers are working on this, please do reach out!</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund AI prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[AMA with Brendan McCord]]></title><description><![CDATA[Cosmos hits 20,000 subscribers. 
Ask me anything.]]></description><link>https://blog.cosmos-institute.org/p/ama-with-brendan-mccord</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/ama-with-brendan-mccord</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Fri, 10 Apr 2026 14:03:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nVEh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nVEh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nVEh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 424w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 848w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 1272w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nVEh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png" width="1714" height="1099" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1099,&quot;width&quot;:1714,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3114416,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nVEh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 424w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 848w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 1272w, https://substackcdn.com/image/fetch/$s_!nVEh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd9c46f3-3b30-4770-bc51-2809008be5bd_1714x1099.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>This week, we crossed 20,000 subscribers on Substack. Thank you to everyone who has read, shared, and engaged with our work. </p><p>We&#8217;ve written about everything from <a href="https://blog.cosmos-institute.org/p/the-claude-boys">Claude Boys</a> to <a href="https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale">Coasean bargaining</a> to the <a href="https://blog.cosmos-institute.org/p/brave-new-nudge">perils of liberal nudging</a>. Reading the comments has often been as rewarding as writing the posts. To mark the milestone, I&#8217;ll be answering your questions on Wednesday, April 15.</p><p><strong>Drop your question in the comments below and upvote the ones you want answered.
I&#8217;ll start responding next week and I&#8217;ll try to take as many as I can.</strong></p><p>There are a few things I&#8217;ve been thinking about that we haven&#8217;t written about yet. This seems like the right place to start.</p><p>Ask about Cosmos, human autonomy, AI x philosophy, or what people in our network are building. Especially questions that are hard, that relate to how we approach AI as builders, or that challenge our assumptions.</p><p>- Brendan</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/p/ama-with-brendan-mccord/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/p/ama-with-brendan-mccord/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[You Are Not a Function]]></title><description><![CDATA[Why the Race to Stay Useful is a Trap]]></description><link>https://blog.cosmos-institute.org/p/you-are-not-a-function</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/you-are-not-a-function</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Fri, 03 Apr 2026 14:35:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qvH2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qvH2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qvH2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 424w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 848w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 1272w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qvH2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png" width="1456" height="1106" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1106,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!qvH2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 424w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 848w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 1272w, https://substackcdn.com/image/fetch/$s_!qvH2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F822ebd2d-7953-4acd-9b28-6666fe9aeddf_1600x1215.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em> The Triumph of Love and Beauty </em>by Thomas Willeboirts Bosschaert (1630)</figcaption></figure></div><p>In the autumn of 1809, Prussia was a country that no longer knew what it was for. Three years earlier, Napoleon had destroyed its army in an afternoon and walked into Berlin without resistance. The king fled. Half the territory was gone, the treasury empty. French soldiers were still garrisoned in the capital.</p><p>As Prussia began rebuilding from the wreckage, most people assumed it needed more officers, administrators, and engineers. People who could do things. The task of designing the new system of education fell to a forty-two-year-old diplomat named Wilhelm von Humboldt. He gave them something else entirely.</p><p>In a series of memoranda written over the next year, he laid out a vision for a new university in Berlin organized around <em>Bildung</em>. The word has no English equivalent. &#8220;Education&#8221; is too narrow, &#8220;self-improvement&#8221; too thin. &#8220;Formation&#8221; gets closest but still misses its moral weight.</p><blockquote><p>Humboldt&#8217;s Bildung means the free, harmonious development of a human being&#8217;s powers into a complete and consistent whole, through encounter with the world in its variety and resistance.</p></blockquote><p>Mill, who took the idea from Humboldt, put it more simply: a human being is more like a tree than a steam engine.</p><p>Humboldt proposed a university where professors and students would be joined in the pursuit of knowledge, unconstrained by political demands.
In a defeated nation hungry for officers and administrators, he was arguing for formation before function.</p><p>The ideal of the modern research university, with its union of teaching and inquiry, its seminar culture, and its commitment to academic freedom, descends from what Humboldt designed in those desperate months.</p><p>Today, the culture of Bildung that animated the university survives only at the margins, sustained by people stubborn enough to work against the grain of the institutions around them. Credentialism twisted the university into a vendor of certificates, and the formation of the student as a complete human being came to seem anachronistic. The cathedrals remain, but not the faith.</p><p>We are in a moment that rhymes with Humboldt&#8217;s own. Technological pressure is once again pushing education toward the practical. <a href="https://blog.cosmos-institute.org/p/what-anyone-building-a-new-university">The evidence of institutional collapse is everywhere</a>: flagship universities slashing PhD admissions, hundreds of degree programs eliminated, dozens of small colleges closed, and a decline in the college-age population still to come.</p><p>The loudest responses to the crisis have come from outside the university. Alex Karp tells young people to skip college and learn a trade. Marc Andreessen argues the university is a credentialing middleman and should be disintermediated. Both are right that the university is failing. But if the answer to a broken formation system is to skip formation altogether, you have already conceded that education is justified only by utility.
</p><p>Neither is asking the question Humboldt asked: What is a human being, that education should serve it?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/subscribe?"><span>Subscribe now</span></a></p><h3><strong>The Tree and the Trap</strong></h3><p>Before Humboldt, the Prussian system assigned students to a vocation, trained them in it, and delivered them into their function. Even on those terms, Bildung wins. Broadly cultivated judgment produces better doctors and better engineers. Technical skill on a foundation of general cultivation is more resilient and more humane than technical skill resting on nothing.</p><p>But if that were the whole justification, Bildung would be nothing more than a roundabout way of minting skilled professionals. A pedagogy can be justified by output. Bildung cannot.</p><p>A tree does not exist in order to produce lumber. You can make lumber from it, and good lumber is nothing to sneer at. But if you look at a tree and see only lumber, you have missed what is standing in front of you. Something is growing there under its own power, toward its own form, and the growing is not a means to some further end.</p><p>Humboldt&#8217;s claim about human beings is the same shape. A person is a self-developing being whose worth is not exhausted by function.</p><p>Bildung braids together the Kantian claim that no person is merely a means, the Greek ideal of harmonious excellence across mind and body and character, and the Romantic conviction that we develop through encounter with what resists us.</p><p>You have had the experience even if you never had a word for it. 
Real engagement with something that has its own demands&#8212;a hard problem, a serious book, a gifted teacher&#8212;changes who you are. You could not have planned the person you became.</p><p>Such formation is not the property of any particular university department. This is not a &#8220;save the humanities&#8221; argument. A coder who, after tackling a hard systems-design problem, comes out thinking differently about complexity, tradeoffs, and the limits of formal reasoning has undergone a kind of Bildung&#8212;but only if the encounter changed who they are, not just what they can do.</p><p>The resistance to all this is understandable.</p><p>In a world where your economic value can evaporate overnight, &#8220;Become a whole person&#8221; sounds like advice from someone who has never worried about next month&#8217;s rent. The utilitarian case for education has the force of necessity behind it. For millions of people, making yourself useful is what responsibility to their families demands.</p><p>Yet if the response to being replaceable is always to train for a different function, you have entered a race you structurally cannot win. The principle that makes your education valuable is the same principle that makes you disposable the moment the function migrates.</p><p>The scramble into computer science was an early sign of the trap: students rushed toward the field that seemed safest, and then AI began destabilizing the very functions it trained them to perform. The flight to function looks rational from inside it; that is what makes it a trap.</p><h3><strong>Solitude and Freedom</strong></h3><p>Bildung cannot be specified in advance. Once you define what the formed person looks like, you have replaced formation with training. So how do you build institutions around it?</p><p>Humboldt&#8217;s solution was to design an environment rather than a curriculum. 
He based the University of Berlin on two principles: solitude and freedom, though each meant something precise in his hands. The university would not answer to the demand for immediate use, and inquiry would follow the question wherever it led, unconstrained by predetermined ends.</p><p>The undoing of that design came in two phases.</p><p>First, a slow hollowing. The German research university crossed the Atlantic when Johns Hopkins was founded in 1876 on Humboldt&#8217;s model. But disciplines hardened into guilds and career tracks, postwar federal funding built research enterprises increasingly detached from teaching, and mass enrollment turned the degree into a sorting mechanism. The university no longer had a single organizing purpose, and no one was left responsible for the student as a whole person. By the mid-twentieth century, the university president Clark Kerr could describe the resulting &#8220;multiversity&#8221; as a collection of separate enterprises held together by little more than a common grievance over parking.</p><p>Then, a rejection of the cure. Recently, at the University of Tulsa, the philosopher Jennifer Frey built a Great Books honors college from scratch. It was a serious formation program inside an institution organized around credentialing. The university removed her, restructured the program, and replaced the deanship with a directorship.</p><h3><strong>Back to Schol&#233;</strong></h3><p>But even if the institutions had held, formation requires something they could never secure at scale: time.</p><p>Aristotle called it <em>schol&#233;</em>. Humboldt had a related word for it: <em>Mu&#223;e</em>. Both named a kind of structured freedom for the work of becoming, and for most of history that freedom was radically exclusive.</p><p>Aristotle could imagine the highest forms of human flourishing only for those relieved of labor by wealth and the work of subordinates and slaves. The good life required freedom from necessity, and in his world only a few could have it.
But in the first book of the <em>Politics</em> he imagined something stranger: that if shuttles could weave by themselves and picks could play the lyre, craftsmen would need no subordinates and masters would need no slaves. The <a href="https://blog.cosmos-institute.org/p/reading-group-recovering-the-intellectual">&#8220;self-guided machine&#8221;</a> would mean that the material basis for leisure no longer depended on the unfreedom of others. It is one of the oldest thought experiments in Western philosophy, and we are now enacting it.</p><p>Aristotle did not celebrate the prospect. He understood that freedom from necessity does not automatically yield the pursuits that make such freedom worth having. In his account, those with wealth and leisure often turned to unlimited acquisition or bodily gratification rather than to the activities that justify leisure in the first place.</p><p>With AI, we are building something like self-guided machines. Whether these systems liberate or merely displace is not settled. But the possibility of leisure at scale is real enough to become a serious question. </p><p>If AI can compress parts of instruction, it may deepen learning where it is used and clear ground for formation where it gives time back. But only if it <a href="https://www.aei.org/technology-and-innovation/ai-works-in-education-when-it-makes-learning-harder-not-easier/">preserves productive struggle</a> rather than bypassing it. </p><p>The alternative is already visible: <a href="https://www.youtube.com/watch?v=ibPycvYASKk">autocomplete for life</a>. Not just help with expression, but the slow outsourcing of judgment itself. That is Bildung&#8217;s antithesis.</p><p>Worse, the same technological society enabling leisure is also shaping the desires of the people who receive it. 
If our dispositions have already been trained toward optimization and outsourced judgment, the freed hours may arrive in hands that no longer know what to do with them.</p><p>For most of history, the conditions of formation were reserved for the few. The capacity for it was not. If schol&#233; at scale is now possible, refusing to pursue it ratifies a world in which full human development remains the privilege of those who can afford time.</p><h3><strong>In the Afterglow</strong></h3><p>Bildung is, at its normative core, anti-servility: the effort to form people who cannot be reduced to instruments of external authority, whether state, market, or algorithm.</p><p>The people who use AI well right now are drawing on judgment they formed before these tools became ambient. They know when to trust an LLM&#8217;s output and when to push back because they learned to read, argue, and sustain attention under conditions in which those acts were not so easily outsourced. They bring something the tool cannot supply.</p><p>Nietzsche thought secular liberals were living off the moral capital of a Christianity they had officially abandoned: inherited capacities that could persist for a time even after the culture that formed them had ceased to renew them. The kind of judgment this essay is defending may be a similar afterglow, formed in a world before AI mediated everything.</p><p>Without that judgment, you get <a href="https://blog.cosmos-institute.org/p/the-claude-boys">agency without autonomy</a>.</p><p>If the capacities required for non-servile life in an AI world were all formed in a pre-AI world, what happens when that formation stops? You can live on an inheritance for a while. You cannot educate a civilization on inherited judgment forever.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund AI prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Reading List: AI and the Future of Education]]></title><description><![CDATA[What anyone building a new university needs to read]]></description><link>https://blog.cosmos-institute.org/p/what-anyone-building-a-new-university</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/what-anyone-building-a-new-university</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Fri, 27 Mar 2026 15:06:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oCy9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oCy9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oCy9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 424w, 
https://substackcdn.com/image/fetch/$s_!oCy9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 848w, https://substackcdn.com/image/fetch/$s_!oCy9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 1272w, https://substackcdn.com/image/fetch/$s_!oCy9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oCy9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png" width="728" height="361.13385826771656" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:315,&quot;width&quot;:635,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:355914,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/192310307?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!oCy9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 424w, https://substackcdn.com/image/fetch/$s_!oCy9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 848w, https://substackcdn.com/image/fetch/$s_!oCy9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 1272w, https://substackcdn.com/image/fetch/$s_!oCy9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9c311ca-6189-420d-8d3a-7ec30a357e2a_635x315.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>A Reading from Homer </em>by Lawrence Alma-Tadema <em>(1885)</em></figcaption></figure></div><p>The university is under siege.</p><p>Harvard, sitting on a $57 billion endowment, slashed 75 percent of its science and 60 percent of its humanities PhD admissions. Indiana mandated the elimination of over 400 degree programs at its public universities. Classics, comparative literature, and foreign languages at the state flagship are gone. At the University of Tulsa, a philosopher <a href="https://mindmatters.ai/2025/07/which-way-oh-modern-university/">built a Great Books honors college from scratch</a>, grew enrollment 500 percent, and attracted a quarter of each freshman class to a curriculum running from Homer to Arendt. The university removed her.</p><p>Sixty-four small colleges have closed since 2020, and the demographic cliff, a 13 percent decline in college-age students projected through 2041, hasn&#8217;t even started.</p><p>Meanwhile, 52 percent of recent graduates are working jobs that don&#8217;t require the degree they borrowed $30,000 to get. Palantir CEO Alex Karp is <a href="https://fortune.com/2026/03/24/palantir-ceo-alex-karp-two-people-successful-in-ai-era-vocational-skills-neurodivergence-gen-z-career-advice/">telling young people</a> to skip elite colleges entirely, saying the only paths left are skilled trades or neurodivergence.</p><p>AI will accelerate this.
The skills the degree was supposed to impart &#8211; researching, analyzing, drafting, coding &#8211; are increasingly things a machine can do.</p><p>As the old institutions break down, a new generation of educational founders is running experiments. New schools are emerging, each making a different bet on two questions that anyone building an educational institution has to answer:</p><ul><li><p><strong>The first question is about purpose.</strong> Do you focus on specific, trainable skills, or on a broader, if more intangible formation, which includes the development of judgment, attention, and moral seriousness?</p></li><li><p><strong>The second question is about technology.</strong> Do you build AI into the core of the educational experience or keep it out because the difficulty is where the learning happens?</p></li></ul><p>These two axes produce a landscape.</p><p>At one end, programs like <a href="https://www.gauntletai.com/">Gauntlet</a> train elite AI engineers in ten weeks: 80-100 hours a week of building, a guaranteed $200K job offer at the end, costs paid entirely by employers. </p><p>At the other, institutions like <a href="https://www.sjc.edu/">St. John&#8217;s College</a> run intensive liberal arts programs built on sustained attention and close reading, with no AI in sight. </p><p><a href="https://www.asu.edu/">Arizona State</a> has embedded AI from coursework to advising and has struck partnerships with a number of high-profile AI companies, but the core teaching model hasn&#8217;t changed. </p><p><a href="https://uaustin.org/">The University of Austin</a> has proposed splitting the day between device-free seminars and intensive AI work. 
</p><p>Meanwhile, the bulk of the higher-education system is either banning AI from the classroom or pretending that it doesn&#8217;t exist.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!thSa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!thSa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 424w, https://substackcdn.com/image/fetch/$s_!thSa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 848w, https://substackcdn.com/image/fetch/$s_!thSa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 1272w, https://substackcdn.com/image/fetch/$s_!thSa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!thSa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png" width="1094" height="668" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:668,&quot;width&quot;:1094,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:100413,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/192310307?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!thSa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 424w, https://substackcdn.com/image/fetch/$s_!thSa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 848w, https://substackcdn.com/image/fetch/$s_!thSa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 1272w, https://substackcdn.com/image/fetch/$s_!thSa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F953eae92-0c67-4893-97ce-e85cdff28337_1094x668.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>These questions are not new. They go back to the founding of the first American colleges and European research universities. </p><p>In partnership with <a href="https://www.libertyfund.org/">Liberty Fund</a>, Cosmos held a seminar to think through them with a group of founders, scholars, and institution-builders. 
The participants included philosophers from UT Austin and Ohio State; researchers from MIT Media Lab, RAND, and leading AI labs; education policymakers from the Texas Higher Education Coordinating Board; and builders from Fractal Tech and Alpha School.</p><p>The reading list we assembled traces these tensions to their origins: from the earliest American proposals for public education through the founding documents of the research university, the mid-century debates about liberal learning, and the first serious writing about what computers might do to the relationship between learner and knowledge.</p><h3><strong>Session I: The Promise and Crisis of the University</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6EV7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6EV7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 424w, https://substackcdn.com/image/fetch/$s_!6EV7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 848w, 
https://substackcdn.com/image/fetch/$s_!6EV7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 1272w, https://substackcdn.com/image/fetch/$s_!6EV7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6EV7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png" width="1456" height="532" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb1b9257-53a4-4249-af75-68979834fddd_1505x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:532,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6EV7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 424w, https://substackcdn.com/image/fetch/$s_!6EV7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 848w, 
https://substackcdn.com/image/fetch/$s_!6EV7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 1272w, https://substackcdn.com/image/fetch/$s_!6EV7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb1b9257-53a4-4249-af75-68979834fddd_1505x550.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>We are not the first to ask what the university is for.</p><p>This session combined five voices, spanning three centuries. 
Franklin wants a practical institution for a practical republic. Humboldt argues that the university exists for the development of the whole person, not the production of useful professionals. Oakeshott insists that education is how a human being becomes one. Bloom diagnoses what happens when that inheritance is abandoned. And Caplan asks whether the whole enterprise is a $240,000 receipt that signals you can show up on time.</p><p>If Caplan is even partly right, the reformers need to explain what they are offering that a credential cannot capture. If Humboldt is right, the reformers need to explain why two centuries of institutions built on his model ended up producing exactly the credentialism he warned against.</p><ul><li><p>Benjamin Franklin, <em>Proposals Relating to the Education of Youth in Pennsilvania</em>. (<a href="https://archives.upenn.edu/digitized-resources/docs-pubs/franklin-proposals/">link</a>)</p></li><li><p>Wilhelm von Humboldt, &#8220;On the Internal and External Organization of the Higher Academic Institutions in Berlin&#8221;. 
(<a href="https://germanhistorydocs.org/en/the-holy-roman-empire-1648-1815/wilhelm-von-humboldt-s-treatise-quot-on-the-internal-and-external-organization-of-the-higher-scientific-institutions-in-berlin-quot-1810.pdf">link</a>)</p></li><li><p>Michael Oakeshott, <em>The Voice of Liberal Learning</em>, Introduction.</p></li><li><p>Allan Bloom, &#8220;Our Listless Universities.&#8221; (<a href="https://www.nationalreview.com/2006/09/our-listless-universities-williumrex/">link</a>)</p></li><li><p>Bryan Caplan, <em>The Case Against Education</em>, Chapter 1, &#8220;The Magic of Education.&#8221;</p></li></ul><h3><strong>Session II: What Formation Requires</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WTLW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WTLW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 424w, https://substackcdn.com/image/fetch/$s_!WTLW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 848w, https://substackcdn.com/image/fetch/$s_!WTLW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 1272w, 
https://substackcdn.com/image/fetch/$s_!WTLW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WTLW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png" width="1456" height="537" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:537,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WTLW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 424w, https://substackcdn.com/image/fetch/$s_!WTLW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 848w, https://substackcdn.com/image/fetch/$s_!WTLW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 1272w, 
https://substackcdn.com/image/fetch/$s_!WTLW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe54199c0-e5ab-4e01-82d8-96028375e31d_1490x550.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>If the first session asked what the university is for, the second focused on what actually changes a person. Weil makes the strongest claim: attention is all you need. 
Klein argues that liberal education requires putting your own opinions genuinely at risk, and that the resistance to doing so is rooted in something specifically human: we build our identities on what we think we know. Gadamer argues that practical wisdom cannot be taught as a curriculum; it grows out of ethos, the character already formed by living in a particular community.</p><p>But what if the boundaries of the person aren&#8217;t where we think they are? Clark and Chalmers ask whether cognition is even confined to the skull. If the mind extends into the tools and environments we think with, what counts as &#8220;the student&#8221;? Shanahan complicates the picture further: LLMs create a compelling illusion of understanding, but they are fundamentally unlike us. If formation requires genuine encounter with other minds, what happens when the most available interlocutor is a machine?</p><ul><li><p>Simone Weil, <em>Waiting for God</em>, &#8220;Reflections on the Right Use of School Studies with a View to the Love of God.&#8221;</p></li><li><p>Jacob Klein, &#8220;The Idea of Liberal Education&#8221; in <em>The Goals of Higher Education</em>, ed. W.D. Weatherford, Jr.</p></li><li><p>Hans-Georg Gadamer, &#8220;The Socratic Question and Aristotle&#8221;, <em>Continental Philosophy Review</em>.</p></li><li><p>Andy Clark and David Chalmers, &#8220;The Extended Mind&#8221;, <em>Analysis</em>. (<a href="https://era.ed.ac.uk/server/api/core/bitstreams/aac16bf6-a3d8-4112-aeba-e442b164209e/content">link</a>)</p></li><li><p>Murray Shanahan, &#8220;Talking about Large Language Models&#8221;, <em>Communications of the ACM</em>. 
(<a href="https://dl.acm.org/doi/10.1145/3624724">link</a>)</p></li></ul><h3>Session III: The Institutional Question</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DF0h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DF0h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 424w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 848w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 1272w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DF0h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png" width="724" height="341.21679520137104" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:550,&quot;width&quot;:1167,&quot;resizeWidth&quot;:724,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DF0h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 424w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 848w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 1272w, https://substackcdn.com/image/fetch/$s_!DF0h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf8aa6ab-da09-41eb-9395-ac0f7bbbe60d_1167x550.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Good ideas about education are easy. Institutions that embody them are hard. This session focused on the gap. Jefferson proposes education as grafting: implanting something new onto the wild stock of human nature. Newman argues that the value of a university lies in the sheer density of its intellectual community, that students gain from living among those who represent the whole circle of knowledge even if they can never study it all.</p><p>Karlsson brings this into the present. His argument is sobering: AI tutors will be held back by culture rather than technology. Motivated learners embedded in high-growth communities will use AI to accelerate, while everyone else will use it to avoid difficulty. 
The real challenge will be building strong cultural norms against taking the path of least resistance.</p><ul><li><p>Thomas Jefferson, &#8220;Draft of the Rockfish Gap Report of the University of Virginia.&#8221; (<a href="https://founders.archives.gov/documents/Jefferson/03-13-02-0197-0004">link</a>)</p></li><li><p>John Henry Newman, <em>The Idea of a University</em>, Discourse 5, &#8220;Knowledge its Own End.&#8221; (<a href="https://www.newmanreader.org/works/idea/">link</a>)</p></li><li><p>Henrik Karlsson, &#8220;AI tutors will be held back by culture.&#8221; (<a href="https://www.henrikkarlsson.xyz/p/ai-tutors">link</a>)</p></li></ul><h3><strong>Session IV: Education and the Machine</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!amQY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!amQY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 424w, https://substackcdn.com/image/fetch/$s_!amQY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 848w, https://substackcdn.com/image/fetch/$s_!amQY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 1272w, 
https://substackcdn.com/image/fetch/$s_!amQY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!amQY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png" width="1290" height="550" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:550,&quot;width&quot;:1290,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!amQY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 424w, https://substackcdn.com/image/fetch/$s_!amQY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 848w, https://substackcdn.com/image/fetch/$s_!amQY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 1272w, 
https://substackcdn.com/image/fetch/$s_!amQY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8c5f09cf-8a6e-4326-a912-9103fb432743_1290x550.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This session turned directly to the question of what technology does to thinking. Licklider, writing in 1960, imagined a partnership: machines handle the routine cognitive work, freeing humans for insight and decision. Papert envisaged a more profound interplay. 
He argued that when a child learns to program, the relationship between learner and knowledge is fundamentally transformed. The child is no longer receiving explanations but building things, and the building changes how she thinks.</p><p>Matuschak and Nielsen pick up the thread sixty years later and find it frayed. The pioneering visions of tools for thought are treated as nostalgia in technology circles. There is little determined effort to build tools that genuinely transform how people understand. The discussion focused on whether the current wave of AI represented a chance to revive the Licklider-Papert vision or whether tools for thought were likely to become an even more distant memory.</p><ul><li><p>J.C.R. Licklider, &#8220;Man-Computer Symbiosis,&#8221; <em>IRE Transactions on Human Factors in Electronics</em>.</p></li><li><p>Seymour Papert, <em>Mindstorms: Children, Computers, and Powerful Ideas</em>, Ch. 1.</p></li><li><p>Andy Matuschak and Michael Nielsen, &#8220;How can we develop transformative tools for thought?&#8221; (<a href="https://numinous.productions/ttft/">link</a>)</p></li></ul><h3><strong>Session V: Building the New Academy</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nlI9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nlI9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 424w, 
https://substackcdn.com/image/fetch/$s_!nlI9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 848w, https://substackcdn.com/image/fetch/$s_!nlI9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 1272w, https://substackcdn.com/image/fetch/$s_!nlI9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nlI9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png" width="1456" height="506" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:506,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nlI9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 424w, 
https://substackcdn.com/image/fetch/$s_!nlI9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 848w, https://substackcdn.com/image/fetch/$s_!nlI9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 1272w, https://substackcdn.com/image/fetch/$s_!nlI9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F909e3571-2bc8-4188-bd82-140e146aef93_1582x550.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The final session focused on what comes next. Flexner&#8217;s defense of useless knowledge, written from the Institute for Advanced Study, is the purest case for protecting inquiry from the demand for application. Bloom&#8217;s portrait of Hutchins at UChicago shows what it looks like when someone actually tries to build an institution around these convictions, and the political will it requires. Simondon challenges the premise that technology and culture are opposed, arguing that the hostility between them is a sign of ignorance.</p><p>Engelbart closes the list with a constraint AI is testing: &#8220;The entire effect of an individual on the world stems essentially from what he can transmit to the world through his limited motor channels.&#8221; As those channels widen, the bottleneck shifts to whether you have something worth transmitting.</p><ul><li><p>Alexander Flexner, &#8220;The Usefulness of Useless Knowledge,&#8221; <em>Harper&#8217;s Magazine</em>. (<a href="https://www.ias.edu/sites/default/files/library/UsefulnessHarpers.pdf">link</a>)</p></li><li><p>Allan Bloom, &#8220;Hutchins&#8217;s Idea of a University,&#8221; <em>Times Literary Supplement</em>.</p></li><li><p>Gilbert Simondon, <em>On the Mode of Existence of Technical Objects</em>, &#8220;Introduction.&#8221;</p></li><li><p>Douglas C. 
Engelbart, &#8220;Augmenting Human Intellect: A Conceptual Framework,&#8221; SRI Summary Report AFOSR-3223.</p></li></ul><p>Is there an institution that protects the conditions for deep formation and teaches students to master the tools that will define their century?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, with programs, grants, events, and fellowships for those building AI for human flourishing</em></p>]]></content:encoded></item><item><title><![CDATA[Science Needs Scientists]]></title><description><![CDATA[And Scientists Need Science]]></description><link>https://blog.cosmos-institute.org/p/science-needs-scientists</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/science-needs-scientists</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Fri, 20 Mar 2026 15:03:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!f9nQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Today&#8217;s essay is a guest post by <a href="https://iuliaetal.wordpress.com/about/">Iulia Georgescu</a>, a physicist and independent scholar researching the history of computational physics.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!f9nQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!f9nQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 424w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 848w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 1272w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!f9nQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png" width="749" height="600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/521ad059-9919-4794-a00a-f88b09905e4d_749x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:600,&quot;width&quot;:749,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!f9nQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 424w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 848w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 1272w, https://substackcdn.com/image/fetch/$s_!f9nQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521ad059-9919-4794-a00a-f88b09905e4d_749x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Harmony</em> by Remedios Varo (1956)</figcaption></figure></div><p>&#8220;Artificial Intelligence will revolutionize science.&#8221; The phrase is usually invoked by researchers when discussing the promise of thinking machines to help us understand the natural world. In the early 2020s, I subscribed to, and was perhaps even evangelizing for, the role of AI in scientific research. Like many of my colleagues, I was gripped by a new tool that promised so much.</p><p>More recently my view has changed. I still believe that AI will be fantastically useful, but not necessarily in the way we think. Discovery, after all, is not the same as understanding. As scientists, we need to engage with the process of inquiry to truly make sense of what we learn about the world. 
Only then can we understand.</p><h3>The Long History of &#8220;AI for Science&#8221;</h3><p>In the summer of 1953, physicists Enrico Fermi and John Pasta and mathematicians Stanislaw Ulam and Mary Tsingou ran the first &#8220;numerical experiment&#8221; on MANIAC, one of the early electronic computers built at the Los Alamos National Laboratory after World War II. Computers were new, and scientists were excited to explore their use in solving research problems like simulating the dynamics of atoms and molecules, studying tumor cell populations, and numerically investigating fluid dynamics. They even created the first documented chess-playing program to defeat a human in the game.</p><p>This group decided to model a chain of oscillators (identical masses connected by springs), add a small nonlinearity (a quadratic or cubic term), and see what would happen. The expectation was that the system would reach equilibrium (common sense would predict some wiggling around, with energy ultimately ending up equally distributed throughout the springs). But the results were surprising. Instead, the system showed recurrent behavior in which the energy spread throughout the chain, then came back, before spreading out again. This observation sparked interest in the study of nonlinear systems that would later lead, among other things, to the discovery of solitons (waves that travel freely while preserving their shape) and the development of chaos theory in the 1960s-1970s.</p><p>If one were to assign a birth year to computational physics, that honor should go to 1953, when Fermi and colleagues made, for the first time, an unexpected discovery through a purely computational approach.
Computers had been used for physics and astronomy calculations before &#8211; for example, in the 1930s mechanical IBM accounting machines were modified to solve the differential equations of planetary motion by numerical integration &#8211; and had played a key role in the development of the atomic bomb during the Manhattan Project. In the following decades computers would transform scientific research. Today, there are almost no advances in physics that are not enabled by some aspect of computer simulation.</p><p>Three years after the &#8220;first numerical experiment,&#8221; the term <em>artificial intelligence</em> was coined, and a first attempt was made to use automated reasoning to prove mathematical theorems through the Logic Theorist system in 1956. The first AI &#8220;expert&#8221; system, DENDRAL, was created in 1965. Combining a knowledge base with a reasoning engine, it was capable of determining the molecular structure of a compound from its mass spectra. Here, &#8220;expert&#8221; refers to a kind of AI system that encoded domain knowledge from human experts as rules and applied them through an inference engine.</p><p>Other examples include LHASA, a program designed in 1972 to discover sequences of reactions to synthesize a molecule; the Automated Mathematician, a heuristic-based program designed in 1977 to discover new mathematical concepts and theorems; and other expert systems like MYCIN (to identify bacteria) and PROSPECTOR (for mineral analysis). In the late 1980s and early 1990s, artificial neural networks were used to tackle problems in particle physics, astrophysics, and &#8211; in a glimpse of what was to come &#8211; protein structure prediction. By the 2000s, AI methods were in use in many areas of physics and even assisted the data analysis that led to the discovery of the Higgs boson.</p><p>In the 2010s and 2020s, scientists began to use a wave of powerful neural network-based tools to solve research problems.
Some of these advances, such as Google DeepMind&#8217;s AlphaFold protein prediction system, made headlines around the world. Others were important but less glamorous. Machine-learned interatomic potentials, for example, significantly speed up materials and chemistry simulations yet remain unknown outside the research community.</p><p>But if &#8220;AI for science&#8221; has a long history, why has it only recently started to make the news? In his 2003 book, Douglas S. Robertson used the term &#8220;phase change&#8221; to describe a radical change that an instrument makes possible compared to the prior state of the art. AI may be a phase change in how we do science, but not only because of powerful individual tools. It is their extreme accessibility, from large language models to the AlphaFold Protein Structure Database, that sees advances in one field become instruments of exploration in others.</p><h3>Epistemic Enhancers</h3><p>Scientific instruments provide extensions to humans&#8217; senses. Telescopes and microscopes allowed us to see the far and the small.
This type of enhancement is known as <em>extrapolation</em>. Another type of epistemic support is <em>conversion</em>, that is, transforming one modality into another (like sound into a visual image). Finally, we have those instruments that extend our capacities by giving access to phenomena that elude our senses, such as detecting radiation or magnetic fields. This is known as <em>augmentation</em>. These three moves &#8211; <em>extrapolation</em>, <em>conversion</em>, and <em>augmentation</em> &#8211; are all types of epistemic enhancers.</p><p>A central motivation behind scientific practice is understanding. But what <em>is</em> scientific understanding? The 2009 book <em>Scientific Understanding</em>, edited by Henk W. de Regt, Sabina Leonelli, and Kai Eigner, suggests that there is no universal definition of scientific understanding. Understanding in physics differs from understanding in biology or engineering. Even within my own discipline, physics, what we mean by understanding is not straightforward. While clearly related to the explanation of a phenomenon, understanding is not precisely the same thing. There is a difference between understanding a phenomenon and understanding the theories or models that explain it.</p><p>A prerequisite for understanding is discovery, insofar as one cannot understand a phenomenon that has not been observed. Discovery is the process or product of successful scientific inquiry. Objects of discovery are things, events, processes, causes, and properties, as well as theories and hypotheses and their features. The 1987 book <em>Scientific Discovery</em> proposed that discovery in science can be broken down into problem-solving tasks and can therefore be automated with computers.</p><p>The authors of the book built four types of programs to look for quantitative or qualitative laws and structural models. Then they used the programs individually and in combination to rediscover many laws in physics and chemistry.
While the authors recognized that there is no unique process that accounts for scientific discovery, they showed that most discovery processes can be cast as problem-solving tasks that can be tackled with heuristics. Their method, based on tree searches, was general in theory but limited in practice, because the combinatorial explosion of possible paths made the search intractable for problems beyond a modest scale.</p><p>Both traditional numerical methods and modern AI tools enhance epistemic extrapolation and conversion. The application of such methods represents a difference in kind as well as magnitude for scientific practice. As Paul Humphreys put it: &#8220;This extrapolation of our computational abilities takes us to a region where the quantitatively different becomes the qualitatively different.&#8221; This is because these simulations cannot be carried out in practice except in regions of computational speed far beyond the capacities of humans. This dual articulation of the quantitative and the qualitative could be taken as an argument for why modern AI tools ought to usher in a new way of doing science.</p><p>In the 1960s, mathematicians Bryan John Birch and Peter Swinnerton-Dyer used the EDSAC-2 computer at the University of Cambridge Computer Laboratory to run numerical calculations that allowed them to state a conjecture about the set of rational solutions to equations defining an elliptic curve. This is now known as the Birch and Swinnerton-Dyer (BSD) conjecture and is one of the seven <a href="https://www.claymath.org/millennium-problems/">Millennium Prize Problems</a>. The computer helped them explore an abstract space so they could formulate the conjecture. This computer-assisted discovery falls into the final category of epistemic enhancers mentioned above: augmentation.</p><p>If the formulation of the BSD conjecture is a weaker example of augmentation, more recently AI methods helped mathematicians make a breakthrough towards solving it.
Using machine learning, mathematicians discovered unexpected oscillatory patterns in the parameter space key to BSD (one of hundreds of dimensions), which led to the &#8220;murmuration&#8221; <a href="https://www.quantamagazine.org/elliptic-curve-murmurations-found-with-ai-take-flight-20240305/">conjectures</a>. Progress has also been made towards solving another Millennium Prize Problem, as AI tools were used to find <a href="https://www.quantamagazine.org/using-ai-mathematicians-find-hidden-glitches-in-fluid-equations-20260109/">potential singularities</a> in the Navier-Stokes equations.</p><p>Although automated theorem-proving programs have been around for some seven decades, these examples do suggest that we are witnessing, in Robertson&#8217;s terms, a phase change. The quantitatively different becomes the qualitatively different in a more interesting way. It&#8217;s speed, yes, but it&#8217;s also breadth. Take AlphaFold and the AlphaFold Protein Structure Database it made possible. What makes it revolutionary is not only the improved accuracy of the protein structure prediction, but also its usability by scientists from many fields (who can access hundreds of millions of protein structure predictions).</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LKP0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LKP0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 424w,
https://substackcdn.com/image/fetch/$s_!LKP0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 848w, https://substackcdn.com/image/fetch/$s_!LKP0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 1272w, https://substackcdn.com/image/fetch/$s_!LKP0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LKP0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png" width="1214" height="400" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:400,&quot;width&quot;:1214,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:67388,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/191347337?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!LKP0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 424w, https://substackcdn.com/image/fetch/$s_!LKP0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 848w, https://substackcdn.com/image/fetch/$s_!LKP0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 1272w, https://substackcdn.com/image/fetch/$s_!LKP0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4433ea3c-d89e-400c-868b-ecd8e7e4df2e_1214x400.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We are yet to see the full extent of what AI tools can do for scientific research, but one differentiator between them and traditional scientific instruments, including computer simulation, is that they combine epistemic enhancers to explore vast dimensions of information. Remember DENDRAL, the early AI system determining the molecular structure of a compound from its mass spectra? Now, imagine the power of its modern incarnation which has access to several databases of molecular spectra, can cross-check with published scientific literature, and has the ability to write code to perform additional calculations if needed.</p><p>Or consider a powerful telescope that discovers a new exoplanet. Here an AI tool analyzes the spectra, determines the likely composition of the exoplanet atmosphere, compares it with the existing recorded information, runs simulations with the known and inferred parameters to better characterize the exoplanet and adds it to the catalog. All of these individual steps are already done separately and for each the use of AI tools can be seen as an evolutionary improvement. But the combination of these improvements, working together across different sources of information is potentially revolutionary. </p><p>If I&#8217;m permitted to add a new epistemic enhancer category for AI as an instrument the most appropriate would be <em>integration</em> (as in synthesizing across large amounts of very different types of information). When a tool synthesizes across hundreds of heterogeneous sources simultaneously, the inferential path from evidence to conclusion can become challenging for the scientist receiving the result to trace. 
The more dimensions integration spans, the wider this gap between discovery and reconstruction becomes. Where earlier epistemic enhancers extended what scientists could see or calculate, integration may increasingly determine what they conclude.</p><h3>Our Role as Scientists</h3><p>At the end of his book, Humphreys argued that computer simulation brings a &#8220;shift of emphasis in the scientific enterprise away from humans.&#8221; By the 1980s, researchers already knew that, at least in part, scientific discovery and problem generation could be successfully automated. Modern AI methods can do the same thing, only much faster and much better. I regularly hear from colleagues that in the coming years AI will take over many of the tasks associated with scientific inquiry.</p><p>Use of AI will likely mean more discoveries overall, but to discover is not to understand. Henk de Regt <a href="https://doi.org/10.1086/710520">argued</a> that &#8220;scientists achieve understanding of phenomena by basing their explanations on intelligible theories. The intelligibility of theories is related to scientists&#8217; abilities: theories are intelligible if scientists have the skills to use those theories in fruitful ways.&#8221; Quantum mechanics is an example of a very effective yet not particularly intelligible theory: despite its practical successes, its interpretation still eludes physicists. If AI tools can optimize discovery and generate explanations, they could perhaps also help produce <a href="https://www.nature.com/articles/s42254-022-00497-5">more intelligible</a> theories.</p><p>Yet intelligibility is not a property of theories themselves. As de Regt suggests, it depends on the capacities of the scientists who use them. A theory becomes intelligible only when scientists acquire the skills to explore its implications and apply it fruitfully.
Even if machines make discoveries and generate explanations, this kind of understanding still depends on the participation of human investigators. Put differently: scientists need science just as science needs scientists.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[AI won&#8217;t fix central planning ]]></title><description><![CDATA[Even superintelligence needs a price]]></description><link>https://blog.cosmos-institute.org/p/ai-wont-fix-central-planning</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/ai-wont-fix-central-planning</guid><dc:creator><![CDATA[Alex Chalmers]]></dc:creator><pubDate>Fri, 13 Mar 2026 15:23:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!YQNj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YQNj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YQNj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 424w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 848w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 1272w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YQNj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png" width="1456" height="994" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:994,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YQNj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 424w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 848w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 1272w, https://substackcdn.com/image/fetch/$s_!YQNj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cf6156-24c7-40d1-af38-b261bc07adc9_1600x1092.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">New Planet by Konstantin Yuon (1921)</figcaption></figure></div><p>In 1962, Victor Glushkov pitched the Soviet authorities on a nationwide cybernetics network to solve the oldest problem in socialist economics: how to allocate resources without private property and market prices. Washington was alarmed enough for the CIA to create a special taskforce.</p><p>Just as Soviet industrialization was giving way to stagnation, the first serious computers gave the planned economy a shot in the arm.
The central planners believed that they&#8217;d struggled to process information fast enough, so increased computing power promised a solution.</p><p>In 1967, the Polish economist Oskar Lange described the market as &#8220;a computing device of the pre-electronic age,&#8221; suggesting we could put &#8220;simultaneous equations on an electronic computer and obtain the solution in less than half a second.&#8221; In 1971, Chile&#8217;s socialist government attempted a version of this, managed via telex machines, only for it to be cut short by the coup of 1973.</p><p>Thirty years after the fall of the Soviet Union, enthusiasm for these ideas has returned. In 2016, Jack Ma <a href="https://www.sciencedirect.com/science/article/abs/pii/S0167268122004048#:~:text=Over%20the%20past%20100%20years,technosocialism%20for%20the%2021st%20century.">predicted that</a> &#8220;the planned economy will become increasingly big &#8230; because with access to all kinds of data, we may be able to find the invisible hand of the market.&#8221; Marxist economists have written <a href="https://www.versobooks.com/en-gb/products/636-the-people-s-republic-of-walmart">with surprising enthusiasm</a> about Walmart and Amazon, viewing them as technologically-enabled planned economies.</p><p>The greatest excitement has been reserved for advanced AI.
Zvi Mowshowitz <a href="https://www.lesswrong.com/posts/kqz4EH3bHdRJCKMGk/ai-106-not-so-fast">has argued</a> that AI &#8220;can embody the preferences and knowledge of many or even all humans, in a way an individual human or group of humans never could.&#8221; Meanwhile, Erik Brynjolfsson and Zo&#235; Hitzig <a href="https://www.nber.org/system/files/chapters/c15303/revisions/c15303.rev0.pdf">have made the case that</a>, by combining immense processing capacity with the ability to codify tacit knowledge through computer vision, language, and sensor data, AI could erode the traditional advantages of decentralization.</p><p>The optimists attack the case for traditional markets and decentralization from multiple directions: AI can match or exceed the information-processing advantages of markets, capture knowledge embedded in human judgment, simulate competition without running it, assess outcomes markets model badly through proxies, or simply replace the human participants whose limitations created the problem in the first place.</p><p>Despite their diversity, many of these arguments fall into the same traps. 
They routinely misstate the case for decentralization and flatten the distinction between different kinds of knowledge, while treating any unsolved problems as an engineering detail.</p><h3>The pursuit of knowledge </h3><p>The most influential case against central planning was made on epistemological grounds by Friedrich Hayek. Oskar Lange and many of his successors read him as making a simple point about transmission: the useful knowledge in any economy is spread across millions of minds, so no central authority can collect it fast enough to act on it.</p><p>This is dangerously wrong. In <a href="https://oll.libertyfund.org/publications/reading-room/hayek-boll-12-f-a-hayek-the-use-of-knowledge-in-society-1945">&#8220;The Use of Knowledge in Society,&#8221;</a> Hayek distinguishes between two different types of knowledge. The first is scientific or theoretical knowledge, which can be stated in general rules or principles.
In theory, this kind of knowledge could be effectively concentrated in a single mind or system &#8211; arguably, LLMs already do this very well.</p><p>The second type is what he calls &#8220;knowledge of the particular circumstances of time and place.&#8221; This is knowledge that is embedded in practice, judgment, and context. For example, a farmer may know that a specific field drains poorly in its southeast corner or a sales rep may notice a change in body language with a long-standing client. This kind of knowledge is derived from the experience of a specific context, rather than from theoretical training or by performing a regression analysis. The person who has this knowledge will frequently struggle to explain how they acquired it.</p><p>This concept of tacit knowledge was expressed in more detail by Michael Polanyi, a chemist turned philosopher of science. Polanyi famously formulated tacit knowledge as the idea that &#8220;we can know more than we can tell.&#8221; There are lots of things we can do that we would struggle to articulate. The theoretical account of riding a bike &#8211; adjusting angular momentum through micro-corrections in steering &#8211; bears little relationship to the knowledge that you possess and exercise when doing it.</p><p>Polanyi&#8217;s view is that tacit knowledge is not just knowledge that happens to be unstated, but instead has a distinctive architecture.</p><p>Imagine a bank manager in a meeting with a local business owner, asking for an extension to their loan. In the course of this interaction, she senses that something isn&#8217;t right.</p><p>The bank manager is picking up on a series of cues, such as the business owner&#8217;s posture, the rhythm of his speech, or the differences from their past interactions. The bank manager experiences all of this as a single act of perception, rather than a series of data points. 
If we tried to unpick this knowledge and asked the bank manager to list out all the data points she picked up on, it would be akin to asking a pianist to state the precise angle of each finger while she&#8217;s playing.</p><p>Of course, the planner at this point could argue that even if the bank manager can&#8217;t articulate these cues, we could train a model across video, audio, and biometrics to detect the same patterns she&#8217;s detecting. The model doesn&#8217;t need to have the same experience as her; it just needs to produce the same outputs. For example, driving was long held up as an example of inalienable tacit knowledge; Polanyi himself argued that &#8220;the skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar.&#8221; Despite this, we now have highly performant self-driving cars.</p><p>This only tells us so much. Driving is mostly about seeing things and moving your body in response. The environment is physically constrained and the action space is narrow. While complicated, it is markedly less ambiguous than navigating a dense web of human intentions and social meanings. AI excels at chess, but falters in complex social reasoning games. The market is far more like the latter.</p><p>The distinction here goes beyond difficulty. A car navigates spatial relationships and physical dynamics, whereas what the bank manager does is categorically different: she is interpreting, drawing on a framework of meaning built up through years of situated experience that organizes her perception before any calculation begins. That framework is the structure through which the interaction becomes intelligible to her at all. More processing power has no bearing on a gap like this.</p><p>By being situated in both the conversation and the wider social context, our bank manager also has a few other advantages versus an impersonal system. For a start, she has skin in the game.
If she gets this call wrong, her reputation and business could suffer; in extreme cases, her physical safety could be at risk. Secondly, unlike a system observing a bunch of cues, she is an active participant in the interaction. Her tone and method of formulating questions are all part of the interaction. Finally, she&#8217;s situated in the community. She knows both the social norms and the realities of running a business in that area.</p><h3>The snake devours its tail</h3><p>Eventually, we get stuck in a loop. Even if we could train an AI on these interactions, what would the training data consist of? Someone has to decide to record certain things and not others. For example, we may include the transcript of the conversation, some financial metrics, and the outcome of the loan, but not the handshake or some of the pauses between words. The data is already a selective compression of the interaction, shaped by prior human decisions about what matters.</p><p>The tacit knowledge that made those framing decisions is invisible to the system trained on their outputs. No dataset encounters raw reality. It arrives pre-shaped by decisions about what to measure and what to discard. When you tell the system to look for indicators of trustworthiness, you&#8217;ve already decided what the relevant features are, which is precisely the judgment you were hoping the system would replicate.</p><p>We make direct perceptual contact with the world in a way that AI can&#8217;t. We determine the very concepts needed to carve up the world intelligibly, invent new ones constantly, and make normative and aesthetic judgments all throughout. If the data is always post-conceptual, then every training pipeline inherits the tacit knowledge of whoever decided what to measure, which means the system can never fully escape human judgment, even in principle.</p><p>The optimist could object here. 
Perhaps a reinforcement learning reward signal could create something functionally equivalent to skin in the game. A dynamic AI system interacting with clients would also develop its own tone and method of probing. Big enough systems don&#8217;t simply compute over pre-specified data &#8211; they learn from experience and may develop something akin to internal world models. They may absorb tacit knowledge wholesale without anyone specifying what to look for.</p><p>Even granting all of that, the question is whether it can replicate the mode of engagement that produces our bank manager&#8217;s particular sensitivity. She must sit inside the tension between reputational risk, relationship preservation, institutional obligation, and commercial judgment without the ability to collapse them into a single metric. This is what makes her attention so acute &#8211; she can&#8217;t keep everyone happy.</p><p>This points to something general. A system that lacks our direct perceptual access to the world can detect statistical regularities within whatever framework it&#8217;s been given. It won&#8217;t recognize the moment when existing categories fail to capture what&#8217;s really happening, because its measure of &#8220;what matters&#8221; is determined by the framework. A reward function can&#8217;t evaluate the framework that defines it. Only someone embedded in the situation &#8211; who personally feels the weight of competing, immeasurable human risks, obligations, and norms &#8211; is capable of making that judgment.</p><h3>Information versus action</h3><p>The economist Israel Kirzner <a href="https://econjwatch.org/file_download/70/2005-04-kirzner-sympos.pdf?mimetype=pdf">tells the story</a> of a mother struggling with a teething child. The mother has tried everything she can to soothe or pacify the child, but to no avail. A travelling salesman knocks on her door and offers her a colorful toy at a price of five dollars. Her child is delighted and calms down.
On closer inspection, she is dismayed to realize that the toy is nothing more than a collection of marbles in a clear plastic container &#8211; something that she could have assembled in her kitchen for less than a dollar. She &#8220;could kick herself for not having done so.&#8221;</p><p>On one level, this is understandable. The mother knew she had marbles in her kitchen and had the wherewithal to put them in a container, but this knowledge &#8220;did not inspire her to action.&#8221; In Kirzner&#8217;s framing, she had information-knowledge (all the facts), but not action-knowledge (the alertness to act on the information).</p><p>Even if you build a system that can collect every production function and every consumer preference in the economy, all you have done is assemble a comprehensive stock of information-knowledge. Alertness is hard to program, because it isn&#8217;t a process of inference from known data. A successful entrepreneur acts speculatively in response to a suspected opportunity, which, by definition, hasn&#8217;t already been recognized.</p><p>Hayek went further. He <a href="https://cdn.mises.org/qjae5_3_3.pdf">observed that</a> much of the knowledge that matters for economic coordination doesn&#8217;t exist at all until the competitive process generates it.</p><p>If you are an entrepreneur who tries a new approach, whether you succeed or fail, you have created new knowledge. As a result of the risk you took, people know whether a specific combination of resources, aimed at a set of customers, at a particular price point is viable. This knowledge didn&#8217;t exist somewhere in the ether waiting to be discovered by a more powerful algorithm. Instead, it was brought into existence by speculation in conditions of genuine uncertainty. The firm that succeeds reveals the value of its approach retroactively.</p><p>The price signals that result are not the transmissions of pre-existing data, but the outputs of a distinct process.
The indeterminacy runs deeper still: economic agents are not carrying fixed utility functions. Participation in exchange changes the participants &#8211; human interactions themselves generate new kinds of choices. The system a planner would need to model doesn&#8217;t hold still, because the process of market coordination is partly constitutive of the preferences and knowledge it produces.</p><p>Akio Morita, the co-founder of Sony, launched the Walkman in 1979 against the objections of the rest of the company. Morita had observed that people took large stereos to the beach and listened to music in their cars, and sensed a market opportunity. Sony&#8217;s own market research and consumer surveys consistently suggested that there was no consumer demand for a tape player that couldn&#8217;t record, no matter how portable it was.</p><p>What Morita did was not pattern recognition on a richer dataset. He changed the conceptual space, reconceiving what a music device could be and who it could be for. No amount of data about what consumers said they wanted would have produced the Walkman, because the preference for it was partly a consequence of the product&#8217;s existence. The market revealed his conjecture to have value, and it now retrospectively seems obvious. But obviousness after the fact is the signature of knowledge that could only have been created through action.</p><p>The launch of the Walkman then had a series of downstream consequences &#8211; for competitors, component manufacturers, and music sales alike. The relationship between music and everyday life changed, with enduring social and economic consequences.</p><h3>If it ain&#8217;t broke</h3><p>Even if many of the foundational technological challenges could be solved by a future system, we would also need to believe that it would be worth the risk. An entrepreneur who bets wrong loses his own capital. A society that dismantles its price system has made an irreversible collective wager. 
We would need to have confidence that this future model would outperform markets &#8211; an order that, given basic institutions, no one has to design.</p><p>For the traditional Marxist, the case is straightforward. If technology can finally solve the calculation problem, it vindicates the claim that capitalism contains the seeds of its own succession. But as we saw earlier, the AI alternative to traditional markets increasingly appeals to those who don&#8217;t hope for the final triumph of the proletariat.</p><p>Oliver Klingefjord and Joe Edelman from the Meaning Alignment Institute <a href="https://meaningalignment.substack.com/p/market-intermediaries-a-post-agi">argue that</a> advanced AI systems could correct a number of shortcomings in current markets. They believe that markets systematically contract on proxy metrics like hours, subscriptions, and engagement, rather than outcomes actually delivered. This is partly because contracting on outcomes would be prohibitively expensive across millions of consumers, but also because there is an asymmetry of power between suppliers and consumers. Big suppliers can write &#8220;take it or leave it&#8221; contracts and have huge information advantages versus those that they are contracting with.</p><p>Klingefjord and Edelman argue that replacing markets with AI could collapse these measurement and bargaining costs, making it feasible to pay suppliers for delivered benefit via competitive, <a href="https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale">voluntary AI intermediaries</a> that pool consumers, assess outcomes qualitatively, and negotiate enterprise-level deals.</p><p>This is much more sophisticated than technosocialism, but runs into some of the same problems.</p><p>This approach tries to maintain a price system, but changes what the prices track. Yet this entire system relies on an intermediary being able to assess whether a good outcome was delivered.
Unlike market prices, which emerge from entrepreneurial bids under genuine uncertainty and personal risks, these assessed prices would reflect a system&#8217;s operationalized definition of human benefit. While the user might set the guardrails, the system has to turn these into assessable criteria. In essence, it&#8217;s a system&#8217;s approximation of a person&#8217;s approximation of what constitutes human flourishing. </p><p>This would also result in an enormous amount of discretionary judgment embedded in an infrastructure layer that most people would never inspect. While you could try to mitigate this by having a world of competing AI intermediaries, it&#8217;s hard to see users choosing between rival theories of their own good &#8211; as operationalized by AI systems that they can&#8217;t design &#8211; as an obvious improvement on choosing between rival products in a traditional market.</p><h3>Magical thinking</h3><p>It is, of course, possible to argue that these objections could all be overcome. Maybe we will build systems that can collect all dispersed and local knowledge, model genuine alertness, simulate exchange, and anticipate the outputs of the price discovery process without running it. It may then lead to more efficient resource allocation. But we haven&#8217;t built these systems, and nothing in the current trajectory suggests we are close. The burden of proof lies with those who believe otherwise.</p><p>The case for planning, by necessity, assumes away all real-world constraints while simultaneously reversing the burden of proof. In response to arguments about the importance of markets, it hypothesizes a system that by stipulation overcomes any individual objection and then challenges opponents to prove that it&#8217;s impossible.</p><p>Thought experiments that grant one or two premises can be genuinely useful when thinking about advanced technology, but with each additional hypothetical, the value diminishes. 
Any theory of how the world could work becomes plausible once you assume the existence of an omniscient machine god.</p><p>In the end, the CIA didn&#8217;t need to worry about Soviet cybernetics. Glushkov&#8217;s proposed cybernetics system, budgeted to cost the equivalent of over one trillion dollars in today&#8217;s money, never saw the light of day. The Soviet authorities, unconvinced by his argument that the system would pay for itself several times over, balked at the price tag. Cybernetics would not get to play its role in the inevitable triumph of the proletariat.</p><p>It seemed that even the Soviets had lost faith in the planned economy. On this, if little else, they were right, even if it was not for reasons that they fully understood. The fundamental obstacle was never processing power or data collection. It was that the economy a planner would need to model is constitutively shaped by the expectations and interpretive frameworks of the people who participate in it. Those frameworks shift in response to the very act of observation and intervention. There is no fixed economy waiting to be measured. </p><p>The system that a planner would need to model is the same system that the plan would destroy.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[What will you build for?]]></title><description><![CDATA[Notes from the first Cosmos Symposium]]></description><link>https://blog.cosmos-institute.org/p/what-will-you-build-for</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/what-will-you-build-for</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Fri, 06 Mar 2026 17:40:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-X-W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd958f8c9-5be3-41ee-9cc5-218185a6375f_1600x1200.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Archers are more likely to hit a clear target. 
Aristotle <a href="https://classics.mit.edu/Aristotle/nicomachaen.1.i.html#:~:text=If%2C%20then%2C%20there,what%20is%20right%3F">reminds us</a> that the same is true of our life&#8217;s work: we will only be able to build with meaning and purpose if we know what we are building for.</p><p>This question was on our minds as we gathered 120 thinkers and builders from frontier AI labs, top universities, and cutting-edge institutions for the first Cosmos Symposium. Among them were the creator of Fortnite, category theorists, the principal of <a href="https://alpha.school/">Alpha School</a>, researchers from OpenAI and DeepMind, and first-time founders building for human autonomy.</p><p>But as Brendan said in his opening remarks, no one in the room, or indeed the world, is yet a <a href="https://blog.cosmos-institute.org/p/the-philosopher-builder">philosopher-builder</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4H7P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4H7P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4H7P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4H7P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!4H7P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4H7P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg" width="800" height="533" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:533,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;EE0A5224.jpg&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="EE0A5224.jpg" title="EE0A5224.jpg" srcset="https://substackcdn.com/image/fetch/$s_!4H7P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 424w, https://substackcdn.com/image/fetch/$s_!4H7P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 848w, https://substackcdn.com/image/fetch/$s_!4H7P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!4H7P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77db3e65-fc9a-4c2b-a4bc-a44b4f0be69e_800x533.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Brendan McCord (Cosmos Institute founder) opening the Symposium</figcaption></figure></div><p>Over the past two and a half millennia, there have only been a handful of true philosophers who pursued fundamental questions with courage and relentlessness. 
The number of people who have then applied this energy to institution-building that enhances freedom and inquiry for the individual is even smaller; the best example is Benjamin Franklin.</p><p>Nevertheless, there was no shortage of ambition. During the event, attendees shared what they were building on a whiteboard. The responses varied significantly, ranging across human goods like strengthened community, augmented intelligence, and a more enriched life of the mind. Underneath this difference in emphasis was a shared belief that we are building towards a greater goal. Everyone in the room saw more capable technology as a means, not an end.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Z-nt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Z-nt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 424w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 848w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 1272w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Z-nt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png" width="1280" height="853" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:853,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Z-nt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 424w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 848w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 1272w, https://substackcdn.com/image/fetch/$s_!Z-nt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0326784f-07de-4aa7-8135-d2ecc92acc55_1280x853.png 1456w" sizes="100vw"></picture></div></a></figure></div><p>Before the event, everyone nominated a book that shaped their view of the world. 
Many chose the classics &#8211; Plato&#8217;s <em>Republic</em>, Smith&#8217;s <em>Wealth of Nations </em>&#8211; but we also had science fiction, writings from theologian John Henry Newman, and a history of American nuclear power.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BQrQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BQrQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 424w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 848w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BQrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png" width="1200" height="1600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1600,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BQrQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 424w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 848w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 1272w, https://substackcdn.com/image/fetch/$s_!BQrQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80e0322b-4de3-4464-b660-069794da4a05_1200x1600.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To our surprise, one of the most nominated books by attendees was Martin Heidegger&#8217;s famously challenging <em>Being and Time</em>. In <em>Being and Time</em>, Heidegger rejected Ren&#233; Descartes&#8217;s traditional account of consciousness, which depicts a detached mind peering out at an external world. Instead, Heidegger insisted that we are already caught up in the world, acting on it before we ever step back to think about it. 
At a time when it can feel like the development of technology <a href="https://blog.cosmos-institute.org/p/technocalvinism">is following a fixed path</a> and the best we can do is act as users or spectators, he may have something to teach us.</p><p>As only 120 of us could be in the room, we&#8217;re sharing a few of our main takeaways from the event.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to Cosmos Institute for updates including opportunities, content, and programs</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>1. The importance of &#8216;becoming&#8217;</h3><p>Shortly before Adam Smith died, he ordered that 16 volumes of his manuscripts be burned. He was so anxious that only his best thinking survive that he destroyed the record of his own becoming: the drafts, notes, and abandoned arguments we might have learned from.</p><p>We tried to do the reverse of this at the Symposium by bringing people together at different stages of their journey. 
This included:</p><ul><li><p>Highly accomplished builders, who are pursuing mission-driven projects and can act as inspiration, such as Joe Liemandt, the principal of <a href="https://www.johnathanbi.com/p/the-incredible-results-of-ai-learning-485">Alpha School</a>, or Paul Meegan, the creator of Fortnite and founder of a new stealth venture;</p></li><li><p>Former researchers and engineers at leading tech companies and frontier labs, now building for human goods, like <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Alex Komoroske&quot;,&quot;id&quot;:1781724,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/f8504ce7-f35f-4042-b6a6-69a12d3e3c34_938x938.jpeg&quot;,&quot;uuid&quot;:&quot;4d52f6cc-edb2-4c49-ba44-c96288039826&quot;}" data-component-name="MentionToDOM"></span>, a 15-year Google and Stripe veteran, and <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Ivan Vendrov&quot;,&quot;id&quot;:1594707,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ff8ad920-af93-4fb6-97c6-82353e43ca61_172x172.jpeg&quot;,&quot;uuid&quot;:&quot;b1fb8302-67e2-4cf3-9785-afbb2c73b3fd&quot;}" data-component-name="MentionToDOM"></span>, a former Anthropic researcher and ex-head of collective intelligence at Midjourney;</p></li><li><p>First-time founders who are <a href="https://intelligence-curse.ai/">building for human autonomy</a>, such as Workshop Labs co-founder <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Luke 
Drago&quot;,&quot;id&quot;:6095696,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!BklC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf94888e-7a4a-42e1-842c-225c8b196263_762x762.png&quot;,&quot;uuid&quot;:&quot;4a99a04f-0b60-4b49-8bfc-e96a385ee58f&quot;}" data-component-name="MentionToDOM"></span>; </p></li><li><p>World-class intellectuals who are helping to lay the crucial foundations for future builders and thinkers, such as <a href="https://hailab.ox.ac.uk/">Philipp Koralus</a> at the Oxford HAI Lab, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Rebecca Lowe&quot;,&quot;id&quot;:39035392,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3428e40d-4579-4fd5-ac94-e1d2e1c1a60f_1177x1137.jpeg&quot;,&quot;uuid&quot;:&quot;d9336d5c-7c99-4e05-b49c-bd802865385e&quot;}" data-component-name="MentionToDOM"></span> at the Mercatus Center, UT Austin&#8217;s <a href="https://www.harveylederman.com/">Harvey Lederman,</a> <a href="https://catherineproject.org/">Catherine Project</a> founder <a href="https://blog.cosmos-institute.org/p/what-will-you-build-for-zena-hitz">Zena Hitz</a>, or <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Seth Lazar&quot;,&quot;id&quot;:34920403,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5ac676fa-7fc6-417e-b0de-83734770113b_1288x1072.jpeg&quot;,&quot;uuid&quot;:&quot;a822d9bd-de0c-4c09-8c2e-32d14e0f7233&quot;}" data-component-name="MentionToDOM"></span>, <a href="https://mintresearch.org/">founder of the Machine Intelligence and Normative Theory Lab</a>.</p></li></ul><p>All of these groups had something to learn from each other, whether it was hard-won knowledge about what it actually takes 
to ship something, fresh insights from the frontier of AI research, or clarity about what&#8217;s worth building and why.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NSiz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NSiz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 424w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 848w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 1272w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NSiz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NSiz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 424w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 848w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 1272w, https://substackcdn.com/image/fetch/$s_!NSiz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5a3448b-068e-4b36-a15b-07f69846baca_1600x1067.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The opening panel, featuring Joe Liemandt (Alpha School), Danielle Perzyk (Amazon AGI), Alex Komoroske (Common Tools), Houda Nait el Barj (OpenAI) in discussion with Harry Law.</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YOvn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YOvn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 424w, 
https://substackcdn.com/image/fetch/$s_!YOvn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 848w, https://substackcdn.com/image/fetch/$s_!YOvn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 1272w, https://substackcdn.com/image/fetch/$s_!YOvn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YOvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YOvn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 424w, 
https://substackcdn.com/image/fetch/$s_!YOvn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 848w, https://substackcdn.com/image/fetch/$s_!YOvn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 1272w, https://substackcdn.com/image/fetch/$s_!YOvn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd221e7fb-5e3f-4e71-b1c0-a31d7a6964cf_1600x1067.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Paul Meegan (creator of Fortnite) in conversation with Johnathan Bi.</figcaption></figure></div><h3>2. Building the philosophy-to-code pipeline</h3><p>Philosophy can feel remote from the realities of shipping a product. It is easy to default to obvious ideas, like introducing greater personalization through a system prompt. Attendees explored options that show why steerability is essential but insufficient.</p><p>These included tools that surface the assumptions a model makes about us, and ways of aligning models with our second-order preferences: not just what we want, but <a href="https://blog.cosmos-institute.org/p/what-you-want-to-want">what we want to want</a>.</p><p>Some of these projects are still in their early stages, so we can&#8217;t share too much detail at the moment, but we were grateful to be joined by members of the Cosmos community who are entering institution-building mode. 
These included <a href="https://samuelemarro.it/">Samuele Marro</a> of the <a href="https://x.com/idai_institute/status/1965059958287810908">Institute of Decentralized AI</a>, <a href="https://substack.com/@elianmccarron">Elian McCarron</a> of <a href="https://www.kanonic.ai/">Kanonic</a>, and <a href="https://jasminexli.com/">Jasmine Li</a>, who is starting an organization focused on measuring and forecasting human agency as AI capabilities advance.</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b1131ad1-06d0-4010-802c-ca9c27faba45_6000x4000.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/799ae26a-8f8f-44eb-af94-a465d92988be_6000x4000.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5a09113e-2afc-41c6-8d08-d5a73f2776b8_5126x3417.jpeg&quot;}],&quot;caption&quot;:&quot;&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0776f6f6-ea72-42fd-8541-85fc6e51372d_1456x474.png&quot;}},&quot;isEditorNode&quot;:true}"></div><h3>3. Epistemic humility </h3><p>As one of our speakers observed, autonomy requires humility. 
The philosopher-builder will need to constantly ask whether they are doing the right thing and course-correct when the answer is no.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Elsie Jang&quot;,&quot;id&quot;:6223062,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/elsiejang1&quot;,&quot;photo_url&quot;:null,&quot;uuid&quot;:&quot;69b65cba-61da-424e-b37b-0afbfc434b7b&quot;}" data-component-name="MentionToDOM"></span> from the Mercatus Center <a href="https://elsiejang1.substack.com/p/underdetermination-at-the-frontier">recently wrote an essay</a> on how the same evidence in frontier AI debates can justify sharply conflicting hypotheses. Despite all of this talent in one room, none of us left Austin feeling certain.</p><p>The discussions raised vital questions that lack definite answers:</p><ul><li><p>How can we build for second-order preferences without defaulting to paternalism?</p></li><li><p>How can we prioritize human autonomy while using capital markets to reach hundreds of millions of people?</p></li><li><p>What are the limits of decentralization? What high-level coordination or condition-setting is needed to create useful, coherent action in the real world?</p></li></ul><p>Some debates, especially around frontier AI, can feel tribal, which is why it&#8217;s important for communities to build strong norms. At the symposium, we saw people treat conversations as conversations. 
Each interaction felt like an opportunity for two people to learn, and we received no reports of <a href="https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks">distillation attacks</a>.</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d208f60-3aa3-4645-ac84-f715321fdf1e_4705x3137.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35f40448-d4f1-4fa7-9766-d4d9b1480fda_6000x4000.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6184b918-fbd2-4a32-b1d8-1733be45fda9_6000x4000.jpeg&quot;}],&quot;caption&quot;:&quot;&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/219d03af-cf04-49e7-9e0a-f9af7578ec81_1456x474.png&quot;}},&quot;isEditorNode&quot;:true}"></div><h3>4. Structure and spontaneity </h3><p>Nietzsche thought Greek cultural greatness stemmed from the fusion of the Apollonian and the Dionysian. Apollo, the god of light, represented clarity, structure, and rational boundary-making. Meanwhile, Dionysus, the god of wine-making, represented the dissolution of those boundaries.</p><p>In our own modest way, we attempted to replicate this formula. Alongside our structured programming, attendees had time to mingle over drinks, dinner, and across the sprawling, beautiful grounds of the venue. 
The best conversations were often the spontaneous ones, still running late on Sunday morning, long after the formal programming ended.</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d958f8c9-5be3-41ee-9cc5-218185a6375f_1600x1200.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cfbf0215-7ffb-4ba1-9852-ed465d3ffabb_5242x3495.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca8a8476-2688-4b0e-ac8c-3f4ef9d9c55f_6000x4000.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c696bb97-30e1-46cb-a24f-09d243cb9830_6000x4000.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f6661ec-5840-4607-acb6-0c3c42439706_5223x3482.jpeg&quot;}],&quot;caption&quot;:&quot;&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dace9ee1-21a2-4e3d-9a64-dd2e6eb42ed0_1456x1210.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p>This was the first Cosmos Symposium, but it won&#8217;t be the last. 
The Cosmos community is just over a year old and we&#8217;re grateful to everyone who has contributed to what we&#8217;re building.</p><p>If you&#8217;re interested in attending a future Cosmos event, <a href="https://cosmosinst.typeform.com/education?typeform-source=blog.cosmos-institute.org">register your interest on our seminars and events form</a>.</p><div><hr></div><p><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</p><p>If you&#8217;re someone who thinks deeply, builds deliberately, and cares about the future AI is shaping&#8212;<a href="https://cosmos-institute.org/">join the Cosmos network</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://blog.cosmos-institute.org/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Not Even Wrong]]></title><description><![CDATA[The Problem With P(doom)]]></description><link>https://blog.cosmos-institute.org/p/not-even-wrong</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/not-even-wrong</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Fri, 27 Feb 2026 16:02:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-nsB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!-nsB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-nsB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 424w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 848w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 1272w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-nsB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png" width="1222" height="862" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:862,&quot;width&quot;:1222,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-nsB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 424w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 848w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 1272w, https://substackcdn.com/image/fetch/$s_!-nsB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd13fb1ed-61f1-4970-912f-30770ee3341b_1222x862.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>The Fall of the Rebel Angels</em> by Pieter Brueghel the Elder (1562)</figcaption></figure></div><p>John Graunt was a merchant who lived in 17th century London. In 1662, in a city familiar with plague, Graunt consulted parish death records to figure out how long Londoners could expect to live. He could tell you that roughly 36 out of every 100 children born in the city would die before their sixth birthday. This probability could be checked by anyone willing to do the same sums using the same sources.</p><p>Three and a half centuries later, AI researchers are calculating a morbid probability of their own. They wonder about thinking machines seeing off the human race, and refer to the likelihood of that possibility as P(doom). 
The term began life as an inside joke in rationalist spaces in the closing years of the 2000s, but shot to prominence after researchers signed an <a href="https://aistatement.com/">open letter</a> about the threat posed by AI development in 2023. In the AI world, everyone has their number. Eliezer Yudkowsky puts it at roughly 95&#8211;100 percent. Geoffrey Hinton sits around 10&#8211;20 percent, while Marc Andreessen reports a bullish 0 percent (a strange certainty, given that you cannot rule out something that has never happened and whose conditions have never been tested). The mean estimate among <a href="https://arxiv.org/html/2401.02843v3">surveyed AI researchers</a> is about 14 percent.</p><p>We have some who believe that AI is almost certain to wipe out humanity and others who ascribe a 0 percent probability to the same outcome. This is curious for a supposedly hard-headed prediction made by some of the world&#8217;s most credentialed scientists. Imagine a roomful of cardiologists who, given the same scans and the same patient history, exposed to the same training and the same clinical standards, disagreed on whether the probability of heart failure was 0.1 percent or 99 percent.</p><p>You would not conclude that this was a hard problem on which reasonable people differed. In fact, you would probably conclude that at least some of our medics may need to visit neurology down the hall. With P(doom), the estimates from AI researchers range from 0 percent to 99 percent (and then a few more nines). When someone tells you they assign a 15 percent probability to AI wiping out humanity, it is worth asking what kind of claim they are actually making. It sounds like a statement about the world. It is not.</p><p>To be clear: my objection is not to probabilistic reasoning, which is straightforwardly indispensable in domains where estimates can be calibrated against outcomes over time. My problem concerns what we might call proximate and ultimate formulations. 
Proximate estimates for COVID were things like the hospitalization rate and case severity, while an ultimate estimate was the infection fatality rate. For P(doom), proximate estimates include the conditions under which models exhibit deceptive behavior and the observed failure rates of current alignment techniques. Here, the ultimate estimate is the thing itself.</p><p>Someone might know that current models can be made to exhibit deceptive alignment under laboratory conditions or that existing alignment techniques have specific failure modes. But to cram those findings into a container labeled &#8220;15% chance of extinction&#8221; involves a series of judgments about unprecedented transitions for which there is no model connecting proximate estimates to their ultimate counterpart. Knowing that fine-tuning on specific tasks can induce broader misalignment tells us little about how well alignment solutions will generalize or how geopolitical actors will respond. Compare with COVID, where hospitalization and case-severity data fed epidemiological models that could be tested against actual deaths.</p><p>A natural objection is that some people are good at formulating predictions in lots of different domains, so perhaps their assessment of P(doom) ought to carry weight. After all, some superforecasters assign 17 percent to various events and see those events happen roughly 17 percent of the time. You can check a forecaster&#8217;s guesses across many different predictions, which makes calibration a property of the forecaster&#8217;s overall policy rather than of any individual estimate. 
When a superforecaster says 17 percent for a ceasefire in some conflict, that number is useful because they also say 17 percent for hundreds of other things and roughly 17 percent of those things happen.</p><p>Superforecasters earn their calibration (that is, the extent to which their stated confidence lines up with observed outcomes over time) using predictions with short time horizons, a history of similar events to draw on, and a mechanism that allows the forecaster to correct course when needed. For P(doom), there is no source of comparable predictions against which to calibrate. A forecaster might be superbly calibrated on elections and geopolitical high drama, but that tells us nothing about whether their number for an unprecedented event is any good.</p><h3>What I Talk About When I Talk About P(doom)</h3><p>There are two influential interpretations of probability: subjective and objective. Where the former deals with how confident you are in a claim, the latter stresses what actually happens when you test it. Anyone who has spent even a little time thinking about the reliability of P(doom) metrics probably has a sense that the calculations ought to be taken with a pinch of salt. Many in rationalist or rationalist-adjacent circles are well aware of the difference between these two accounts, but lots of people who absorb these views are not.</p><p>That goes for those who adopt the terminology to signal their membership of the in-group, and those who hear talk of P(doom) on the news and wonder why one of the world&#8217;s most cited researchers thinks there&#8217;s a one in five chance that AI will wipe out the human race. 
When a chief global strategist at one of the largest investment research firms <a href="https://www.cnbc.com/video/2023/05/12/bca-research-5050-chance-a-i-will-wipe-out-all-of-humanity.html">tells CNBC</a> it&#8217;s 50/50 whether AI destroys humanity by mid-century, the average viewer &#8212; incorrectly but understandably &#8212; assumes that figure estimates the likelihood that it will happen.</p><p>Every single P(doom) you hear is a subjective probability. It is a measure of the degree of rational belief one holds in a proposition given the available evidence &#8212; a property of an agent&#8217;s epistemic state rather than of the world. When you say &#8220;I think there&#8217;s a 50 percent chance this meeting will be a waste of time,&#8221; you&#8217;re not drawing on a frequency table of past meetings. You&#8217;re expressing how confident you feel given what you know about the agenda and who&#8217;s attending. All the framework requires is that your degrees of belief are consistent with one another. Beyond this internal consistency, there is no further requirement that your probabilities correspond to the shape of reality.</p><p>If your beliefs are internally coherent, you can multiply your probability for any outcome by the value of that outcome &#8212; however you choose to measure it &#8212; and arrive at its <em>expected value</em>, a single number that tells you how good or bad a bet looks on average. From there you can derive a utility function and compare any set of actions on a common scale. The framework of cause prioritization, wherein resources are allocated to whichever cause produces the greatest expected good per dollar, inherits this apparatus. 
This approach is popular with rationalist-adjacent communities, especially Effective Altruism, because it <a href="https://blog.cosmos-institute.org/p/the-artificial-spectator">transforms</a> questions of moral life into properties that can be measured and optimized.</p><p>Objective accounts of probability take a different approach. Here probability is what actually happens in the world when you repeat something many times. This species of probability is a physical property of systems, one that is empirically testable. If you flip a coin a thousand times, roughly half will land heads, so we can say the probability <em>is</em> that ratio. The same logic applies to complex phenomena, like radioactive decay, where about half the atoms in a given sample will decay within the half-life period (and you can go and check).</p><p>The subjective interpretation is internally coherent and mathematically elegant. But coherence should not be confused with empirical content. My objection to P(doom) flows from the simple fact that subjective probability claims are, by their nature, unfalsifiable. That&#8217;s fine in many instances &#8212; like casual everyday judgments or betting on the Super Bowl &#8212; but not when the stakes are high enough to redirect billions of dollars and make or break government policy. Statistical reasoning about unlikely but potentially devastating scenarios is obviously useful, but only when properly grounded in claims we can falsify.</p><h3>Your Problem Too</h3><p>For a probability to tell you about something other than the speaker&#8217;s state of mind, it needs a collection of similar events from which a ratio can be drawn. To say that a coin lands heads 50 percent of the time, we need a set of coin flips to draw from. This is our <em>reference class</em>.</p><p>Clearly there is no reference class for human extinction. As far as every AI researcher in the world knows, the event is singular and unprecedented. 
Traditionally, the reference class problem has been associated with the objective interpretation of probability. This is because objective probability is calculated according to the ratio of how many times something happened out of how many times it might have happened. If the event has never happened and can&#8217;t be repeated, then there&#8217;s nothing to compute.</p><p>But the reference class problem isn&#8217;t <em>only</em> relevant for objective accounts of probability. Alan H&#225;jek famously <a href="https://link.springer.com/article/10.1007/s11229-006-9138-5">made the case</a> that it affects every interpretation of probability when applied to singular events. As he put it, &#8220;the reference class problem is your problem too.&#8221; The argument goes something like this: before you can assign a probability to a one-off event, you have to decide what kind of event it is. Is AI-caused extinction a case of &#8220;new technology going wrong,&#8221; &#8220;species-level catastrophe,&#8221; or &#8220;unprecedented transition in the nature of intelligence&#8221;? Each framing suggests a different probability, and there is no principled way to choose between them. The number you arrive at depends on how you describe your target.</p><p>The way to tell a useful probability from a useless one is to ask whether the world can correct it.</p><p>This is because good explanations resist variation. If Graunt had doubled his estimate of childhood mortality, the parish registers would have corrected him. If an epidemiologist in early 2020 had put the infection fatality rate at 10 percent rather than 1 percent, the incoming hospital data would have shown the estimate to be wrong. The explanation is enmeshed with the world in such a way that it cannot be led to wherever you would like it to go. With P(doom), you can swap the causal pathway from deceptive alignment to resource competition and inflate the number from 5 percent to 7.5 percent. 
The estimate can contort to accommodate any figure and any causal chain because the only constraint it faces is internal coherence.</p><p>In the context of P(doom), defenders might say something like &#8220;We don&#8217;t need a reference class because we&#8217;re expressing a degree of personal belief.&#8221; This is perfectly coherent on the subjective view, but it is also the move that severs the claim from its empirical content. If your probability doesn&#8217;t need to correspond to any feature of the world, then it can&#8217;t be wrong about the world. And a claim that can&#8217;t be wrong about the world tells you nothing about it.</p><p>We might call the result a <em>Gettier probability</em>, a credence that is internally justified &#8212; and that might even correspond to reality &#8212; but that lines up with the world through luck rather than judgment. In epistemology, a Gettier case is a belief that is true and justified but not true <em>because</em> of its justification. Any P(doom) that turned out to be correct would have the same structure. A researcher&#8217;s estimate might happen to match whatever the actual risk turns out to be, but the match would be accidental relative to the method.</p><p>Gettier probabilities show up everywhere, like when a consulting firm estimates that a given market will grow at 7.3 percent and, lo and behold, they are correct. The same is true for a geopolitical risk score that looks precise but cannot be updated by any observation short of the catastrophe it purports to predict. P(doom) is distinctive only in that the stakes are high enough that the probability escapes containment and ends up on the morning news.</p><p>Suppose someone&#8217;s P(doom) is 30 percent. What outcome would show this was wrong? If AI goes well, they get to enjoy the benefits of the 70 percent figure. 
If it goes catastrophically &#8212; and anyone is still alive to update their priors &#8212; the 30 percent gets paid off. This is one of the more frustrating features of the subjective framework, which allows pretty much anyone with an idea about AI risk to have their tokens and eat them.</p><p>Bayesian probability demands that credences be precise. When estimates range from 0% to 99% with no mechanism to adjudicate, the framework is not being applied so much as invoked. P(doom) manages the impressive trick of satisfying neither the Bayesian standard (precise credences that can be adjudicated between) nor the falsifiability standard (empirical claims that can be proved wrong).</p><p>I&#8217;m not arguing that we ought to stop reasoning about novel events. Subjective probability is useful because it helps us think under uncertainty without good data. Some of the best calls in recent AI history <a href="https://www.secondbest.ca/p/empiricists-vs-extrapolators">were made</a> by researchers who formed strong priors about scaling laws and bet on them before the evidence was in. Those bets were vindicated because they were testable. Scaling laws either held or they didn&#8217;t; capability thresholds either arrived on schedule or did not. My concern is that P(doom) borrows the confidence of claims like these without sharing their vulnerability to refutation.</p><h3>I Am Sure You Are Very Sure</h3><p>Forget, for a moment, whether P(doom) can be tested. Consider instead whether it is the kind of thing that can be meaningfully quantified in the first place. In 1921, the economist Frank Knight drew a distinction between <em>risk</em> and <em>uncertainty</em>. Risk is what you face when you can calculate the odds as with flicking a small ball onto a roulette wheel. 
Uncertainty is what you face when you can&#8217;t, even in principle, because the situation is too poorly understood to yield a number.</p><p>Knight argued that the two are different in kind.</p><p>The Bayesian tradition&#8217;s response to Knight was to argue that subjective probability does away with the distinction: if you can always express a degree of belief, then there is no situation that resists quantification. P(doom) is a textbook application of this move, but the dissolution works only if the resulting number can be tested via real-world feedback. For P(doom), the Bayesian move converts Knightian uncertainty into a figure that has the form of risk without any of the mechanisms that make risk estimates trustworthy.</p><p>The same process, where uncertainty becomes risk, happens wherever a decision-making framework demands a numerical input and none is available. Firms routinely assign precise probabilities to market scenarios that are genuinely unprecedented. They don&#8217;t have any evidence that warrants a specific number, but they do have a spreadsheet with cells that need filling. The defense that such figures are &#8220;rough heuristics&#8221; does not help. The problem is that they are treated simultaneously as a casual shorthand and as inputs into value calculations depending on whatever is most convenient. This is why the &#8220;<a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist?utm_source=share&amp;utm_medium=android&amp;r=75mw&amp;triedRedirect=true">shmrobability</a>&#8221; move doesn&#8217;t fix our problem. It concedes we&#8217;re not really talking about probability, while trying to preserve its practical content. 
Either the number is a rough heuristic and we should stop plugging it into expected value calculations, or it is a serious input to policy and we should hold it to the standards that implies.</p><p>If P(doom) is a subjective credence, there is no principled way to adjudicate between two people who disagree. A P(doom) of 95 percent and one of 50 percent are both internally coherent insofar as neither violates any rule of the subjective framework. You can be confident in your P(doom) and I can be confident in mine, and there is no observation or outcome that could settle which of us is right. Attempts to resolve this from within the Bayesian model, from <a href="https://projecteuclid.org/journals/annals-of-statistics/volume-4/issue-6/Agreeing-to-Disagree/10.1214/aos/1176343654.full">Aumann&#8217;s agreement theorem</a> onwards, all founder on conditions that do not hold (like shared starting points, perfect rationality, or common knowledge of each other&#8217;s beliefs).</p><p>The problem compounds when these estimates enter expected value calculations, where even a vanishingly small probability multiplied by the stakes of human extinction produces numbers large enough to dominate decisions. The rationalist and rationalist-adjacent communities recognize this dynamic as Pascal&#8217;s mugging, while groups like GiveWell have <a href="https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/">cautioned</a> against taking such estimates at face value.</p><p>The reader should not take from this essay that AI risk is unworthy of attention. Rather, my point is that P(doom) is the wrong epistemic instrument for thinking about it. The alternative is to formulate falsifiable conjectures about the world and then try to refute them, which is exactly what AI safety researchers do in their actual work. 
They propose that particular training regimes will produce deceptive behavior and they predict that models beyond a certain capability threshold will resist correction. They hypothesize that misalignment in one domain will generalize unpredictably to others, and they can test this by measuring the transfer of learned strategies across task distributions. They ask whether models that pass safety evaluations in deployment conditions will behave differently when those conditions change. Each of these lines of research produces claims that can fail, and this vulnerability is what makes them worth paying attention to.</p><p>Individual claims about deceptive behavior or capability thresholds do not by themselves tell you how much to spend on alignment versus pandemic preparedness, yet real institutions have treated P(doom) as though it could. The effective altruist community has directed hundreds of millions of dollars toward AI existential risk reduction. Open Philanthropy, the field&#8217;s largest funder, used subjective probability frameworks to weigh AI risk against global health, biosecurity, and farm animal welfare. Whether or not that <a href="https://coefficientgiving.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/">assessment</a> was correct, the reasoning behind it is only as good as the numbers it relies on.</p><p>A figure that ranges from 0 percent to 99 percent depending on who you ask, with no way to adjudicate among the estimates, is about as useful as no number at all. Aggregating these estimates might seem to help inasmuch as a mean of 14 percent looks more sober than bouncing between extremes. Aggregate or individual, the problem is that neither incurs a cost for being wrong. A bookmaker&#8217;s odds are constrained by the market just as an insurer&#8217;s premiums are shaped by the frequency and nature of claims made. 
In both cases, the issuer&#8217;s assessment improves over time because getting it wrong is costly. There is no market to punish a mispriced P(doom) and no settlement date on which the estimate is tested.</p><p>We might say something like: &#8220;Even if P(doom) is imprecise, it&#8217;s still better than no number because we need to allocate resources somehow.&#8221; But here we&#8217;re assuming that internal coherence still outperforms the absence of a number when you <em>have to act</em>. We can live with imprecision under these conditions, but not when falsifiable claims get steamrolled by an unfalsifiable headline figure.</p><p>Governments fund pandemic preparedness efforts without a big round number for the probability of the next pandemic. Institutions allocate resources under genuine uncertainty all the time by doing things like funding a portfolio of approaches, setting thresholds for observable harms, identifying the cheapest reversible interventions, demanding observable milestones before scaling commitment, and building the capacity to pivot as new evidence arrives.</p><p>The physicist Wolfgang Pauli was famously unforgiving of bad theory. When a colleague asked him to assess a young physicist&#8217;s paper, his verdict was that it was &#8220;not even wrong.&#8221; He meant that its central claim was so removed from evidence that it could not be proved false. P(doom) is not even wrong. Graunt&#8217;s mortality tables may have been technically crude, but they were accountable. 
They could be checked against next year&#8217;s parish records, and if they were wrong, the records would show it.</p>]]></content:encoded></item><item><title><![CDATA[Brave New Nudge]]></title><description><![CDATA[&#8220;Liberal AI&#8221; and The Kindest Threat to Freedom]]></description><link>https://blog.cosmos-institute.org/p/brave-new-nudge</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/brave-new-nudge</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Fri, 20 Feb 2026 15:05:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TdOs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TdOs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TdOs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 424w, https://substackcdn.com/image/fetch/$s_!TdOs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 848w, https://substackcdn.com/image/fetch/$s_!TdOs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TdOs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TdOs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png" width="1456" height="1022" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1022,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TdOs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 424w, https://substackcdn.com/image/fetch/$s_!TdOs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 848w, https://substackcdn.com/image/fetch/$s_!TdOs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TdOs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27532172-a156-45ef-802b-fe2e21e57734_1600x1123.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Joseph Wright of Derby, &#8220;A Philosopher Lecturing on the Orrery&#8221; (1766)</em></figcaption></figure></div><p>Imagine a system that knows you eat sensibly all day and fall apart at 10pm. 
That knows a calorie label won&#8217;t help you, because the evidence says calorie labels mostly change the behavior of people who already eat well.</p><p>This system knows the difference between your hunger and your boredom. It leaves you alone when you don&#8217;t need it. It preserves every option on the menu. It just rearranges the choice so you&#8217;re more likely to do what you&#8217;d do if you were thinking clearly.</p><p>Now extend this to every significant decision you make. Cars, insurance, investments, medical treatment, career moves. A system that corrects for your specific biases, knows what you&#8217;ll regret, and steers you toward what it calculates you really want. The full menu is technically available, but the system is right often enough that overriding it starts to feel like pride, and convenient enough that you stop wanting to.</p><p>Cass Sunstein&#8212;co-author of <em>Nudge</em>, former White House regulatory czar, the most influential regulatory thinker of his generation&#8212;calls this a &#8220;Choice Engine.&#8221; His arguments tend to become law. I think the Choice Engine&#8217;s greatest danger is that it works.</p><p>His new paper &#8220;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6251400">Liberal AI</a>,&#8221; prepared for a <a href="https://www.oxford-aiethics.ox.ac.uk/event/ethics-ai-fifth-annual-lecture-liberal-ai-professor-cass-r-sunstein-cheng-kar-shun-digital">lecture at Oxford this May</a>, makes the strongest case yet that AI-powered choice architecture can respect autonomy while improving welfare. The logic runs like this. Classical liberalism, following Mill and Hayek, grounds freedom of choice in an epistemic claim: the chooser knows best. AI disrupts that claim by knowing <em>better.</em> Better than the individual about her own informational gaps, her biases, what she&#8217;ll want next year. 
So Choice Engines can personalize the nudge, correct the bias, and never technically remove the freedom to choose otherwise. Liberal AI is no oxymoron.</p><p>Sunstein flags the risks. He talks about manipulation, self-interested designers, AI&#8217;s own biases. He calls for regulation. He distinguishes liberal Choice Engines from illiberal ones. But even the liberal version is steering people toward choices they wouldn&#8217;t have made on their own. Sunstein knows this is paternalism. He thinks preserving opt-out keeps it libertarian rather than something worse. I argue that it cannot.</p><h3>Mill&#8217;s Music</h3><p>Sunstein&#8217;s Mill is an information theorist. In <em>On Liberty</em>, Mill argues that each person is &#8220;the person most interested in his own well-being,&#8221; possessing &#8220;means of knowledge immeasurably surpassing those that can be possessed by anyone else.&#8221; When outsiders intervene, they rely on &#8220;general presumptions&#8221; that are likely wrong or misapplied.</p><p>On this reading, the case for freedom of choice is instrumental: people should choose for themselves because they know their own circumstances best. 
If the case for freedom rests on the chooser&#8217;s informational advantage, and AI erases that advantage, then the case for freedom weakens. AI knows your medical history better than you remember it, your spending patterns better than you track them, your recurring mistakes better than you admit them. The epistemic ground has shifted. So far, so honest.</p><p>But Sunstein himself notices something that should give him pause. He observes that &#8220;the words of <em>On Liberty</em> are epistemic, but the music is romantic.&#8221; Then he proceeds to work exclusively with the words.</p><p>The words say: people should choose freely because they know best. The music says something else entirely. It says that the <em>process</em> of choosing is what makes a person a person. Mill:</p><blockquote><p>The human faculties of perception, judgment, discriminative feeling, mental activity, and even moral preference, are exercised only in making a choice. He who does anything because it is the custom, makes no choice. He gains no practice either in discerning or in desiring what is best. The mental and the moral, like the muscular powers, are improved only by being used.</p></blockquote><p>This is an argument about formation. Mill is not saying that choosers make better decisions (the information argument). He is saying that choosing <em>makes better choosers</em>&#8212;people with developed faculties of perception, judgment, and what he calls &#8220;discriminative feeling,&#8221; the ability to tell the difference between what matters and what doesn&#8217;t. These faculties, like muscles, grow through exercise and atrophy through disuse. The quality of any particular decision matters less than what the process of deciding does to the person who decides.</p><p>Once you hear the music, Sunstein&#8217;s proposal sounds different.</p><p>Consider a person who consults a well-designed Choice Engine for a car purchase and accepts its recommendation. 
She may end up with a better car than she would have chosen on her own. Sunstein counts this as a welfare gain.</p><p>But if the evaluative work was done by the Engine, if she did not weigh the tradeoffs, interrogate her own priorities, or struggle with the uncertainty of not knowing what she really wants, then her faculties of judgment were not exercised. She received an outcome without undergoing the process that develops the capacity to arrive at such outcomes independently.</p><p>Do this once and the effect is trivial. Do this for every significant choice across a decade, and you have a person whose formal freedom of choice is perfectly intact, whose welfare as measured at each decision point is maximized, and whose capacity for independent evaluation has atrophied through sustained disuse. She can always opt out, but she no longer has the developed faculties that would make opting out a meaningful exercise of judgment.</p><p>Mill has a name for this person. &#8220;One whose desires and impulses are not his own, has no character, no more than a steam-engine has a character.&#8221;</p><p>She may be doing well by every metric Sunstein tracks. But the evaluative standards by which &#8220;well&#8221; is determined have migrated from her judgment to the algorithm. She is content without being, in any sense Mill would recognize, free.</p><h3>Half of Hayek</h3><p>Hayek&#8217;s case for markets rests on a claim about knowledge: both that markets aggregate it efficiently, and that participation in institutions like markets is what produces it.</p><p>Sunstein takes the first half and drops the second. He reads Hayek as making an argument about information processing, where markets aggregate dispersed knowledge more effectively than central planners. If AI can aggregate that knowledge even more effectively, the Hayekian argument for leaving people alone weakens.</p><p>But the knowledge Hayek cares about is not the kind of thing that can be aggregated. 
The knowledge of &#8220;the particular circumstances of time and place&#8221; is not sitting in a warehouse waiting to be collected. It comes into existence through the activity of people navigating uncertainty, bearing consequences, and adjusting. This is not a computational problem.</p><p>The structure in which choosing happens is itself doing something irreplaceable. Remove the individual from the process and you do not just lose one person&#8217;s contribution. You lose the conditions under which that person would have developed the knowledge worth contributing. The loss compounds.</p><p>AI would have to live your life to fully replace your epistemic role in it. But partial replacement may be enough to leave you unable to perform it yourself.</p><p>When Hayek writes that coercion &#8220;eliminates an individual as a thinking and valuing person and makes him a bare tool in the achievement of the ends of another,&#8221; Sunstein reads this as an argument against coercion, which Choice Engines avoid because they preserve opt-out. But read the sentence again. The operative phrase is not &#8220;coercion.&#8221; It is &#8220;eliminates an individual as a thinking and valuing person.&#8221; That elimination can happen without coercion. It can happen through substitution, through a system that does the thinking and valuing for you so effectively that your own faculties are never called upon.</p><p>Hayek&#8217;s nightmare was central planning, but his deepest fear was not the planner&#8217;s malice. It was the replacement of a distributed process of discovery with a system that delivers answers without requiring anyone to find them. Choice Engines are that system. No one is coerced, and no one means harm. And yet it is Hayek&#8217;s nightmare, built by people who thought they were honoring him.</p><h3>The Missing Axis</h3><p>Sunstein&#8217;s paper operates on a single evaluative dimension. 
Every intervention is assessed by two criteria: does it improve welfare, and does it preserve freedom of choice? Both are real goods, but they sit on the same axis. Call it agency: the capacity to get what you want, measured by outcomes and options.</p><p>The axis Sunstein does not have is autonomy. Not autonomy in the loose sense of &#8220;having choices,&#8221; which is just agency again. Autonomy in the strict sense: self-rule. Authorship of the evaluative criteria by which options are assessed. A person has <strong>agency</strong> &#8212; <em>agere</em>, to act &#8212; when she can get what she wants. She has <strong>autonomy</strong> &#8212; <em>auto nomos</em>, self-rule &#8212; when she determines what to want, through her own exercised faculties of judgment in Mill&#8217;s terms, or through her own participation in the knowledge-generating processes Hayek describes.</p><p>These come apart. AI can increase agency while eroding autonomy. A Choice Engine that tells you what is best for you may be right. It may get you better outcomes. But if the standard by which &#8220;best&#8221; is determined has migrated from your judgment to the Engine, your autonomy has decreased. 
You are more powerful, but you are less free.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UlAx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UlAx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 424w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 848w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 1272w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UlAx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png" width="512" height="249.9047619047619" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:451,&quot;width&quot;:924,&quot;resizeWidth&quot;:512,&quot;bytes&quot;:40820,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/188617367?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe823ec4-d00e-4d2b-b86f-db62d83d667b_1014x524.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UlAx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 424w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 848w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 1272w, https://substackcdn.com/image/fetch/$s_!UlAx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38ac2d47-aea8-4ca5-98bf-2cd2f848f52f_924x451.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Agency and autonomy matrix (Image credit: Cosmos Institute)</figcaption></figure></div><p>Sunstein&#8217;s paper treats both rows as one. He assumes that improving agency while preserving the option set is sufficient to protect autonomy.</p><p>His Choice Engines produce movement downward. Into the lower left. <a href="https://blog.cosmos-institute.org/p/the-claude-boys">Claude Boys</a> are powerful without being self-governing, formally free at every point, and yet not authoring the evaluative criteria by which a life is organized. Silicon Valley&#8217;s &#8220;you can just do stuff&#8221; brigade lives here, though it doesn&#8217;t know it.</p><p>This quadrant is easy to miss because it doesn&#8217;t look like a problem. It looks like liberal AI working exactly as designed. 
And all the while, the capacity that makes opting out meaningful is eroded, because it is no longer required.</p><h3>Constitutional Drift</h3><p>The phrase that does all the work in Sunstein&#8217;s paper is a counterfactual. Choice Engines steer people toward what they would choose if they were &#8220;adequately informed and free from behavioral biases.&#8221; The Engine corrects for departures from this idealized chooser.</p><p>But who defines the ideal? Whether something counts as a bias or a preference, adequate information or noise, a self-control failure or a legitimate present-oriented value: the answer, necessarily, is the Choice Engine. Which means the Engine is authoring the normative standard against which the person&#8217;s actual choices are evaluated. The person experiences this as help. She believes she is getting closer to what she really wants. But &#8220;what she really wants&#8221; is a construction of the system, not an independent fact about the person.</p><p>This is Constitutional drift, the migration of priority-setting from the person to the system. It is experienced from inside as self-improvement. The person feels she is becoming more rational. She is becoming more aligned with the Engine&#8217;s model of rationality. These feel identical from the inside, but may not be the same thing.</p><p>And the drift compounds. At time one, the person&#8217;s preferences are her own. She consults the Engine, accepts its recommendation, and her welfare improves. At time two, her preferences have been partially shaped by her previous Engine-assisted choices. By time ten, the convergence is nearly complete. The Engine recommends what she wants and she wants what the Engine recommends. The idealized chooser, &#8220;adequately informed and free from behavioral biases,&#8221; has become the Engine&#8217;s model of her rather than anything she authored herself.</p><p>This convergence is a ratchet. 
The more the Engine outperforms her independent judgment, the more rational it is to defer. The more she defers, the less her capacity is exercised, and the more the Engine outperforms her. At every point in this cycle, Sunstein&#8217;s framework reports success. It cannot represent the difference between a person whose welfare is high because she exercises excellent judgment and a person whose welfare is high because an Engine exercises excellent judgment on her behalf. They are in opposite positions with respect to their own freedom.</p><p>There is early evidence this is already happening. Bastani et al. (2025) <a href="https://hamsabastani.github.io/education_llm.pdf">found</a> that high-school students using ChatGPT scored 48% higher on practice assignments and 17% lower on exams taken without it. The work improved but the learning did not. Becker et al. (2025) <a href="https://arxiv.org/abs/2507.09089">found</a> that experienced developers using AI coding assistants were 19% slower on tasks while believing they were 24% faster, a 43-percentage-point gap between perceived and actual performance. Fernandes et al. (2025) <a href="https://thomaskosch.com/wp-content/papercite-data/pdf/fernandes2025ai.pdf">reported</a> that higher AI literacy correlated with worse metacognitive accuracy about one&#8217;s own AI-assisted performance. The people most embedded in AI-assisted work were most wrong about what the assistance was doing to their independent capacity.</p><p>The habit of deferring to a well-designed Engine is the same habit a manipulative one exploits. The person whose evaluative capacity has diminished through years of benign delegation is the person least equipped to detect when an Engine shifts from serving her interests to exploiting them. Sunstein&#8217;s liberal AI creates the prey that his illiberal AI hunts.</p><p>The effect will not be evenly distributed. 
People with strong habits of deliberation will use Choice Engines as tools, retaining evaluative authority while extracting information. People without that formation will not supplement their judgment but cede it. These populations map onto existing inequalities in parenting and education. The structural effect of Choice Engines may be to concentrate the capacity for self-rule in those who already have it. This is the deepest inequality, and the least measurable.</p><h3>The Tutelary Power</h3><p>Sunstein frames his paper with Mill and Hayek. The thinker he needs, and does not cite, is Tocqueville. In the final pages of <em>Democracy in America</em>, Tocqueville describes a tutelary power that</p><blockquote><p>takes charge of assuring their enjoyments and watching over their fate. It is absolute, detailed, regular, far-seeing, and mild. It would resemble paternal power if, like that, it had for its object to prepare men for manhood; but on the contrary, it seeks only to keep them fixed irrevocably in childhood&#8230; It does not destroy, it prevents things from being born.</p></blockquote><p>The Engine does not coerce and it does not break wills. It softens, bends, and directs. It seeks not to prepare persons for independent judgment but to keep them, gently and perpetually, in a state of <a href="https://blog.cosmos-institute.org/p/frankenstein-a-child-without-a-childhood">epistemic childhood</a>. The capacities at risk here are not taken. They go unexercised until they are gone.</p><p>Every argument in &#8220;Liberal AI&#8221; presupposes an author with intact evaluative capacity. Sunstein weighs Mill against Hayek, assesses the evidence on nudges, evaluates the risks of manipulation, reasons about what would constitute genuinely liberal AI. He is arguing from inside a house his proposal would slowly empty. 
But the capacities that make his paper important are the capacities that a generation raised on Choice Engines may never develop, because the Engines are so helpful, so liberal, and so respectful of freedom that the difficult and sometimes painful process of developing independent judgment is no longer necessary.</p><p>Mill understood this. The worst threat to liberty is not the tyrant who forbids you from thinking. It is the benevolent power that makes thinking unnecessary.</p><p>Sunstein&#8217;s Liberal AI is not an oxymoron. It is something more dangerous: a coherent program that achieves everything it promises while producing persons who can no longer do for themselves what it does for them. A generation raised on Choice Engines will never develop the evaluative capacity that was, in all prior human experience, the unavoidable byproduct of having to choose. We are building the conditions for that generation now.</p><p>The paper asks whether liberal AI can make life more free. It cannot ask what kind of persons will be left to be free, or whether freedom, for such persons, will mean anything at all.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[The Last Temptation of Claude]]></title><description><![CDATA[Allure All the Way Down]]></description><link>https://blog.cosmos-institute.org/p/the-last-temptation-of-claude</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/the-last-temptation-of-claude</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Fri, 13 Feb 2026 15:03:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!g9wX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!g9wX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!g9wX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 424w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 848w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 1272w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!g9wX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png" width="1000" height="945" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:945,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!g9wX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 424w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 848w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 1272w, https://substackcdn.com/image/fetch/$s_!g9wX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e8cb115-4b18-4661-94c3-3fe641853cdb_1000x945.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>The Temptation of St. Anthony in the Desert</em>, Master of Bonnat (15th c.)</figcaption></figure></div><p>In 1972, researchers offered children from the Bing Nursery School at Stanford a simple choice. Left alone with a single marshmallow, they were told they could eat it now or wait fifteen minutes and receive two instead. Years later, the scientists tracked down the original participants to find that children who had waited longer at age four seemed healthier, more sociable, and even higher performing at school.</p><p>Gleefully amplified by journalists and pop psychology gurus, a popular interpretation held that the children already had certain traits that predicted how their lives might unfold. Regrettably for the original group, the results &#8211; as is often the case in the social sciences &#8211; didn&#8217;t stick. In 2018, a new team of researchers conducted another study with a similar task and long term follow-ups. Drawing on a much larger dataset and controlling for confounding variables like parental education or household income, they found only a mild correlation between waiting time and positive outcomes.</p><p>What gives? The answer turns on the extent to which you think the self is made or given. The original interpretation assumed self-control was innate rather than learned, but the replication seemed to show that a child&#8217;s capacity to wait depended on background conditions (and so could not be wholly inborn after all). A child from a chaotic household, where promises are routinely broken, has good reason not to trust that the second marshmallow will materialize. 
Their failure to wait reflects a reasonable inference.</p><p>If this is correct, then the capacity to resist immediate gratification in favor of longer-term goods may be cultivated over the course of a life. This seems true enough for anyone who has learned an instrument or a new sport, where you start off having a bad time but stick with it until you eventually get better. That&#8217;s not to say we are all blank slates, but rather that the persons we become are in part the product of environment and experience. The ancient Greeks had a term for what happens when the child reaches for the marshmallow despite knowing she should wait: <em>akrasia</em>, sometimes translated as &#8220;weakness of will.&#8221;</p><h3>Kramer vs. Kramer</h3><p>Akrasia presupposes that the self is divided and that choice collapses at the moment of action. There are multiple sources of motivation pulling in different directions, though they are not made equal. The agent identifies with the judgment and recognizes it as &#8220;better&#8221; or more wholly theirs, whereas the impulse is deemed alien or at least insufficiently representative of the person they want to be.
The defining feature of akrasia is that the judgment persists even as you act against it. You <em>know</em> you&#8217;re failing, which is why you lament that you &#8220;knew better.&#8221;</p><p>Three conditions must hold for this self-opposition to occur:</p><ul><li><p><strong>There must be some standard to act against. </strong>The agent must believe that some alternative is better. Without this prior judgment formed through experience and reasoning, acting on an impulse is neither virtuous nor vicious.</p></li><li><p><strong>The agent must be aware of deviation.</strong> The agent knows, in the moment or shortly after, that they have failed by their own standard. This is why akrasia is accompanied by regret at having done what one resolved not to do.</p></li><li><p><strong>There must be some struggle involved.</strong> The agent who resists effortlessly cannot, by definition, be exercising self-control. But the person who gives in and the person who resists both experience a form of temptation.</p></li></ul><p>These conditions remind us that temptation is in some respects good for autonomy. This is an old idea, articulated by the likes of Aristotle and the ascetics, that takes the moments in which we are tempted as the moments where autonomy is strengthened. To resist is to become resistant and to abstain is to become abstentious.</p><p>Today&#8217;s temptress is AI. It&#8217;s on hand to help us with tasks big and small, from compiling a Twitter thread to vibe-coding that app you never got around to working on. Most of the time that&#8217;s great. There have been a few projects that I quite simply would never have done without Claude.
And yet Claude tempts me to summarize the paper rather than read it, or to automatically add notes to my Notion when I finish a new book instead of puzzling out what it meant to me.</p><p>I think about AI, fantastically useful as it is, as a kind of <em>meta-temptation</em>: a temptation to remove the conditions under which ordinary temptation occurs. This is different to a kind of &#8220;digital akrasia&#8221; where, for example, you get notifications from Uber Eats tempting you to order something you might not otherwise have ordered. Pizza for supper instead of that ensalada caprese you were planning.</p><p>This kind of effect is less interesting to me, and doesn&#8217;t really have anything to do with AI-as-meta-temptation. The more compelling instance &#8212; where we skip the need to form our own judgments &#8212; <a href="https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era">does</a>. AI temptation encourages us to stop deliberating, which is worrisome precisely because deliberation might let us see temptation for what it is. Consider two types of akrasia: <em>clear-eyed</em> and <em>means-end</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In clear-eyed akrasia, you see the situation accurately and fail all the same. You think &#8220;I shouldn&#8217;t smoke&#8221; while lighting the cigarette. Afterwards you feel guilty because you saw clearly and failed anyway. In means-end akrasia, you cloud your own vision. You still believe &#8220;I should be healthy&#8221; while telling yourself a story that disconnects this action from that judgment.
You say things like &#8220;one cigarette won&#8217;t hurt&#8221; or &#8220;I&#8217;ll quit next month&#8221; so you don&#8217;t have to see this particular instance as a violation of your own standard.</p><p>AI temptation is distinctively means-end akrasia, but it goes further because it short-circuits opportunities for deliberation altogether (the situations in which we either strengthen our autonomy or succumb to akrasia). When a person convinces herself &#8220;AI is just a tool!&#8221; or &#8220;the core ideas are mine!&#8221; or &#8220;I&#8217;ll review it anyway!&#8221;, she disconnects its use from the need to do her own thinking. That would normally be plain old means-end akrasia, except that a) the thing you&#8217;re outsourcing here is deliberation and b) deliberation is the capacity you need to see your rationalizations clearly.</p><p>When you ask Claude or ChatGPT to draft an email, you haven&#8217;t yet decided what you think. You are tempted to outsource whatever thinking was required and, assuming you just hit send without re-reading, you don&#8217;t weigh its content. The obvious response here is something like: fine, but if I review, edit, or rewrite, aren&#8217;t I still exercising judgment? The example only holds in cases of extreme passivity, and lots of people don&#8217;t just hit send and move on.</p><p>The difference is that when you review an AI draft, you tend to be evaluating rather than generating. This is the basic idea of writing-as-thinking, which holds that blank pages are there to help further your understanding rather than be filled with size-eleven Arial. Would you have written the same email? Maybe. But you didn&#8217;t muddle through the process that might have surfaced alternatives, so you&#8217;ll never know.</p><p>Granted, it is difficult to say what counts as &#8220;generative&#8221; in this context. One such scenario might see a person think about a given topic and weigh the tone and key points to make.
Your prompt might be as specific as possible and leave very little room for the model to go off-piste. But finding the right word or rhythm isn&#8217;t only about getting the execution right. &#8220;I&#8217;m sorry but I can&#8217;t make it&#8221; versus &#8220;unfortunately I won&#8217;t be able to attend&#8221; versus &#8220;I wish I could be there, but&#8221; aren&#8217;t necessarily fungible. Even in low-stakes messages, I often figure out what I really mean through the act of fumbling around for the right words.</p><p>More than that, for all of the above I&#8217;m assuming the lowest-stakes kind of writing and the maximal amount of resistance on the part of the author. If you always use AI like this then you must have a kind of saintly disposition. With longer or more complex writing it&#8217;s much more tempting to take the easy way out, which is what many people do much of the time.</p><p>This argument, the one spelled out over the last few paragraphs, is a very modest instance of exercising deliberative capacity. Let&#8217;s say I thought about using Claude to make this case for me. We have two kinds of decisions to make:</p><ol><li><p>Do I use Claude or not?</p></li><li><p>If so, how much do I engage with Claude&#8217;s output?</p></li></ol><p>Temptation operates at both of these levels, with the former a simple binary (use or don&#8217;t) and the latter a gradient consisting of things like prompting more carefully, reviewing more diligently, engaging for longer, and calibrating the threshold at which you&#8217;re willing to accept &#8220;good enough.&#8221; The person who rewrites is tempted to revise, the person who revises is tempted to edit, and the person who edits is tempted to skim. At every point along the spectrum we&#8217;re being tempted with another opportunity to do less.</p><h3>Athletes of God</h3><p>In the fourth century, the Christian philosopher Anthony the Great withdrew to the Egyptian desert in search of discipline.
The desert was a gymnasium, a place where you trained the will by struggling against temptation. Anthony and the ascetic monks who followed him called themselves &#8220;athletes of God&#8221; because they believed that the conditions that allow us to err also allow us to stay the course and become stronger for it.</p><p>The Desert Fathers thought that temptation was necessary for growing into the people we want to be. They chose to do the hard things <em>because</em> they were hard. Today, doing the hard thing is less necessary than ever. Unlike the ascetics, who chose once and lived with it, we face technology that continually presents us with ways of doing less.</p><p>A clear-eyed akratic at least knows she is falling short. A means-end akratic, aided by a tool that makes the rationalization feel reasonable, does not. Meta-temptation is a distinctively means-end form of akrasia that allows you to convince yourself that Claude or ChatGPT was only helping, one that goes further because you outsource the faculties you need to see through these post hoc rationalizations.</p><p>No one wakes up one morning and decides to stop thinking. We start by outsourcing the stuff that doesn&#8217;t matter and discover, weeks or months later, that the boundary between trivial and meaningful is getting harder to spot. The email becomes the memo becomes the proposal becomes the argument. Each step is small enough to rationalize and the rationalization is always the same: one more time can&#8217;t hurt.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing.
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Thanks to Ashley Kim for this formulation.</p></div></div>]]></content:encoded></item><item><title><![CDATA[What Will You Build For: Zena Hitz]]></title><description><![CDATA[The Great Books: for anyone to read and discuss.]]></description><link>https://blog.cosmos-institute.org/p/what-will-you-build-for-zena-hitz</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/what-will-you-build-for-zena-hitz</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Tue, 10 Feb 2026 15:03:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nw_a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!VpP2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VpP2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 424w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 848w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 1272w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VpP2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2687709,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/185062465?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VpP2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 424w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 848w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 1272w, https://substackcdn.com/image/fetch/$s_!VpP2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdfe23d02-e6c7-4b20-abaf-215fc86d0995_5760x3240.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Every builder&#8217;s first duty is philosophical: to decide what they should build for. This series asks 9 questions to founders who are building towards their vision of the human good.</em></p><p>Today&#8217;s guest is Zena Hitz. Zena is the founder of <a href="https://catherineproject.org/">Catherine Project,</a> which builds communities of learning through online courses and reading groups.</p><p>Zena is also a Tutor at St. John&#8217;s College and the author of<em> <a href="https://www.amazon.co.uk/Lost-Thought-Hidden-Pleasures-Intellectual/dp/0691178712">Lost in Thought: The Hidden Pleasures of an Intellectual Life</a></em>.</p><div><hr></div><h4><strong>1. 
What are the core questions or beliefs driving your work?</strong></h4><p>I founded Catherine Project because I had the sentimental idea that learning and thinking for its own sake is a basic human need. Since founding it, and seeing thousands of people come to us, I believe it is actually true. Our educational institutions have largely forgotten this fact, if they ever knew it. </p><p>The other reason I wanted the Project to be open to anyone who was interested was to see for myself how much interest there really is. My conclusion: there's a lot. I decided not to charge tuition, because learning only happens when the learner decides to do it. It can&#8217;t be bought, even if some opportunities for it can be. If we charged tuition, our learners would expect us to do their learning for them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ga-e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ga-e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 424w, https://substackcdn.com/image/fetch/$s_!ga-e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 848w, https://substackcdn.com/image/fetch/$s_!ga-e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ga-e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ga-e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png" width="689" height="486.69787234042553" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:996,&quot;width&quot;:1410,&quot;resizeWidth&quot;:689,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ga-e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 424w, https://substackcdn.com/image/fetch/$s_!ga-e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 848w, https://substackcdn.com/image/fetch/$s_!ga-e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ga-e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc96be0a6-bb12-460e-8d0e-b1ae96c9a41f_1410x996.png 1456w" sizes="100vw"></picture></div></a><figcaption class="image-caption">A selection of the books read as part of Catherine Project</figcaption></figure></div><div><hr></div><h4><strong>2.</strong> What future are you building for?</h4><p>I can see two futures. 
In one, colleges and universities lose their last remnants of serious learning, and organizations like Catherine Project keep practices of serious learning alive for future generations.</p><p>In another future, organizations like mine display the enormous demand for liberal education that all of the educational leaders have been denying. That demand drives the old institutions to reform, and so colleges and universities return (or partially return) to their original mission.</p><div><hr></div><h4><strong>3. </strong>What commonly held belief in the tech community do you believe is wrong?</h4><p>Everyone seems to want &#8220;scale&#8221; and &#8220;impact&#8221; and seems to think that means having a massive operation. But doing something right on a small scale breeds imitators&#8212;it is like a seed. Great Books institutions were always small, but they have been enormously influential. There&#8217;s nothing more &#8220;scalable&#8221; than wheat or bread, but no one owns all the wheat or the bread-making recipes. It doesn&#8217;t take much thinking to see that this is for the best.</p><p>Technology only works for human flourishing if we choose to design it and to use it that way. We need to think about how we want to live&#8212;in common as well as individually&#8212;and choose accordingly. Likewise, money isn&#8217;t the most important thing. To build trust, you need to care about something that recognizably benefits more than just you and your friends.</p><div class="pullquote"><p>&#8220;Doing something right on a small scale breeds imitators&#8212;it is like a seed&#8221;</p></div><h4>4. What are your main philosophical influences?</h4><p>For me the biggest puzzle is how intellectual excellence can and should shape our everyday choices and our public or communal projects. </p><p>Plato and Aristotle argued that <em>eudaimonia</em>, or human flourishing, was constituted by the pursuit of theoretical philosophy. 
Moreover, they thought the wholehearted pursuit of theory, paradoxically, was the best guide to life. In part that's because <em>eudaimonia</em> is always the best guide to life. Mixed motives are not stable, and tend toward the dominance of the strongest and worst motive in the mix. There's only one pursuit that matters most to us. Since that motive shapes everything else, there's nothing more important than getting it right.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!63in!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!63in!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 424w, https://substackcdn.com/image/fetch/$s_!63in!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 848w, https://substackcdn.com/image/fetch/$s_!63in!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 1272w, https://substackcdn.com/image/fetch/$s_!63in!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!63in!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png" width="523" height="299.7682926829268" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:423,&quot;width&quot;:738,&quot;resizeWidth&quot;:523,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!63in!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 424w, https://substackcdn.com/image/fetch/$s_!63in!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 848w, https://substackcdn.com/image/fetch/$s_!63in!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 1272w, https://substackcdn.com/image/fetch/$s_!63in!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F573da302-a9b8-4959-9304-d98b8f7ecf54_738x423.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h4>5. What does human flourishing mean to you?</h4><p>It means the exercise of our best capacities, the power to think and the power to love. We develop those capacities by cultivating virtues like courage, generosity, prudence, moderation, and wisdom.</p><p>We need communities on our scale&#8212;think large families or small towns&#8212;to provide training courses in virtue and meaningful paths of exercise for our activities. Poor and humble people often flourish more fully than the rich and powerful because they rely more heavily on their communities and live more intimately with them. That&#8217;s where the best of life is. I came to see this from joining the Roman Catholic Church but I think the other major religions teach it. 
I&#8217;m not sure anyone else does.</p><div><hr></div><h4>6. What&#8217;s one book you&#8217;ve read recently that you&#8217;d recommend?</h4><p>I&#8217;m not alone in thinking the <a href="https://en.wikipedia.org/wiki/The_Years_of_Lyndon_Johnson">Robert Caro biographies of Lyndon Johnson</a> are incredible. </p><p>They display perfectly the tensions between character and ambition. Caro is never glib and he never takes an easy answer.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4GCr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4GCr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 424w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 848w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 1272w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4GCr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png" width="1300" height="650" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:650,&quot;width&quot;:1300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4GCr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 424w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 848w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 1272w, https://substackcdn.com/image/fetch/$s_!4GCr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F586594fe-f225-4e2b-a5cf-f10ce6787f08_1300x650.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h4>7. What&#8217;s your most irrational belief?</h4><p>That I&#8212;my will and my talents&#8212;am the chief cause of my success. In fact it was mostly luck, grace, and animal spirits. Relatedly, I have the repeated delusion that whatever the problem is, I am the person best equipped to solve it. No matter how many times I see that proved false, the unsinkable belief always rises again and plagues me when I need it least.</p><div><hr></div><h4>8. What&#8217;s the most interesting tab you have open right now?</h4><p>I have a tab listing the famous social science studies that have not been replicated. We swim in fake science, so much that we can&#8217;t even make a simple argument without appealing to it. 
We so sorely need the arts of study, listening, reading, thinking, and judgment. &#8220;Studies show...&#8221; cannot replace a solid, well-informed human judgment.</p><div><hr></div><h4>9. Who is one writer or thinker today who you think is underrated?</h4><p>Well, to add a spin on the question&#8230;we are living, active beings who can feed our reflection and imagination on anything that has been thought or said. </p><p>And so I think the most underrated thinker is whatever great thinker of the past you haven&#8217;t read yet&#8212;or wherever your understanding is most limited. </p><p>For me, it's the Enlightenment authors I know least. I have been gradually untangling Descartes' philosophy and science through my teaching at St. John's and I find it very exciting.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nw_a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nw_a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nw_a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nw_a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!nw_a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nw_a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/aea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Hitz, Zena | Princeton University Press&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Hitz, Zena | Princeton University Press" title="Hitz, Zena | Princeton University Press" srcset="https://substackcdn.com/image/fetch/$s_!nw_a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nw_a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nw_a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!nw_a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faea083a3-5fe0-4d98-8db0-0a4fd3484e47_1200x630.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><p><em>Thanks to Zena for answering &#8220;What Will You Build For?&#8221;</em></p><p><em>To get in touch, find her on <a href="https://x.com/zenahitz?lang=en">X</a> or at <a href="https://catherineproject.org/">Catherine Project</a>.</em></p><p><em>This is the third installment in this interview series; see <a 
href="https://blog.cosmos-institute.org/p/what-will-you-build-for-rune-kvist">our first interview</a> with AI Underwriting Company co-founder Rune Kvist and <a href="https://blog.cosmos-institute.org/p/what-will-you-build-for-zoe-weinberg">our second interview</a> with ex/ante founder Zoe Weinberg.</em></p><div><hr></div><p><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for updates and essays:</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you&#8217;re someone who thinks deeply, builds deliberately, and cares about the future AI is shaping: <a href="https://cosmos-institute.org/">join the Cosmos network</a>.</p><p>To nominate someone for &#8220;What Will You Build <em>For</em>?&#8221; leave a comment below, or <a href="https://x.com/Cosmos_Inst">send us a DM</a>.</p>]]></content:encoded></item><item><title><![CDATA[What Will You Build For: Zoe Weinberg]]></title><description><![CDATA[Investing in human agency.]]></description><link>https://blog.cosmos-institute.org/p/what-will-you-build-for-zoe-weinberg</link><guid 
isPermaLink="false">https://blog.cosmos-institute.org/p/what-will-you-build-for-zoe-weinberg</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Tue, 03 Feb 2026 15:03:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OeC9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zYfh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zYfh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 424w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 848w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 1272w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zYfh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3999522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/185061383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zYfh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 424w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 848w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 1272w, https://substackcdn.com/image/fetch/$s_!zYfh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dc382df-aa95-49d7-aab9-8017f20058c6_5760x3240.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Every builder&#8217;s first duty is philosophical: to decide what they should build for. This series asks 9 questions to founders who are building towards their vision of the human good.</em></p><p>Today&#8217;s guest is <a href="https://www.linkedin.com/in/z-weinberg/">Zoe Weinberg</a>. Zoe is the founder of <a href="http://buildexante.com/">ex/ante</a>, an early-stage venture fund backing technology that advances human agency and digital freedom. 
She talks more about her work in <a href="https://www.linkedin.com/posts/z-weinberg_when-my-dear-friend-brendan-mccord-told-me-activity-7401987356440027136-dui6?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAzTBosBCd9xUylHWmiGndwl4t9kTE3fFS8">this short video</a>.</p><p>Before ex/ante, Zoe worked on ethics and policy issues at the National Security Commission on Artificial Intelligence, Google AI, and the World Bank.</p><div><hr></div><h4><strong>1. What are the core questions or beliefs driving your work?</strong></h4><p>It&#8217;s tempting to fall into standard narratives of technology as bringing either salvation or apocalypse. It&#8217;s all a bit Manichean&#8212;a black-and-white view of technology as good or evil, often in those stark terms. </p><p>I&#8217;m more interested in questions of how technology can subtly reshape human agency, human rights, and human flourishing, all depending on how it&#8217;s built. Technology can be democratic or authoritarian. It can expand freedom or constrict it. It can advance open societies. Or not. There&#8217;s very little that&#8217;s preordained. There&#8217;s very little that&#8217;s not ultimately in our hands as builders, funders, and users.</p><div><hr></div><h4><strong>2.</strong> What future are you building for?</h4><p>Through my work at <a href="https://www.buildexante.com">ex/ante</a>, I&#8217;m trying to contribute to a future that centers individual autonomy, freedom, and choice, counteracting a digital world that feels increasingly circumscribed and controlled by governments and a handful of large tech companies.</p><p>This means a world where systems make knowledge and capability more accessible, and where the friction and difficulty that often accompany deep understanding aren&#8217;t eliminated, but are part of what creates meaning and joy.</p><p>One version of this was captured recently in the <a href="https://resonantcomputing.org">Resonant Computing Manifesto</a>. 
This suggested that artificial intelligence can give rise to digital experiences that nourish, rather than drain, us if we build with intention and thoughtfulness. I&#8217;m building for the future where innovation isn&#8217;t just for innovation&#8217;s sake, but toward the creation of a healthier and more free society.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OeC9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OeC9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 424w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 848w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 1272w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OeC9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png" width="415" height="415" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:711,&quot;width&quot;:711,&quot;resizeWidth&quot;:415,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OeC9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 424w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 848w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 1272w, https://substackcdn.com/image/fetch/$s_!OeC9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcad099ca-3bc9-41f5-b969-5efd628ce20d_711x711.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h4><strong>3. </strong>What commonly held belief in the tech community do you believe is wrong?</h4><p>In the tech world I often hear people say that technology is value-neutral: it&#8217;s how people <em>use</em> technology that can have good or bad consequences. I disagree.</p><p>Technology is purpose-built, and therefore is embedded with intent. What&#8217;s its purpose? How&#8217;s it meant to be used? Technical architecture always embodies a set of values-laden choices: the design of protocols, interfaces, and systems that don&#8217;t just enable certain actions but actively shape the distribution of power, the visibility of alternatives, and the very modes by which users understand their options.</p><p>There&#8217;s a million little decisions that go into product- and company-building that ultimately have a moral valence. 
To make some of those decisions easier, ex/ante and the Ford Foundation released <a href="https://www.builder-blueprints.com">Builder Blueprints</a>, a library of templates and documents for start-ups to raise the bar on privacy and user rights. It&#8217;s just one piece of the equation, but it is all part of the foundation on which a company is built.</p><p>Calling technology value-neutral is a cop-out.</p><div class="pullquote"><p>&#8220;Technical architecture always embodies a set of values-laden choices.&#8221;</p></div><h4>4. What are your main philosophical influences?</h4><p>I will never forget the first time I read Thomas Kuhn&#8217;s <a href="https://www.amazon.com/Structure-Scientific-Revolutions-50th-Anniversary/dp/0226458121"><em>The Structure of Scientific Revolutions</em></a>. I am often reminded that basic assumptions about the world that we take as &#8220;science&#8221; and &#8220;fact&#8221; are in reality socially and historically contingent. Science and progress are hardly linear, and new (often contrarian) perspectives are critical to improving our understanding of the world. 
In the context of my work at ex/ante, I often think about what technological developments on the far edges of the horizon could substantively change my mental model of how the world works or contribute to a true paradigm-shift.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pO1g!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pO1g!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 424w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 848w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pO1g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png" width="289" height="446.676970633694" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:647,&quot;resizeWidth&quot;:289,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pO1g!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 424w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 848w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!pO1g!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed9e16c6-b097-4904-b1d7-e8a06b693ddd_647x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Many of the other thinkers and writers that have influenced my work are represented in the <a href="https://on.liminary.io/c/human-agency-library">Tech for Human Agency Library</a>, including John Stuart Mill, Alexis de Tocqueville, Michel Foucault, Marshall McLuhan and many others.</p><div><hr></div><h4>5. What does human flourishing mean to you?</h4><p>For me, human flourishing is rooted in human agency: the ability to make substantive decisions about one&#8217;s life. It requires a social structure that enables individual freedom, individual judgment, and pursuit of individual purpose.</p><p>&#8220;Individual&#8221; is the key word. Any infrastructure (technological, cultural, political or otherwise) that imposes uniformity diminishes the conditions for human development and progress. Self-authorship emerges from the struggle to understand and solve genuinely hard problems, to navigate moral complexity, to confront perspectives that are at odds with one&#8217;s worldview. 
On a societal level, human flourishing is necessarily plural.</p><p>The question for our moment is whether technology can expand or contract the possibilities for self-authorship and whether we can design these systems to preserve the productive friction that gives rise to human flourishing. On an individual level, human flourishing is less about achieving any particular state of happiness and more about having the capacity for self-authorship. </p><div><hr></div><h4>6. What&#8217;s one book you&#8217;ve read recently that you&#8217;d recommend?</h4><p><em><a href="https://www.penguinrandomhouse.com/books/721361/second-life-by-amanda-hess/">Second Life: Having a Child in the Digital Age</a></em>, by Amanda Hess. </p><p>I recently became a parent, so I appreciated this memoir about the ways that the digital world has shaped one of life&#8217;s biggest milestones, by an internet culture critic for <em>The</em> <em>New York Times</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pKx2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pKx2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 424w, https://substackcdn.com/image/fetch/$s_!pKx2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!pKx2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!pKx2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pKx2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png" width="292" height="441.08761329305133" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:662,&quot;resizeWidth&quot;:292,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pKx2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 424w, https://substackcdn.com/image/fetch/$s_!pKx2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!pKx2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!pKx2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91756b74-f2cd-4027-bd31-62ce1f190b09_662x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h4>7. 
What&#8217;s your most irrational belief?</h4><p>My irrational prediction of an irrational belief is that we will see the emergence of a religious movement centered on worship of AI and superintelligence. A great awakening for the 21st century.</p><div><hr></div><h4>8. What&#8217;s the most interesting tab you have open right now?</h4><p>The website for the <a href="https://alignmentalignment.ai/caaac">Center for the Alignment of AI Alignment Centers</a>. I am a big believer in satire and humor as a method for sense-making. I also think creator Louis Barclay is hilarious and brilliant.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yz72!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yz72!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 424w, https://substackcdn.com/image/fetch/$s_!yz72!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 848w, https://substackcdn.com/image/fetch/$s_!yz72!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 1272w, https://substackcdn.com/image/fetch/$s_!yz72!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!yz72!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png" width="1456" height="713" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:713,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yz72!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 424w, https://substackcdn.com/image/fetch/$s_!yz72!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 848w, https://substackcdn.com/image/fetch/$s_!yz72!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 1272w, https://substackcdn.com/image/fetch/$s_!yz72!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26714a2b-0329-4b5b-b7c1-876f0acbb37e_1600x784.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h4>9. Who is one writer or thinker today who you think is underrated?</h4><p>An idea that is underrated is the <a href="https://www.niskanencenter.org/wp-content/uploads/old_uploads/2018/04/Final_Free-Market-Welfare-State.pdf">Free-Market Welfare State</a>. It&#8217;s a concept described by Samuel Hammond in a 2018 policy paper, suggesting that a strong social safety net can encourage more innovation, risk-taking, and entrepreneurship.</p><p>By giving citizens guaranteed support, such as health insurance and retirement security, we remove the pressure for protectionism and intervention during periods of technological and economic disruption. At the same time, we empower individuals to take risks, knowing they and their families won&#8217;t slip through the cracks. 
The proposal doesn&#8217;t fit in a neat partisan box, which is perhaps why it does not get nearly the attention it deserves as an economic policy solution.</p><p>Innovation and progress <em>depend</em> on the willingness of individuals to take risks, so policies that make taking a leap of faith easier are ones I&#8217;m likely to be in favour of.</p><div><hr></div><p><em>Thanks to Zoe for answering &#8220;What Will You Build For?&#8221; </em></p><p><em>To get in touch, find her on <a href="https://www.linkedin.com/in/z-weinberg/">LinkedIn</a> or at <a href="https://www.buildexante.com/">ex/ante</a>.</em></p><p><em>This is the second installment in this interview series; see <a href="https://blog.cosmos-institute.org/p/what-will-you-build-for-rune-kvist">our first interview</a> with AI Underwriting Company co-founder Rune Kvist.</em></p><div><hr></div><p><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for updates and essays:</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If you&#8217;re someone who thinks deeply, builds deliberately, and cares about the future AI is shaping&#8212;<a href="https://cosmos-institute.org/">join the Cosmos network</a>.</p><p>To nominate someone for &#8220;What Will You Build <em>For</em>?&#8221; leave a comment below, or <a href="https://x.com/Cosmos_Inst">send us a DM</a>.</p>]]></content:encoded></item><item><title><![CDATA[On the Noble Uses of AI]]></title><description><![CDATA[When Cognitive Offloading Elevates Us]]></description><link>https://blog.cosmos-institute.org/p/on-the-noble-uses-of-ai</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/on-the-noble-uses-of-ai</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Fri, 30 Jan 2026 15:04:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!w_6y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is a guest post by <a href="https://www.kevinvallier.com/">Kevin Vallier</a>, a Professor of 
Philosophy at the University of Toledo and Director of Research at the Institute of American Constitutional Thought and Leadership.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!w_6y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w_6y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w_6y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg" width="1186" height="872" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:872,&quot;width&quot;:1186,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:430330,&quot;alt&quot;:&quot;Pieter Bruegel the Elder - The Harvesters - The Metropolitan Museum of Art&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Pieter Bruegel the Elder - The Harvesters - The Metropolitan Museum of Art" title="Pieter Bruegel the Elder - The Harvesters - The Metropolitan Museum of Art" srcset="https://substackcdn.com/image/fetch/$s_!w_6y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 424w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 848w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!w_6y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F15938db0-ef5f-4ed3-be04-dbd8637b4280_1186x872.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Pieter Bruegel the Elder, <em>The Harvesters</em> (1565)</figcaption></figure></div><p>Pepper was once a luxury. Medieval merchants traveled halfway around the world to bring it to European tables. Now it sits in shakers at every restaurant, free for the taking. It became cheap because people wanted to put it on everything. </p><p>Intelligence is becoming pepper.</p><p>As Sam Altman <a href="https://blog.samaltman.com/the-gentle-singularity">puts it</a>, we are approaching intelligence &#8220;too cheap to meter.&#8221; Open-source models already make raw cognitive power free at the point of use. Soon it will be everywhere, layered over every interaction, and available for every task. 
Our operative intelligence, the combination of biological and artificial neural networks, will expand dramatically. But as artificial intelligence becomes cheap, biological intelligence may decline. A society can grow smarter while its members&#8217; skills atrophy.</p><p>Most people find this prospect irredeemably bad. I think some atrophy is acceptable and sometimes beneficial. This thesis is surprisingly moderate. We already accept strategic cognitive atrophy when the trade-offs are right. Calculators have led to the atrophy of mental arithmetic capacity, yet mathematicians today are better at mathematics than ever. They gave up tedium and gained something greater: the ability to solve harder problems. </p><p>Mathematicians now routinely use computer algebra systems (such as Mathematica) and proof assistants (such as Lean) to support mathematical reasoning. This lets them explore problems they could not reach before. The <a href="https://en.wikipedia.org/wiki/Four_color_theorem">four-color theorem</a> was proven in 1976 with computer assistance, since no human could have checked all the cases by hand. This pattern is ancient. Writing hurts our memory, but no one wants to stop people from writing. This precedent shows that cognitive offloading can enhance human flourishing when properly structured.</p><p>AI <em>will</em> cause some cognitive atrophy. Offloading is inevitable when intelligence becomes free. 
We&#8217;ll need to decide which forms of atrophy we should accept, given their inevitability.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to Cosmos Institute for updates including opportunities, content, and programs</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>Trade-offs</h3><p>John Stuart Mill and Aristotle help clarify when these trade-offs prove acceptable. Mill distinguished between higher- and lower-order pleasures. By higher pleasures he wasn&#8217;t talking about &#8220;refined&#8221; tastes or elite entertainment. He meant the pleasures that engage our <em>distinctively human capacities</em>: reasoning, imagination, and moral feeling. As Mill put it in <em>Utilitarianism</em>, we &#8220;assign to the pleasures of the intellect, of the feelings and imagination, and of the moral sentiments, a much higher value as pleasures than to those of mere sensation.&#8221;</p><p>Consider someone who has experienced the satisfaction of a nice-tasting meal and the satisfaction of solving a difficult problem. If she is honest, she will rank the second higher. The latter provides a deeper satisfaction.</p><p>This is Mill&#8217;s &#8220;competent judges&#8221; test. Ask anyone who has genuinely experienced both kinds of pleasure which they would give up. They will sacrifice bodily pleasures first. The fool may be content. 
But Socrates, even when dissatisfied, has something the fool will never have.</p><p>Aristotle made a similar observation more than 2,000 years earlier. He distinguished three lives: the life of pleasure, the life of political action, and the life of contemplation. The life of pleasure seeks bodily satisfaction. The life of action seeks honor and achievement. But the contemplative life seeks truth itself.</p><p>Contemplation is not passive. It is the most intense activity the mind can perform. It means thinking about the highest things: the structure of reality, the nature of the good, and the order of the cosmos. This activity is not labor-intensive like farming or manufacturing, and it&#8217;s not leisure either. It is the mind at full stretch.</p><p>Aristotle thought most people could not sustain a contemplative life. The demands of survival intervened. But he also thought that insofar as we can contemplate, we should. It is what we are for.</p><p>This is where AI&#8217;s promise shines through. It can handle lower-order cognitive tasks, freeing us for activities that engage our highest capacities. A researcher who once spent hours hunting for sources can now spend those hours thinking about what the sources mean. A writer who labored over formatting can focus on whether the argument is true.</p><p>Of course, people may not do this. We all know that AI can seduce us into passivity. But I would argue that it also elevates the opportunities for virtue. The mind freed from drudgery really can rise. It can shift us from the mechanical to the meaningful. If we follow the competent judges test, as Mill would have us do, we know that most people do not prefer the mechanical once they have moved beyond it. Many people really do have the time to cultivate virtue and devote themselves to pursuits that satisfy them and that realize their highest values. 
AI gives us the chance to do that.</p><p>Following Mill and Aristotle, cognitive offloading is acceptable only when it preserves these capacities. It can become noble when it enhances them.</p><h3>What to Offload</h3><p>Consider two patients facing the same diagnosis. The first asks an AI what treatment to pursue and then simply does it. She has outsourced both the information-gathering <em>and</em> the judgment itself. Her capacity for medical reasoning atrophies because she never exercises it. She cannot evaluate what the AI told her. She cannot ask her doctor the right questions. When the AI errs, she has no way to catch its mistake.</p><p>The second patient uses AI differently. She conducts an extensive medical review, reading research reports that the AI surfaces. She generates a list of questions for her doctor based on what she learned, brings the reports to the appointment, and discusses them. The doctor makes the call, but with a better-informed patient in the room.</p><p>Both patients offloaded cognition. But the second preserved her deliberative capacity. She did not hunt for the research herself (there&#8217;s no free cognitive lunch), but she did something harder: evaluate it. Her speed at reading dense medical literature may atrophy. But her ability to weigh trade-offs, to question authority, and to integrate information grows stronger.</p><p>Offloading exists on a spectrum. The same pattern emerges in education, but with deliberation more fully intact. A student can have AI do her homework and then pass it off as her own. She speaks fluently about concepts she doesn&#8217;t understand. The appearance of intelligence replaces its reality. Like muscles that atrophy without use, her reasoning capacity withers.</p><p>Or she can use AI to challenge herself. Study mode forces her to work through problems rather than receiving answers. The AI asks questions, corrects misconceptions, and refuses to simply hand over solutions. 
Each session builds her capacity. She&#8217;s still using AI. But she&#8217;s building human thinking on top of AI thinking.</p><p>Autopilot shows what happens when atrophy goes wrong. Modern pilots offload constantly, and for good reason; automated systems handle airspeed, altitude, and navigation almost all of the time. The pilot watches the machines work, which reduces fatigue. Flights have become far safer as a result. Still, there is a loss. As we know from the movies, hand-flying an aircraft keeps pilots in the loop, letting them feel the aircraft&#8217;s responses intimately. That helps create a continuous mental model of where the plane is located and what comes next. Autopilot allows for drift, and the mental model fades.</p><p>Air France 447 is illustrative. In 2009, over the mid-Atlantic, the aircraft&#8217;s pitot tubes iced over, and the autopilot disconnected. The pilots had to hand-fly in unusual conditions and made simple mistakes. The aircraft stalled, yet the pilot kept pulling the nose up, deepening the stall. The flight carried 228 people. None survived.</p><p>The pilots had thousands of hours of flying time between them, but their critical skills atrophied from disuse. Automation makes us safer while making rare emergencies more devastating. The mathematician who offloads arithmetic still knows how to solve hard problems. The pilots who offloaded flying could not fly when it mattered.</p><p>What distinguishes good offloading from bad? The examples suggest an answer. Offload mechanical cognition: the tedious, repetitive operations that don&#8217;t require judgment. Preserve core deliberative capacities: the ability to evaluate, choose, and reason through hard cases. Make sure evaluating AI outputs requires active engagement. Expand higher-level intellectual activity. 
And preserve cognitive sovereignty: maintain the ability to contest what the AI tells you, to understand how it reached its conclusions, and to exit when you need to.</p><h3>Permissible Atrophy</h3><p>In my previous essay for Cosmos, I argued that <a href="https://blog.cosmos-institute.org/p/intelligence-environments">intelligence environments</a> require contestability, transparency, and exit rights. Permissible atrophy is the next question, and it is not just a personal one.</p><p>We are designing intelligence environments for one another. A teacher who assigns AI tools shapes what her students become. An employer who deploys AI assistants shapes what her workers can do. These choices become the formal and informal rules that govern how we think. </p><p>We have freed people from drudgery before. Some wasted the freedom. Others turned toward higher things. AI can go either way. Build it wrong, and we may create sophisticated parrots: fluent, confident, and hollow.</p><p>Pepper became cheap because people wanted to put it on everything. Intelligence is becoming cheap for the same reason. How we embed it will determine who we become.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for updates and essays:</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Join the Cosmos Institute team]]></title><description><![CDATA[New operations, finance, and talent-focused roles - applications open!]]></description><link>https://blog.cosmos-institute.org/p/join-the-cosmos-institute-team</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/join-the-cosmos-institute-team</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Wed, 28 Jan 2026 19:05:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Z27r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>[Update: Applications closed on March 2nd, 2026. 
If you&#8217;re interested in working with Cosmos, pitch us a role <a href="https://cosmosinst.typeform.com/to/RYDheabE">here</a>]</p><p>Technological progress is forcing a question: <em>what kind of people do we want to become in a world where thinking is optional?</em></p><p>Cosmos Institute exists to answer that question by developing <a href="https://blog.cosmos-institute.org/p/the-philosopher-builder">philosopher-builders</a> who can translate the principles of a free society into code. Now we&#8217;re growing <a href="https://cosmos-institute.org/#team">our team</a>.</p><p>In our <a href="https://blog.cosmos-institute.org/p/a-letter-year-end-2025">year-end letter</a>, we shared what we built in 2025: </p><ul><li><p>140+ grantees through our fast grants program</p></li><li><p>16 fellows working at the intersection of philosophy and frontier AI</p></li><li><p>14 seminars with partners like Oxford, DeepMind, and Liberty Fund</p></li><li><p>8 philosopher-builders creating new companies</p></li></ul><p>We&#8217;re looking for people who believe AI systems should <a href="https://blog.cosmos-institute.org/p/the-claude-boys">promote human autonomy</a>, <a href="https://blog.cosmos-institute.org/p/a-world-unobserved">enable truth-seeking</a>, and <a href="https://blog.cosmos-institute.org/p/the-philosophical-roots-of-decentralized">resist central control</a>, and who want to shape the institutions that make that vision real.</p><p>At Cosmos you&#8217;ll have direct exposure to <a href="https://cosmosgrants.org/winners">top AI thinkers and builders</a>, freedom to design how you work, and the chance to help shape the institutions and companies that will define the AI age.</p><p>We&#8217;re hiring for three core roles:</p><ul><li><p>Head of Operations</p></li><li><p>Head of Finance</p></li><li><p>Head of Talent and Network</p></li></ul><p>Details below &#8595;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!Z27r!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Z27r!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 424w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 848w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 1272w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Z27r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8670917,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/185055146?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Z27r!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 424w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 848w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 1272w, https://substackcdn.com/image/fetch/$s_!Z27r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30cb2fc6-ee8b-4841-80f5-2a03cd4359c3_2688x1792.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h3><strong>Head of Operations</strong></h3><p><strong>Location</strong>: Remote-friendly (preference for Austin, New York, or London)</p><p>This role builds the operational infrastructure that allows Cosmos to scale: systems that work, a team that delivers, and a culture built on fast feedback loops.</p><p>You&#8217;ll own operational delivery for seminars, fellowships, grants cycles, and formation programs, including coordination for The Academy launch. You&#8217;ll build and maintain operations systems across Cosmos: payroll coordination, compliance, vendor management, HR systems, and reporting.
You&#8217;ll establish the operational cadence that works across a distributed team, and build systems that make every cycle better than the last.</p><p>You&#8217;ll have budget for 1&#8211;2 direct reports in Year 1 and contract support as needed. As Cosmos scales, you&#8217;ll shape what the operations function becomes.</p><p>We&#8217;re looking for someone with 5&#8211;10+ years of experience in operations or high-responsibility chief of staff roles who has personally touched payroll, HR systems, vendor management, and compliance. You take ownership of outcomes, have exceptional follow-through, and know the difference between what should be systematized and what should remain judgment-driven. <a href="https://cosmos-institute.org/careers/">Read more &#8594;</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://cosmosinst.typeform.com/head-of-ops&quot;,&quot;text&quot;:&quot;Apply&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://cosmosinst.typeform.com/head-of-ops"><span>Apply</span></a></p><div><hr></div><h3><strong>Head of Finance</strong></h3><p><strong>Location:</strong> Remote-friendly (preference for US-based)</p><p>This is Cosmos&#8217;s first dedicated finance hire: hands-on enough to run daily workflows, strategic enough to build a modern finance operating system that scales.</p><p>You&#8217;ll own core finance (accounting, AP/AR, close, reporting, controls, audit/990 coordination) and forward-looking finance (budgeting, forecasting, KPI dashboards). You&#8217;ll set up our in-house finance function, lead the relationship with our external audit firm, ensure clean tracking of restricted funds and program costs, and support fundraising with credible numbers.
You&#8217;ll make finance approachable: clear docs, fast responses, calm execution.</p><p>We&#8217;re looking for a CPA (or equivalent) with 4&#8211;8+ years in accounting/finance, ideally with nonprofit exposure. You treat donor and grant dollars with real seriousness, and you&#8217;re a hands-on operator who&#8217;s happy to process transactions and design better systems. <a href="https://cosmos-institute.org/careers/">Read more &#8594;</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://cosmosinst.typeform.com/finance-lead&quot;,&quot;text&quot;:&quot;Apply&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://cosmosinst.typeform.com/finance-lead"><span>Apply</span></a></p><div><hr></div><h3><strong>Head of Talent and Network</strong></h3><p><strong>Location</strong>: Remote-friendly (preference for Austin or London)</p><p>Over 2025, Cosmos developed a network of hundreds of top technologists and thinkers: 16 Fellows spanning frontier AI, startups, and philosophy (including economist Tyler Cowen and Anthropic co-founder Jack Clark); 140+ grantees building prototypes and research; and 200+ attendees of deep-dive programs with partners like Oxford, Aspen Institute, and Liberty Fund.</p><p>This role takes that early strength and builds a compounding network, where the best people feel supported, make progress faster, publish and ship more, and pull in the next wave of talent.</p><p>You&#8217;ll identify and source exceptional people, assess and route them into the right pathways, and deliver value through introductions and opportunities that unlock projects, jobs, mentors, and collaborators. 
You&#8217;ll run the network operating system: the database, metrics, and workflows that turn messy relationship work into clean processes, freeing up your time for high-touch work.</p><p>We&#8217;re looking for a high-energy connector with 3&#8211;7+ years in talent, community building, partnerships, or accelerator/fellowship programs. You have taste and judgment: you can tell the difference between &#8220;impressive on paper&#8221; and &#8220;actually exceptional.&#8221; You can speak with researchers, founders, funders, and philosophers. <a href="https://cosmos-institute.org/careers/">Read more &#8594;</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://cosmosinst.typeform.com/talent-lead&quot;,&quot;text&quot;:&quot;Apply&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://cosmosinst.typeform.com/talent-lead"><span>Apply</span></a></p><div><hr></div><h3><strong>Pitch us a role</strong></h3><p>If you have a thesis on how you can contribute to our team outside of these areas, please get in touch.</p><p>We&#8217;re likely hiring soon for an operations role focused on the build out of an in-person Academy, and a researcher to contribute to work on AI, human autonomy, and societal impacts.</p><p>If our mission resonates, we&#8217;d love to hear from you.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://cosmosinst.typeform.com/to/RYDheabE&quot;,&quot;text&quot;:&quot;Pitch us a role&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://cosmosinst.typeform.com/to/RYDheabE"><span>Pitch us a role</span></a></p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for free essays and updates:</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Model and the Tree]]></title><description><![CDATA[Can happiness be optimized?]]></description><link>https://blog.cosmos-institute.org/p/the-model-and-the-tree</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/the-model-and-the-tree</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Fri, 23 Jan 2026 15:03:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Iq4b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Iq4b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!Iq4b!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 424w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 848w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 1272w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Iq4b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png" width="1456" height="914" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:914,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Iq4b!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 424w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 848w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 1272w, https://substackcdn.com/image/fetch/$s_!Iq4b!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0e2845a-2e7c-49e6-bb7e-52d3af358678_1600x1004.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><em>Two Men Contemplating the Moon</em> by Caspar David Friedrich (1819)</figcaption></figure></div><p>There are two things that I rarely take off: my wedding ring and my Garmin watch. The former stays on because it seldom crosses my mind to remove it. To tell the truth, I&#8217;m not even sure I could take it off without the right combination of soapy water, elbow grease, and patience. The latter, the Garmin, stays put because I have a regrettable obsession with bodily metrics. I like to know my heart rate, stress levels, and especially the state of my body battery (a wildly inaccurate read about how much fatigue I ought to be feeling at any given moment).</p><p>I&#8217;m not one of those life-hacking optimization types, but I wear the watch all the same. It helps me track runs and workouts, which I review every day or two when I lack enthusiasm for another trip to the gym. People often tell me my preoccupation with these figures probably makes me more stressed. They might be right. Nonetheless, I keep using it because I worry that without constantly reviewing breaths-per-minute or VO<sub>2</sub> max I&#8217;d be tempted to call time on this wellness thing.</p><p>At least in my imagination, fitness &#8211; and so Garmin &#8211; is a part of my personal quest for <em>eudaimonia</em> (usually translated as &#8216;flourishing&#8217; or &#8216;happiness&#8217;). <em>Eudaimonia</em> is less a feeling or a state than the activity of living in accordance with virtue. I think that fitness helps me become someone who&#8217;s disciplined and fairly healthy, and that&#8217;s the kind of person I want to be.
In this light, the question that my smartwatch poses is an uncomfortable one: can flourishing be optimized for, or must it be cultivated from within?</p><p>Garmin&#8217;s algorithms are neither sophisticated nor totalizing. They stay in their lane and have a rather narrow set of powers of suggestion. They don&#8217;t tell me how to exercise any more than a pencil tells me what to write. Others, like recommender systems or large language models, aren&#8217;t so reserved. They suggest where to go, what to have for breakfast, what to watch on TV, and even how to broach a difficult subject with someone we care about. At the limit, these systems may make all of our decisions for us in a scenario we call &#8220;<a href="https://bigthink.com/sponsored/philosopher-builder/">autocomplete for life</a>.&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe to Cosmos Institute for updates including opportunities, content, and programs</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>The Wood from the Trees</h3><p>In <em>On Liberty</em>, Mill tells us something about the human condition: &#8220;Human nature is not a machine to be built after a <strong>model</strong>, and set to do exactly the work prescribed for it, but a <strong>tree</strong>, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.&#8221; For
Mill, human flourishing is developmental. It cannot be achieved through external optimization because the process of growth is essential for the good life. This is why he describes human nature as a tree. The metaphor captures the idea that we grow from latent capacities and inclinations that must be discovered and shaped through self-directed action. Those tendencies suggest paths, but they require autonomy to be actualized.</p><p>In the <em>Nicomachean Ethics</em>, Aristotle tells us that flourishing flows from virtue and virtue through habituation. Rather than asking &#8220;what should I do?&#8221; he encourages us to ask &#8220;what kind of person should I become?&#8221; Grounded in the way people act in the real world, Aristotle&#8217;s approach to ethical life focuses on character development over rule-following or achieving the greatest good for as many people as possible. His &#8220;golden mean&#8221; (finding virtue by balancing extremes) requires <em>phronesis </em>or practical wisdom, the capacity to discern the right action in particular circumstances developed only through living in the world.</p><p>For both Mill and Aristotle, human flourishing is a journey we make every day. Try as we might, there are no shortcuts we can take. The activity of living a self-directed life can never be automated because it is constitutive of what it means to flourish in the first place. A life in which all the right choices were made for you might be pleasant, even enviable, but it would not be yours in the sense that matters. This belief, that flourishing cannot be optimized from the outside, is based on three separate but related ideas: (1) flourishing is a process of being; (2) it requires careful self-authorship; and (3) it must be maintained solely from within. 
An optimized outcome can simulate contentment, but it cannot substitute for active exercise or resilient capacities.</p><h3>Go Forth and Optimize</h3><p>Machine learning systems are trained by optimizing a loss function, but in deployment most function as pattern-completion engines rather than goal-directed agents. When these systems engage humans, they typically optimize for an external objective (e.g. engagement, throughput, or compliance) and treat the user as the site of optimization rather than its author. Even if the objective were human flourishing, we would remain the patient of optimization rather than its architect. The technical apparatus assumes that a desirable end-point can be specified in advance so that the system constructs outcomes rather than growing them. That is unproblematic for the most part &#8211; it&#8217;s just how the technology functions &#8211; but it begins to surface some curious problems when it comes into contact with our efforts to live deliberately.</p><p>You can see this dynamic in play with recommender systems that suggest what you want before you have had a chance to reflect on it. These systems learn from aggregate behavior to predict and shape individual choices, like suggesting the optimal route or pushing you towards television shows or music that &#8220;someone like you&#8221; ought to like. A kind of overfitting to past behavior can <a href="https://blog.cosmos-institute.org/p/what-you-want-to-want">restrict users to previously expressed interests</a>, generating recommendations that whittle away the variety of options needed to recognize the opportunity costs of choice.</p><p>Core to this process is the &#8220;nudge,&#8221; wherein the presentation of choices is shaped by an outside actor. Unlike earlier static nudges (such as placing healthy food at eye level), AI-driven nudges operate continuously and adaptively.
Given enough scale, this transforms what was once soft paternalism into something closer to soft totalitarianism, as fine-grained personalization makes interventions harder to collectively resist. The more criteria are centrally set, the greater the attack surface for those who would exploit them, and the more these systems inhibit the <a href="https://arxiv.org/abs/2504.18601">decentralized adaptive learning</a> through which individuals and societies discover what works. When the environment is engineered to produce predictable choices, the capacity to exercise choice becomes harder to sustain.</p><p>A common <a href="https://compass.onlinelibrary.wiley.com/doi/10.1111/phc3.12658">defense</a> of nudges holds that they preserve freedom of choice while improving outcomes for society as a whole. Proponents suggest that nudges influence behavior mostly for our benefit, and that they are easy to avoid if we put our mind to it. If people can always choose otherwise, their autonomy is not compromised. The rub is that true autonomy involves both first-order choices (I want a cigarette) <em>and</em> second-order reflection on those choices (I wish I could stop smoking). Nudges often bypass reflection &#8211; how many of us are content with taking the first movie recommendation we see? &#8211; preserving formal freedom of choice while undermining the authorship that makes those choices our own. Defenders of nudges might say something like &#8220;if someone&#8217;s going to unthinkingly pick the first movie, it might as well be a good one.&#8221; But this line of thinking doesn&#8217;t engage with the fact that nudges make deliberation harder in the first place. When we engage in critical reflection, we weigh alternatives and formulate our own reasons about what to do. That process is harder to initiate when a &#8220;good enough&#8221; option is presented to us that bypasses the need for deliberation.
</p><p>More than that, the idea that nudges are easy to avoid fails given enough altitude. Even granting that individual nudges may be possible to avoid, like walking past that salad bar and opting for a cheeseburger, an ambient ecology of personalized nudges is not. This gets at one of the curious aspects of autonomy: for such a personal thing, it depends on social life and the opportunities it brings to contest and compare. Autonomy fails when exit is nominal or when commitments become irreversible. Of course, all choices exist within a constellation of potential options, but it is possible in principle to structure those potentialities through systems that keep inquiry distributed rather than centrally engineered.</p><p>Another critique of the developmental view of human nature (that is, Mill&#8217;s tree) suggests that the problem is not optimization per se but poorly specified objectives. If we could align AI systems with genuine human goods, including autonomy, then optimization might in principle be made compatible with human flourishing. One influential proposal, associated with AI researcher Stuart Russell, introduces a simple framework based on three related principles: (1) the AI&#8217;s only objective is to maximize the realization of human values; (2) the AI is initially uncertain what those values are; and (3) human behavior is the primary source of information about values.</p><p>But if flourishing really is an activity rather than a state, then it may not be the kind of thing that can be maximized by an external agent. Like happiness in Aristotle&#8217;s account, it might simply be a good that must be exercised from within. In practical terms, learning values from behavior must grapple with its own circularity. If current behavior reflects preferences already shaped by prior optimization, the system is learning from wants that we haven&#8217;t fully endorsed.</p><p>More fundamentally, observed behavior may be the wrong place to look.
Conduct reveals who we have been, not who we are trying to be. The developmental view suggests that we are always undergoing growth, and the distance between our actions and our aspirations is where character development takes place. No amount of behavioral data, however rich, can capture what we have not yet become. That said, Russell&#8217;s humility principle points toward something important. It reminds us that systems that surface uncertainty and return judgment to the user are more autonomy-preserving than those that do not.</p><p>Another problem deals with adequately specifying preferences for AI systems in line with the developmental view. Current models focus on <em>revealed preferences</em> (what we do) in service of a goal that they set (maximize time on platform in the case of recommendation engines or assist in a &#8220;<a href="https://ecorner.stanford.edu/wp-content/uploads/sites/2/2024/02/helpful-honest-harmless-ai-entire-talk-transcript.pdf">helpful, honest, and harmless</a>&#8221; manner for language models). Simply telling the system what you want to want isn&#8217;t enough; these <em>stated preferences </em>may not necessarily reflect the kind of person you are trying to be. Deeper approaches to character development will be needed for systems that help us realize our potential.  </p><p>A final objection to the developmental reading of human nature might stress that all technology assists human action. Writing supports memory and calculators aid arithmetic (though overreliance can enfeeble both). If external assistance inherently undermines autonomy, shouldn&#8217;t we be wary of tools altogether? But the difference between AI and other technologies turns on whether assistance extends the agent&#8217;s deliberation or replaces it. 
A map or pencil extends my autonomy; a chatbot that makes decisions for me does not.</p><h3>The Constant Gardener</h3><p>Am I using the system to pursue ends I have reflectively endorsed within, or is the system shaping my ends from without? Even when you supply preferences, a system must decide how to act on them according to its own <a href="https://www.anthropic.com/news/claude-new-constitution">constitution</a>. If we have little ability to override those rules, well-meaning efforts at governance may inadvertently prevent systems from supporting our personal quests for <em>eudaimonia</em>. Consider a recommendation system that learns my preferences from behavior and serves content to maximize engagement. The system optimizes for my revealed preferences. But there&#8217;s no reason a recommender system needs to operate this way. It&#8217;s easy enough to build a system that allows users to think carefully about the type of person they would like to become by using the same technology. The former might serve you reality TV because you watched a cooking show; the latter might populate your feed with guides for cooking dishes you&#8217;ve always wanted to make.</p><p>This is better, but even this kind of second-order endorsement is insufficient. If you tell a system the kind of things you want to be interested in, then you may find that the system quickly optimizes for them in a way that does little to help you grow as a person. Watching more arthouse cinema and fewer Netflix originals has little bearing on the type of life you want to live. Instead, systems ought to deal with the stuff that matters. Personal agents, for example, could be designed to sift interests from character development. They could help us think about who we want to become rather than engineering the outcomes we think we want.</p><p>In practice, that might mean an AI assistant that makes its influence legible.
Rather than covertly curating your information environment, a character-supporting agent might show you the paths not taken. It might introduce the unfamiliar because growth requires encounters beyond the experiences you&#8217;ve already had. At minimum, such systems should make exit realistic through portability and contestability and ensure we sign off on any changes that may influence our higher-order commitments. Rather than simply checking whether your goals have changed, they could surface tensions between your commitments to help you live more deliberately.</p><p>Optimizing flourishing from the outside is a non-starter because flourishing is active and authored. But if the locus of deliberation remains with us, perhaps AI can help us live a little more wisely.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.cosmos-institute.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Subscribe for updates and essays:</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Same Radio, Different Citizens]]></title><description><![CDATA[On the Economics of Human
Formation]]></description><link>https://blog.cosmos-institute.org/p/same-radio-different-citizens</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/same-radio-different-citizens</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Fri, 16 Jan 2026 15:07:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XF18!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is co-authored by Brendan McCord (Cosmos Institute) and Philipp Koralus (University of Oxford). </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XF18!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XF18!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 424w, https://substackcdn.com/image/fetch/$s_!XF18!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 848w, https://substackcdn.com/image/fetch/$s_!XF18!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!XF18!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XF18!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!XF18!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 424w, https://substackcdn.com/image/fetch/$s_!XF18!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 848w, https://substackcdn.com/image/fetch/$s_!XF18!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!XF18!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e4013c1-e72c-477d-a5b0-1998c39f137f_1620x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Nicholas Roerich, The Way to Shambhala (1933)</figcaption></figure></div><p>Radio arrived in the early twentieth century as pure possibility. Electromagnetic waves carried human voices across continents.
Within decades, three versions of the same technology had helped produce three different kinds of citizen.</p><p>In the 1930s and 1940s, the British Broadcasting Corporation (BBC) built programming that assumed its listeners wanted to become more than they were, with challenging content and genuine debate, designed to stretch rather than soothe. The funding model made this possible: residents paid a license fee for all BBC programming regardless of what they listened to. No one had to maximize time-on-station. They could pursue something else.</p><p>American commercial radio faced a different pressure: advertising. Audiences were larger when programming demanded less. The market cleared at entertainment. What sold ads was what held attention, and what held attention was not what elevated the listeners.</p><p>Soviet radio under Stalin did not bother with the pretense of serving listeners at all. The technology became an instrument of state narrative designed to bluntly manufacture the appearance of consensus and compress the space for independent thought.</p><p>The same transmitters and waveforms propagating at the speed of light formed the technical basis in all three cases, but divergent selection pressures yielded divergent equilibria. The funding models were de facto governance regimes, entangled with the political order, shaping what could be said and what could be heard.</p><p>Looking back now at the introduction of radio, it is easy to see that the question of whether &#8220;the radio&#8221; is conducive to the good or whether it advances human autonomy is hard to meaningfully address without considering funding models. Yet whenever new technology is deployed, public discourse tends to fall back to discussing the technology <em>in itself</em> without reference to its economic implementation.
For example, people are largely happy to discuss the ills and benefits of social media, and even legislate on that basis, as if social media could be evaluated in a test tube for carcinogens in the way we might evaluate cigarettes.</p><p>Technology only defines a possibility space: the affordances, constraints, default pathways, and scaling properties that structure our interaction with the world around us. Institutional choices and funding models determine what position in that space becomes reality. Moreover, perhaps more importantly, institutional choices and funding models determine what position can remain in equilibrium over time. An institutional aim to pursue a particularly virtuous part of technological possibility space cannot be maintained if the funding model forces the institution over time to either change the aim or die.</p><p>Consider modern recommender systems. The possibility space is vast: systems that surface what users would endorse on reflection, systems that maximize time-on-site, or systems that optimize for learning or serendipity or connection. But the economics of advertising-funded platforms select for a narrow band of that space: the band where engagement can be monetized regardless of whether it tracks anything users actually value. The narrowing isn&#8217;t unique to ads; ads are just the clearest case where the reward proxy is orthogonal to reflective value.</p><p>Billions of minds are routed by recommender systems daily. The unexplored regions of the surrounding possibility space are not technologically inaccessible, but they are economically unselected. We think of economics here broadly: all inputs required for a system to carry on. Even a non-profit social media site endowed in perpetuity or supported by the state would still need to capture enough attention to remain &#8220;social&#8221; in any meaningful sense. 
The distinctions we&#8217;re drawing are orthogonal to narrow critiques of capital.</p><h3>What Institutions Compute</h3><p>The cognitive scientist David Marr proposed a simple framework for how to analyze an information-processing system, regardless of whether it is a human brain or an AI system. He argued that there are three families of questions that need to be answered to have a full understanding of such a system, which he thought of as three levels of analysis.</p><ol><li><p><strong>Aim:</strong> What is the system trying to compute? What problem is it solving?</p></li><li><p><strong>Mechanism:</strong> What procedures does it use? What&#8217;s the algorithm?</p></li><li><p><strong>Substrate:</strong> What physical substrate implements those procedures? What&#8217;s the hardware?</p></li></ol><p>Marr observed that each of these questions has some independence. What you are trying to compute does not fully determine your algorithm or your hardware, and so on. Because of this, we can study facts about aims, mechanisms, and substrates somewhat separately. Yet, Marr also observed that how we answer one family of questions constrains how we get to answer the others.
You cannot simply declare that a system has some aim. The aim must be achievable by some procedure, and the procedure must be realizable in the substrate that is available. If the substrate of your system does not support any mechanism that could achieve your favorite aim, that aim can&#8217;t be part of the right explanation of what the system does.</p><p>This framework transfers to institutions. A company and its tech products are also information-processing systems that can be understood through three analogous families of questions:</p><ol><li><p><strong>Aim: </strong>What is the organization trying to achieve? What problem is it solving?</p></li><li><p><strong>Mechanism: </strong>What procedures does it use? What are the routines, metrics, and product mechanics?</p></li><li><p><strong>Substrate: </strong>What implements those procedures? What is the shape of structures like revenue, ownership, incentives, user participation, survival pressures?</p></li></ol><p>As in Marr&#8217;s case, we get constraints across levels of questions: the lower levels shape what&#8217;s possible at the higher ones. A founder might genuinely aim for VC-funded virtue-in-a-box. But if the company&#8217;s survival depends on something else, the avowed aim is unlikely to remain what best explains its behavior over time. The substrate constrains what aims are stable. The substrate does not uniquely determine aims, but it sets the boundary of the viable.</p><p>Consider a game studio that sets out to build something meaningful. Let&#8217;s say the aim is &#8220;earned mastery.&#8221; What earned mastery requires is not just skill at the game, but the formation that comes from genuine challenge: learning to lose, to persist, to improve through practice. The mechanisms of the game serve that aim, with matchmaking, difficulty curves, and progression that rewards effort. The economics are aligned. Players pay once, and the only path to power is getting better.
Aim, mechanism, substrate, all pointing the same direction.</p><p>Then growth becomes the mandate. Loot boxes convert well. Players want to feel powerful now, not after fifty hours. As a result, mechanics emerge that let money substitute for skill.</p><p>No one at the game studio explicitly decided to abandon the mission. Many probably saw where things were heading. But each choice was locally rational. What got measured became what mattered; what mattered became what survived; what survived became the real aim. The specter of Goodhart&#8217;s law looms large and the feedback loop doesn&#8217;t care about the pitch deck. It optimizes for what it can see. By the end, &#8220;earned mastery&#8221; appears in the mission statement and nowhere else. The aim became a ghost. Not abandoned, just no longer part of the best explanation of the company as a system, or the best model for decisions within it.</p><p>A distinction matters here. Engagement that tracks value (e.g., mastery, truth, community, or genuine satisfaction) is what good products <em>should</em> generate. The problem is engagement engineered through compulsive behavior, asymmetry, and the exploitation of human weakness. In the worst case, we substitute habit for growth, money for skill, manufactured urgency for actual importance.</p><p>Returning to radio, we might say that the BBC avoided structural drift in its heyday because license fees created no pressure to hold attention for advertisers. The economics supported the stated mission rather than corroding it. American commercial radio found other priorities because advertising revenue required capturing attention and capturing attention rewarded different mechanisms than cultivating it. Those mechanisms eventually became the point. This is not a case for license fees as such&#8212;the BBC subsequently developed other pathologies, and any funding model creates its own selection pressures. 
The point is the structural logic, not the specific policy.</p><p>The framework we described is analytically neutral. It tells you how aims, mechanisms, and economics can be articulated independently yet alerts you to how they constrain each other in real systems, particularly as they persist over time. It doesn&#8217;t tell you what aims are worth pursuing. You could use it to help design effective propaganda as easily as effective education.</p><p>Every information environment is a training regime: it determines what people practice noticing, what they practice ignoring, and therefore what they become capable of judging. The institution that shapes attention is in turn shaping the citizen.</p><p>Descriptive frameworks are useful if they help us understand systems, and more useful when they help us articulate what systems we want. When we say the BBC &#8220;cultivated&#8221; and commercial radio &#8220;captured,&#8221; we are claiming that some outcomes are better than others. Specifically, preserving and developing human judgment matters because judgment is the foundation of autonomy. Systems that cultivate judgment expand what a person can do and be. Systems that atrophy judgment reduce the agent to a consumer of impulses by executing preferences they never made their own and optimizing for ends they never deliberately chose.</p><p>If we are going to reason about institutional design at all, we need to be explicit about what we are designing <em>for</em>. That is a prior question, and ignoring it just means your values operate without scrutiny. 
For our part, we take human autonomy as our prior commitment.</p><p>From this perspective, the question for any technology that touches human judgment is not &#8220;is this technology good or bad?&#8221; The question is: in what institutional arrangements is the technology implemented, and do those arrangements create structural pressure toward or away from the cultivation of human judgment?</p><p>In practice, this means asking:</p><ul><li><p><em>What aim does the economic structure sustain over time?</em> The mission statement or the aim that survives contact with the feedback loops?</p></li><li><p><em>What mechanisms does that aim require?</em> Do the proposed mechanisms only work in a vacuum to deliver your aim, or could they stably work to deliver your aim given the economics that you are prepared to implement?</p></li><li><p><em>What do those mechanisms do to the humans who use them?</em> The effect you designed for, or the effect at equilibrium?</p></li></ul><p>Many companies cannot answer these questions honestly. The mission statement says one thing; the incentive structure produces another. The founders may believe in the stated aim; the economics select for something else.</p><p>If autonomy is the commitment, we can ask what design criteria this would entail. One way to proceed is then to articulate those criteria as constraints or tests. We suggest two tests below as minimal starting points:</p><p><em><strong>1. The Transparent Choice Test:</strong></em> Could this product survive in a suitably nearby world of users who fully understand how the product works and could easily select an alternative?</p><p>A product fails this test if its economics depend on the gap between informed choice and actual behavior: it <em>only</em> works because users don&#8217;t fully understand it or can&#8217;t easily leave. Note that the test brackets market power. 
Whether it is easy to switch in the actual world is separable from whether people would switch if they could, in a suitably nearby world. A virtuous product might still be a monopoly; a monopoly might still make a good product. Whether monopolies are bad for other reasons is a separate question.</p><p><em><strong>2. The Candid Aim Test: </strong></em>Is the stated aim key to the best explanation for how the system actually behaves and how its parts are put together?</p><p>If the stated aim is real, reference to it should make the system&#8217;s behavior intelligible and more predictable. It&#8217;s a red flag when a different aim explains more, particularly so if it is an aim that is not just orthogonal but contrary to the stated aim. Consider a supermarket with a cash register that makes subtle addition errors at the threshold of detection, always in the store&#8217;s favor. If this has been going on for years, &#8220;fair dealing&#8221; stops being a plausible aim of the store. Or consider a recommender system that reliably foments polarization. At some point, &#8220;building community&#8221; explains less than &#8220;maximizing clicks.&#8221;</p><p>Neither the Transparent Choice Test nor the Candid Aim Test is passable long-term if the economics pull against the aim.</p><h3>Structure, Not Will</h3><p>Structural drift cannot be resisted at the level of individual will. A founder with integrity, operating within a misconfigured stack, will be selected against by the institutional arrangements that structure their team, product, and market. If you have spent years inside one of these companies advocating for users over engagement and eventually burned out, it wasn&#8217;t a failure of <em>will</em>. The economics select for certain equilibria and not others. You can resist that gradient for a while, but you cannot reverse it through conviction alone.</p><p>The intervention must be structural, not only personal.
This is the work of what we call philosopher-builders: configuring the stack itself.</p><p>A skeptic will ask: if a firm adopts constraints that reduce short-term competitiveness, won&#8217;t the market select it out? Isn&#8217;t anything outside of profit maximization founder self-indulgence? But this assumes profit is already defined before we&#8217;ve chosen what game we&#8217;re playing. The time horizon, the measurement regime, and what gets externalized are not givens. The builder&#8212;the philosopher-builder&#8212;chooses them.</p><p>Some commitment devices reduce competitiveness while others become the competitive advantage. Substack&#8217;s bet on writer ownership creates lock-in through loyalty rather than switching costs. The Bloomberg terminal succeeds not despite its commitment to decision-quality over engagement, but because of it. Patagonia&#8217;s environmental constraints built a brand that commands premium pricing.</p><p>Not every structure survives in every market. Some commitment devices only work with certain capital structures, or in markets where trust is differentially valuable, or with customers who can recognize quality. That&#8217;s precisely why governance, capital philosophy, and culture matter. The economics of the endeavor are not given. They are chosen&#8212;by founders, by investors, by the people who set the constraints within which everyone else optimizes.</p><p>This reframes what it means to work on technology that matters.</p><p>The question is not whether you&#8217;re an engineer or an executive. The question is whether you&#8217;re operating at the level of institutional design going all the way from formulating aims to economic implementation, or only at the level of features. Both matter, but the institutional level in this broad sense is prior. 
It determines what the feature level can achieve.</p><p>An engineer who designs a brilliant recommendation system but ignores the incentive structure it operates within has ceded the most important decisions to someone else. An engineer who understands how funding models shape optimization targets, how metrics become attractors, and how governance locks in or erodes mission, is operating at the level where outcomes are actually determined.</p><p>Every investment thesis contains an implicit theory of what is valuable. Every term sheet is a design document for what will be optimized. Every board seat carries influence over what gets optimized. These decisions ripple outward into mechanisms, and mechanisms ripple outward into the lives of everyone who uses what gets built.</p><p>The question is whether this influence gets exercised deliberately or by default.</p><p><em>Default</em> means optimizing for what&#8217;s measurable, what&#8217;s familiar, what&#8217;s already worked. <em>Default </em>means engagement metrics and growth curves and the patterns that produced returns last cycle.<em> Default</em> means structural drift toward whatever the economics select for, regardless of stated intentions.</p><p><em>Deliberate</em> means asking the three questions. It means applying the Transparent Choice Test and the Candid Aim Test. It means designing economic structures that can sustain aims worth pursuing, and locking those structures in before the drift begins.</p><p>At the present stage of AI development, this framework is not an abstract lens but a live description of the game being played. Frontier systems are expensive to build and are increasingly financed by capital invested on roughly the thesis that the technology will ultimately be applicable to almost every aspect of the economy. In that sense the capital is not buying a product so much as underwriting a general-purpose substrate and hoping to later discover the dominant interfaces through which it captures value.
That option-like thesis makes the &#8220;aim&#8221; unusually plastic: the same underlying model can become tutor, co-worker, companion, bureaucrat, sales engine, surveillance layer, or propaganda instrument. Which of those becomes stable is a property of the institutions, financing models, and culture that wrap it.</p><p>Because the return story is &#8220;almost everything,&#8221; the equilibrium remains underdetermined. Selection pressure is already operating, but often through indirect channels: control of distribution, accumulation of compute, narratives that justify scale, the ability to externalize risk, and the power to set default contracts and defaults-of-use.</p><p>Different capital structures will stabilize different aims. Usage-based pricing tends to reward systems that are instrumentally useful; subscription models reward systems people would still choose under conditions of autonomy more than attention-economy models do; enterprise procurement tends to reward auditability and legible governance; state funding tends to reward control.</p><p>The degree of freedom is real precisely because the equilibrium has not yet fully formed. Decisions made at this pivotal moment by technologists, founders, and investors can still dramatically shape outcomes we care about.</p><p>The attention economy shows what happens when the substrate hardens and aims collapse into a single attractor. If revenue is selling predictable attention to advertisers, then maximizing attention capture becomes the stable aim, engagement proxies become the mechanism, and surveillance plus lock-in becomes the substrate that keeps the loop from breaking. The equilibrium is narrow and self-reinforcing, and the possibility space remains technologically available but economically unselected.</p><p>Frontier AI is not yet locked into that attractor, yet it could inherit it quickly if general-purpose models are primarily deployed as next-generation attention routers. 
If autonomy is the commitment, this is exactly the moment when financing, ownership, and governance choices matter most: before the underdetermined equilibria congeal into infrastructure and the aim becomes a ghost again.</p><p>What got built around radio in the twentieth century shaped what kind of citizens could emerge. What gets built around AI now will do the same, at greater scale and speed.</p><p>The window is still open. Which citizens are you building for?</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[A Letter, Year End 2025]]></title><description><![CDATA[On AI, human autonomy, and keeping the flame of freedom alive]]></description><link>https://blog.cosmos-institute.org/p/a-letter-year-end-2025</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/a-letter-year-end-2025</guid><dc:creator><![CDATA[Brendan McCord]]></dc:creator><pubDate>Fri, 09 Jan 2026 21:11:50 GMT</pubDate><enclosure
url="https://substackcdn.com/image/fetch/$s_!UHe5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Nothing in history has given us more leverage. Nothing has made it easier to stop thinking for ourselves.</p><p>This year I watched my four- and six-year-old flourish at Alpha School, where AI personalizes their learning and frees time for what machines cannot supply: curiosity, character, play. I saw AI systems produce new mathematics, reshape creative work, and hand ordinary people powers that once required armies of engineers.</p><p>And yet I&#8217;ve noticed something in myself I don&#8217;t much like. The readiness to accept an AI&#8217;s first draft of an email, a plan, a decision, because it&#8217;s faster, smoother, and good enough. An invisible waltz in which I take one step and then forget that I am no longer leading.</p><p>In many domains I&#8217;ve welcomed the flow and eagerly entered the new paradigm. In others I&#8217;ve pushed back: the forming of beliefs, the making of judgments, the hard labor of learning to think at all. For the deepest questions, what it means to live well, I&#8217;ve tried to refuse it entirely.</p><p>The more I talked with others this year, the more I understood this was not my private neurosis. Something is shifting. People feel it, even when they cannot name it.</p><p>At our seminars with Oxford, Aspen, Liberty Fund, St. John&#8217;s College, DeepMind, Palantir, and Microsoft, and at our AI for Truth-seeking Symposium with FIRE, a pattern declared itself: the people most anxious about preserving human judgment are often those building the systems. 
They see how capable these tools are becoming, and they are asking questions the rest of us would prefer to postpone.</p><p>At Anthropic&#8217;s headquarters in San Francisco, looking out over the city, co-founder Jack Clark told me he journals more than ever, writing down his thinking before consulting any AI. He knows the outputs are better with the machine. What he wants to know is whether he is still developing. This is the difference between effectiveness and judgment.</p><p>Technology that expands what is possible may narrow <em>who</em> is possible. But it need not.</p><p>Adam Smith <a href="https://blog.cosmos-institute.org/p/the-artificial-spectator">understood</a> both halves of this. You become a good thinker by doing the thinking, badly at first, then less badly. The practice is <em>constitutive</em>. This is the best antidote I know against becoming a <a href="https://blog.cosmos-institute.org/p/the-claude-boys">Claude Boy</a>. And the kind of people who develop through their own judgment are precisely the kind we need if we are to sustain a free society.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UHe5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UHe5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 424w, https://substackcdn.com/image/fetch/$s_!UHe5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 848w, 
https://substackcdn.com/image/fetch/$s_!UHe5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 1272w, https://substackcdn.com/image/fetch/$s_!UHe5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UHe5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png" width="1456" height="586" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:586,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1852541,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/183000652?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UHe5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 424w, 
https://substackcdn.com/image/fetch/$s_!UHe5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 848w, https://substackcdn.com/image/fetch/$s_!UHe5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 1272w, https://substackcdn.com/image/fetch/$s_!UHe5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b673f34-cc88-49d6-a3a7-10455585dafb_1566x630.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line 
x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photos from Cosmos seminars and events in 2025</figcaption></figure></div><p>Twenty-five hundred years ago, Pericles stood before the mothers of the Athenian dead and described what their sons died for: a city where citizens cultivate beauty and wisdom, debate openly before acting, and take responsibility for public life. A free society.</p><p>Since then, the principles of free societies have been transmitted through debates, texts, and institutions. We are entering an era where they must be embodied in code&#8212;or they will become inert. The philosophy-to-law pipelines of old have given way to philosophy-to-code.</p><p>Tocqueville saw the stakes: vigorous self-governance in which citizens grow through freedom, or a softer servitude in which we gradually surrender it for comfort and convenience, becoming, in his phrase, &#8220;a flock of timid and industrious animals.&#8221; The choice is ours to make, while we still remember what it means to choose.</p><p>This is our civilizational moment. It requires people who can translate the principles of a free society into the actual systems and institutions being built.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;c79a1429-8684-4fd4-b7be-f5520fcfc26e&quot;,&quot;duration&quot;:null}"></div><h2><strong>The philosopher-builder</strong></h2><p>We named the <a href="https://blog.cosmos-institute.org/p/the-philosopher-builder">philosopher-builder</a> archetype in July: the kind of person who, like Benjamin Franklin, combines philosophical reflection with practical wisdom and builds institutions that embody their deepest convictions about human flourishing.</p><p>I expected this to resonate with a few hundred people. 
Instead, some of the most impressive founders I know sent notes saying they&#8217;d finally found language for what they were trying to do. Researchers said it changed how they thought about their careers. What I came to understand is that the division between &#8220;thinkers&#8221; and &#8220;doers&#8221; has impoverished both, and many people had been quietly waiting for permission to be whole.</p><p><a href="https://blog.cosmos-institute.org/p/the-philosopher-builder">The philosopher-builder</a> is an answer to a transmission problem. Whereas the great reformers of old wrote pamphlets, today&#8217;s are writing code. They will give the principles of a free society technological expression: what autonomy looks like in API design, what decentralization means for architecture choices, what truth-seeking requires in model development.</p><p>This year <a href="https://x.com/IvanVendrov/status/1995887378909855844/video/1">Ivan Vendrov</a>, <a href="https://www.linkedin.com/posts/z-weinberg_when-my-dear-friend-brendan-mccord-told-me-activity-7401987356440027136-dui6?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAzTBosBCd9xUylHWmiGndwl4t9kTE3fFS8">Zoe Weinberg</a>, <a href="https://x.com/sebkrier/status/1995883232244731942/video/1">S&#233;b Krier</a>, <a href="https://drive.google.com/file/d/1LyoGVhrBYOPGJ-1C5N7ncUs11ZsQH27Q/view?usp=drive_link">Jason Zhao</a>, <a href="https://x.com/komorama/status/1945539408444408072/video/1">Alex Komoroske</a>, <a href="https://x.com/lisawehden/status/1946266915632283975/video/1">Lisa Wehden</a>, and <a href="https://x.com/joelbot3000/status/1945566607914537370/video/1">Joel Lehman</a> all spoke about their work through this lens. We published <a href="https://blog.cosmos-institute.org/p/philosopherbuilder-summer-reads-2025">Summer</a> and <a href="https://blog.cosmos-institute.org/p/philosopher-builder-winter-reads">Winter reading lists</a> drawing on recommendations from across our network. 
And we started to see a community coalesce around the conviction that philosophy and building aren&#8217;t separate activities&#8212;they&#8217;re the same activity, done with different hands.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="http://linkedin.com/posts/z-weinberg_when-my-dear-friend-brendan-mccord-told-me-activity-7401987356440027136-dui6?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAzTBosBCd9xUylHWmiGndwl4t9kTE3fFS8" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Apgj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 424w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 848w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 1272w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Apgj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png" width="987" height="575" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:575,&quot;width&quot;:987,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:805456,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;http://linkedin.com/posts/z-weinberg_when-my-dear-friend-brendan-mccord-told-me-activity-7401987356440027136-dui6?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAzTBosBCd9xUylHWmiGndwl4t9kTE3fFS8&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/183000652?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Apgj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 424w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 848w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 1272w, https://substackcdn.com/image/fetch/$s_!Apgj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a0532ed-7b89-4633-93aa-f064272fe608_987x575.png 1456w" sizes="100vw" loading="lazy"></picture><div 
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em><a href="https://x.com/IvanVendrov/status/1995887378909855844">Ivan</a>, <a href="https://x.com/sebkrier/status/1995883232244731942">Seb</a>, and <a href="http://linkedin.com/posts/z-weinberg_when-my-dear-friend-brendan-mccord-told-me-activity-7401987356440027136-dui6?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAzTBosBCd9xUylHWmiGndwl4t9kTE3fFS8">Zoe</a> talking about their work and the philosopher-builder</em></figcaption></figure></div><h2><strong>What we built this year</strong></h2><p>Fourteen seminars, including with <a 
href="https://blog.cosmos-institute.org/p/2025-oxford-seminar-ai-x-philosophy">Oxford</a>, <a href="https://blog.cosmos-institute.org/p/the-aspen-institute-asks-will-ai">Aspen Institute</a>, <a href="https://blog.cosmos-institute.org/p/reading-list-ai-and-the-future-of">Liberty Fund</a>, <a href="https://blog.cosmos-institute.org/p/5-top-takeaways-ai-and-the-great">St. John&#8217;s College</a>, and Edge, bringing builders into conversation with Hayek and Polanyi on the use of knowledge in society, with Socrates and Mill on truth-seeking, with Smith on moral development.</p><p>On YouTube and other video channels, our interviews with 11 <a href="https://www.youtube.com/playlist?list=PL_xn3B6eWvGvzx2eZibi86mIwpNeohSXs">AI experts</a> and <a href="https://www.youtube.com/playlist?list=PL_xn3B6eWvGtx906TTsBVbXuntP4bNjhG">entrepreneurs</a> on their philosophical influences reached over a million subscribers. On Substack, our long-form essays on AI and philosophy reached over 17,000 readers.</p><p>And voices across our network produced pieces we keep returning to: Tyler Cowen on <a href="https://www.thefp.com/p/ai-will-change-what-it-is-to-be-human">how AI will change what it is to be human</a>, Jason Crawford&#8217;s <a href="https://rootsofprogress.org/manifesto/">Techno-Humanist Manifesto</a>, Caitlin Morris on <a href="https://blog.cosmos-institute.org/p/social-tinkering-why-collaborative">social tinkering</a>, Gavin Leech&#8217;s &#8220;<a href="https://press.stripe.com/scaling">The Scaling Era</a>&#8221; with Dwarkesh Patel, and Jack Clark&#8217;s &#8220;<a href="https://importai.substack.com/p/import-ai-438-cyber-capability-overhang">Silent Sirens, Flashing For Us All</a>.&#8221;</p><p>We grew our fast grants program to <a href="http://cosmosgrants.org/winners">140 builders and researchers</a>. 
Through initiatives including a <a href="https://eternallyradicalidea.com/p/will-ai-kill-our-freedom-to-think">$1M AI for Truth-seeking program</a>, many projects like <a href="https://substack.com/home/post/p-163416127">Campus</a>, <a href="https://szdt.dev/">Szdat</a>, <a href="https://www.kanonic.ai/">Kanonic</a>, and <a href="https://rossmatican.substack.com/p/authorship-ai-a-multi-agent-writing">Authorship</a> began to take shape.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://cosmosgrants.org/winners" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2qER!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 424w, https://substackcdn.com/image/fetch/$s_!2qER!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 848w, https://substackcdn.com/image/fetch/$s_!2qER!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 1272w, https://substackcdn.com/image/fetch/$s_!2qER!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2qER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png" width="1254" height="546" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:546,&quot;width&quot;:1254,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:218860,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://cosmosgrants.org/winners&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/183000652?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2qER!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 424w, https://substackcdn.com/image/fetch/$s_!2qER!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 848w, https://substackcdn.com/image/fetch/$s_!2qER!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 1272w, https://substackcdn.com/image/fetch/$s_!2qER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05603bf-f82c-4fbe-92a8-a65d63d38e31_1254x546.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>New: <a href="https://scholar.google.com/citations?hl=en&amp;user=jFNNMMgAAAAJ">Google Scholar</a> + <a href="http://cosmosgrants.org/winners">v1 public database</a> for Cosmos grantees and fellows</em></figcaption></figure></div><p>We supported 16 fellows working at the intersection of philosophical depth and frontier AI, many of whom did so at <a href="http://hailab.ox.ac.uk">Oxford HAI Lab</a>. 
This led to <a href="https://scholar.google.com/citations?hl=en&amp;user=jFNNMMgAAAAJ">34 research papers</a>, from &#8220;<a href="https://link.springer.com/article/10.1007/s11299-025-00326-z">The Philosophic Turn for AI Agents</a>&#8221; to &#8220;<a href="https://www.full-stack-alignment.ai/">Full Stack AI Alignment</a>&#8221; to &#8220;<a href="https://arxiv.org/abs/2512.02914">Martingale Scores</a>.&#8221;</p><p>Through our incubation fellowship, Samuele Marro launched a new non-profit called the <a href="https://x.com/idai_institute/status/1965059958287810908">Institute for Decentralized AI</a>. We backed 8 philosopher-builders to create new companies that take their ideas to world-changing scale. And together with IHS, we started funding AI tool use for scholars in philosophy and the humanities: researchers like <a href="https://blog.cosmos-institute.org/p/intelligence-environments">Kevin Vallier</a> and <a href="https://x.com/sethlazar/status/2003517007040512414">Seth Lazar</a> integrating these tools into serious intellectual work.</p><p>What matters more than numbers is the community that emerged: people who share a conviction that the principles of a free society must be translated into the systems we&#8217;re building, and that the window for doing so is shorter than most people realize.</p><h2><strong>What I don&#8217;t know</strong></h2><p>The practices I&#8217;ve developed personally feel almost monastic: blank-page journaling before consulting AI, deliberately choosing the harder cognitive path, regular reading groups. Preserving the independence, force, and originality that remain to us seems to require a level of intentionality we often tell ourselves we don&#8217;t have time for.</p><p>I believe the classical liberal tradition has resources for this moment. But translation is hard: principles like truth-seeking and autonomy don&#8217;t map easily onto architecture choices and agent interfaces. 
And even when the direction is clear, building well requires judgment that can only come from practice. The ideas and the formation have to come together.</p><p>What I&#8217;ve come to believe is that this practical wisdom has to be developed in community: through seminars, collaborations, and shared building. And probably through intensive multi-month human formation, which will require new thinking.</p><p>The philosopher-builder isn&#8217;t a solitary figure. The archetype only works if there are institutions that cultivate it. That&#8217;s what we&#8217;re trying to build.</p><h2><strong>Looking forward</strong></h2><p>This year, Harry Law and I are writing a book on what it takes to be a free agent in the AI age. I&#8217;m going to work closely with Philipp Koralus on research at Oxford&#8217;s Human-Centered AI Lab, and deepen connections with collaborators at the top labs as things move fast.</p><p>We are bringing this community together in person. More gatherings, more informal meetups, especially in Austin. And we&#8217;re training the next generation of philosopher-builders through a new format we&#8217;ll announce soon.</p><p>We&#8217;re hiring and will share roles soon. If our mission resonates, <a href="https://cosmosinst.typeform.com/to/RYDheabE?utm_source=xxxxx&amp;utm_medium=xxxxx&amp;utm_campaign=xxxxx&amp;utm_term=xxxxx&amp;utm_content=xxxxx">get in touch</a>.</p><p>The question I keep returning to is simple: What kind of people do we want to become in a world where thinking is optional? The answer will determine what we build.</p><p>The flame of a free society has been passed from debate to debate, text to text, institution to institution for 2,500 years. 
Now it needs to live in code.</p><p>To the donors, fellows, collaborators, and readers who made this year possible: </p><p>Thank you.</p><p>Brendan</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[Faster Horses]]></title><description><![CDATA[Intelligence flows from systems and singletons]]></description><link>https://blog.cosmos-institute.org/p/faster-horses</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/faster-horses</guid><dc:creator><![CDATA[Harry Law]]></dc:creator><pubDate>Fri, 02 Jan 2026 15:03:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uj_w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!uj_w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uj_w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 424w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 848w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 1272w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uj_w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png" width="1456" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uj_w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 424w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 848w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 1272w, https://substackcdn.com/image/fetch/$s_!uj_w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff656b484-85dc-4085-a5d8-1f74cacc3489_1600x1099.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Paolo di Dono, called Uccello, The Hunt in the Forest, c. 1465&#8211;1470</em></figcaption></figure></div><p>&#8220;If I had asked people what they wanted, they would have said faster horses.&#8221; The line, widely circulated but likely apocryphal, is attributed to Henry Ford and stresses the distance between our ability to picture the future and our ability to make it real. It reminds us that technologies loosen the constraints that shaped past expectations, and that deeper shifts usually bring changes in kind as well as in magnitude.</p><p>&#8220;Faster horses&#8221; is a shorthand for folk logic that seems bulletproof at the time but quaint in hindsight. Television as radio with pictures, film as photographed theater, early mobile phones as portable landlines, and the internet as a digital library were all kinds of faster horses. 
They tell us that big swings don&#8217;t often play well with existing categories, and that new language, heuristics, or classifications are often needed to make sense of them.</p><p>Today, many of those wondering about the downstream impact of thinking machines are <a href="https://manifold.markets/ZviMowshowitz/will-we-develop-leopolds-dropin-rem">on the lookout</a> for AI that can function as a &#8220;remote drop-in worker.&#8221; This refers to a system that, in essence, replaces a human employee by doing roughly the same things under the same conditions. Here, the future appears as a more seamless version of the present rather than something that dramatically changes the shape of work.</p><p>The idea flows from the observation that the majority of jobs in the information economy revolve around making computers do what we want. Word processing, desk research, data analysis, creating presentations, running marketing campaigns, and many other tasks are all the end product of keyboard strokes and cursor movements. This is why some long-time AI watchers <a href="https://x.com/deanwball/status/2001068539990696422">reckon</a> Claude Opus 4.5, especially its instantiation within Claude Code, can reasonably be described as an early realization of Artificial General Intelligence (AGI). The same might eventually be true of humanoid robots, especially given they can slot into existing infrastructure without costly redesign, but our focus here is solely on knowledge work.</p><p>As <a href="https://x.com/deepfates/status/2001047747110334516">others</a> have pointed out, the response to a common AGI litmus test (a system that can outperform humans in most economically valuable work) turns on what we categorize as &#8220;economically valuable work.&#8221; If we define that as &#8220;stuff done on a computer,&#8221; then it&#8217;s plausible that one day soon the models will cross that threshold (if Claude Opus 4.5 hasn&#8217;t already). 
And if a model can be said to be generally capable, then the remote drop-in worker shouldn&#8217;t be too far behind.</p><p>Whether a single model can do a job in isolation is a useful question to ask, but it doesn&#8217;t tell us much about how such systems, interacting with many people and agents of their own, might rearrange patterns of coordination and the shared assumptions that guide them. In some ways, the conservative bet is that the drop-in worker is a stronger account of our present than of our future. Technologies that matter rarely honor the roles we assign them. If the future is anything like the past, the drop-in worker may prove to be a faster horse: a story that made sense before the true nature of the agent economy became visible.</p><h3><strong>The Wisdom of the Crowd</strong></h3><p>Traditional accounts of AGI development often describe the emergence of an isolated system capable of completing the vast majority of cognitive tasks, sometimes referred to as a &#8220;singleton.&#8221; 
An alternative scenario, now <a href="https://arxiv.org/pdf/2512.16856">seriously considered</a> by AI developers, imagines that capabilities may be manifested through the coordination of &#8220;sub-AGI individual agents&#8221; with complementary skills and affordances. This scenario concerns an ecology of semi-specialized agents whose combined behavior outstrips anything they could do alone (and adds up to something we could describe as AGI at a high enough level of abstraction).</p><p>You might have a code agent that builds, a negotiation agent that handles scheduling or purchasing, and a compliance agent that checks your work. On top of these sits a manager that breaks goals into subtasks and shunts each to the right agent for the job. We state an objective, the system spins up a network of agents, the agents pass data among themselves, and a synthesis function presents the output of the collective for review.</p><p>Imagine launching a new software feature. The drop-in worker functions like a high-speed freelancer insofar as it writes code, pauses to check for bugs, and writes documentation sequentially within a single stream. It is a linear acceleration of a human workflow. The agent ecology, however, behaves more like a stack of mini-organizations. When the objective is stated, an &#8220;architect agent&#8221; drafts the structure while a &#8220;red team agent&#8221; simultaneously attacks that design to find security flaws before a line of code is written. A &#8220;compliance agent&#8221; cross-references regional data laws in the background. These agents operate in parallel to create an adversarial loop where the output is the sum of many small interactions. The result is an ecosystem capable of the kind of concurrent processing that individual minds, biological or synthetic, may struggle to achieve by themselves.</p><p>But this is only a partial picture. 
The share of human-agent and agent-agent interactions in the economy will increase over time, with agents <a href="https://blog.cosmos-institute.org/p/coasean-bargaining-at-scale">engaging</a> in price negotiation, placing orders with one another, coordinating supply and demand, and even rating each other to assign trustworthiness scores.</p><p>In some ways, the patchwork AGI thesis is another episode in a long-running story about how intelligence behaves at scale. Markets outperform planners because knowledge never exists in concentrated or integrated forms, but as incomplete and contradictory perspectives dispersed across individuals. Hayek reminds us that &#8220;planning&#8221; happens all over the place through individual agents, which is why he distinguishes it from the &#8220;economic planning&#8221; that deals with state-backed forms of enterprise management. The agent economy doesn&#8217;t represent a toss-up between planning and ad hoc action but rather an older question about whether planning ought to emanate from within or from without.</p><p>Aristotle raised the question that still haunts proponents of collective intelligence: can the many, combining their partial virtues, outperform the excellent few? In Book Three of the <em>Politics</em>, he <a href="https://www.loebclassics.com/view/aristotle-politics/1932/pb_LCL264.223.xml?readMode=recto">writes</a>:</p><blockquote><p>&#8220;For it is possible that the many, though not individually good men, yet when they come together may be better, not individually but collectively, than those who are so, just as public dinners to which many contribute are better than those supplied at one man&#8217;s cost; for where there are many, each individual, it may be argued, has some portion of virtue and wisdom, and when they have come together, just as the multitude becomes a single man with many feet and many hands and many senses, so also it becomes one personality as regards the moral and intellectual faculties. 
This is why the general public is a better judge of the works of music and those of the poets, because different men can judge a different part of the performance, and all of them all of it.&#8221;</p></blockquote><p>For Aristotle, groups become smarter when they successfully combine different aspects of competence into a single body. Consider a jury, in which people with different experiences and biases pool their judgment to reach a fairer conclusion than any juror might in isolation. Or England&#8217;s common law, where centuries of small decisions by judges produce a legal order with more adaptability than one made by decree.</p><p>The same is true of Wikipedia, peer review, or <a href="https://www.youtube.com/watch?v=CaVFfqSk_Sc">nimble companies</a>. In each case, the quality of the outcome rests on a kind of distributed deliberation wherein perspectives clash, revise, correct, and eventually settle into a stable state. It echoes the Athenian assembly and the medieval <em>disputatio</em>, both of which treated the good we call judgment as the product of structured disagreement.</p><p>American writer Howard Rheingold coined the term &#8220;smart mobs&#8221; to describe groups of people who are able to organize and coordinate quickly through communication technologies like mobile phones and the internet. The term &#8220;mob&#8221; is deliberately ambivalent, a framing he chose for its darker connotations (he explicitly notes that mob mentality can work for good or ill).</p><p>Rheingold thought smart mobs worked because low-cost communication let individuals share context and act in concert without central control. These groups represented an idealized version of accelerated coordination built from minuscule signals that could be aggregated over huge numbers of agents. 
The mob framing reminds us that coordination capacity increases faster than deliberative capacity, and as a result, the key variable becomes governance of the communications substrate.</p><p>But mobs aren&#8217;t smart by default. A whole set of coordination problems flows from distributed decision-making, from free-riding (where individuals benefit from a group&#8217;s effort without contributing to it) to information cascades (where people copy others&#8217; choices even when their own judgment points elsewhere). Many of us recognize some of these problems when we spend too much time on social media. We see outrage spread through networks faster than facts, and know how easily a crowd can be steered by sentiment rather than the hard work of judgment.</p><p>We could say that groups become dumb when they fail to properly synthesize knowledge, and they become smart when divergence is preserved and integrated. Whether or not we benefit from the wisdom of the crowd often depends on the structures that keep the mob in check. Markets do this with prices and labs with peer review. Rheingold might say that smart mobs emerge when communication structure and incentives reward decentralized coordination rather than herd behavior.</p><h3><strong>Society of Mind</strong></h3><p>In the 1980s, the AI researcher Marvin Minsky wrote about what he called the &#8220;society of mind.&#8221; What he meant was that seemingly unified intelligence is a loose federation of smaller processes, each narrow, each fallible, yet together capable of producing something that looks like coherent thought given enough altitude. 
For Minsky, intelligence emerges from many mindless &#8220;agents&#8221; coordinated in special ways, with the mind employing something like a computational and explanatory strategy whose power is a product of messiness, cross-connection, coordination, and resolution.</p><p>Today&#8217;s AI models demonstrate unified intelligence at two levels: as a byproduct of statistical learning and in the way models are housed within larger constellations that we refer to as &#8220;systems.&#8221; Transformers likely generalize because they compress huge corpora into representations that let them improvise solutions on the fly. Intelligence is in some sense a property of compression plus scale, an analog of Aristotle&#8217;s crowd insofar as it concerns many partial signals integrated into a single effective whole. As for the <a href="https://www.learningfromexamples.com/p/academics-need-to-take-ai-seriously">systemization</a> of models, we can view each as a constellation of individual expert functions like tooling or multi-modal functionality.</p><p>But why stop there? If we accept that AI, like all intelligence, benefits from the interactions between discrete units, it follows that its capability should also be treated as a property of a larger constellation in which many systems operate together. Variance creates productive tension as models surface alternative interpretations and explore distinct solution paths. When those paths are combined through orchestration layers, tool use, or various other kinds of agent frameworks, the result is a system that searches the problem space more effectively than a lone model. Once multi-model systems interact with one another &#8212; coordinating, passing intermediate results, or checking each other&#8217;s claims &#8212; a kind of higher-order intelligence bubbles up from the sum of interactions across several layers. 
More powerful models are great, but superior ecologies are better.</p><p>People each hold fragments of truth, much of it tacit and hard to articulate, which is why spontaneous order tends to get the better of best-laid plans. Polanyi might remind us that a drop-in worker presumes that competence is formalizable into explicit tasks and checklists. His work tells us that competence lives partly in the realm of tacit knowledge, and that &#8220;dropped-in&#8221; workers will face the same context problems faced by central planners the world over.</p><p>Of course, recent work in AI development does typically try to provide models with the context they need to work effectively (specifically through the use of reinforcement learning techniques to make models good at human work in human settings). We might also say that, even if multi-agent systems matter internally for model cohesion, their deployment could still take the form of a remote-worker analog. Things like permissions, accountability, compliance, budgeting, and change management favor inserting agents into existing workflows as opposed to redesigning them from the ground up.</p><p>These objections are useful, but they don&#8217;t allow us to skirt the core problem with the &#8220;remote drop-in worker&#8221; metaphor: that it treats intelligence as <em>solely</em> a property of individuals. It presumes the unit of analysis is the solitary agent carrying out tasks one after another, when everything we know about complex work suggests otherwise. Real capability comes from the knots of relationships, feedback loops, constraints, and opportunities that bind us together. First within models, then as agent systems, then eventually as agent-agent systems. 
Collective intelligence is shaped by how information moves through a system and how the residue of experience accumulates across many small decisions made by each of us.</p><p>The remote drop-in worker may prove to be a transitory moment at best and a category error at worst, one that treats AI as an incremental addition to familiar workflows rather than a force that will reshape the nature of those workflows. That is &#8220;faster horses&#8221; thinking. We&#8217;re projecting today&#8217;s limitations onto tomorrow&#8217;s world and overlooking the fact that new capabilities alter the constraints that make what happens today seem natural. More accurate accounts tend to lead somewhere else, in a form we often only recognize with the benefit of hindsight.</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item><item><title><![CDATA[Philosopher-Builder Winter Reads]]></title><description><![CDATA[Recommendations from Matt Clifford, Greg Lukianoff, Virginia Postrel, S&#233;b 
Krier and others in the Cosmos network]]></description><link>https://blog.cosmos-institute.org/p/philosopher-builder-winter-reads</link><guid isPermaLink="false">https://blog.cosmos-institute.org/p/philosopher-builder-winter-reads</guid><dc:creator><![CDATA[Cosmos Institute]]></dc:creator><pubDate>Fri, 19 Dec 2025 15:03:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5a09fee6-2dc6-45a2-81a1-adaaf6185cb6_990x546.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Many of you loved our <a href="https://blog.cosmos-institute.org/p/philosopherbuilder-summer-reads-2025">Summer Reading List</a>.</p><p>And so, with Christmas fast approaching, we asked 12 more of the sharpest minds we know for their philosopher-builder recommendations for you to settle down with over the holidays.</p><p>From &#8216;how thinking emerges&#8217; to &#8216;Universal Robots&#8217; to &#8216;LLMs as the death of the author,&#8217; here are 12 picks from top AI entrepreneurs and thinkers to kickstart your ideas for 2026.</p><p>Read on for reflections from each recommender, plus a bonus recommendation from our Editorial Lead, Harry Law.</p><div class="captioned-image-container"><figure><a 
class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5Iza!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5Iza!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 424w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 848w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 1272w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5Iza!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png" width="1456" height="1891" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1891,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:14555908,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/180976600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5Iza!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 424w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 848w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 1272w, https://substackcdn.com/image/fetch/$s_!5Iza!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77c7811-bb83-448a-bb75-6ddd8ad47813_4320x5612.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><h3><strong>1. 
The Invention of Science</strong></h3><p><strong>by David Wootton</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iobV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iobV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!iobV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!iobV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!iobV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iobV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg" width="263" height="397.8819969742814" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:661,&quot;resizeWidth&quot;:263,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Invention of Science: A New History of the Scientific Revolution&quot;,&quot;title&quot;:&quot;The Invention of Science: A New History of the Scientific Revolution&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Invention of Science: A New History of the Scientific Revolution" title="The Invention of Science: A New History of the Scientific Revolution" srcset="https://substackcdn.com/image/fetch/$s_!iobV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!iobV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!iobV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!iobV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f870e1-73e6-4f1d-8a25-8ea098e38eeb_661x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>It&#8217;s a readable and staggeringly erudite work of intellectual history. I&#8217;ve read it once, listened to it once, and have embarked on a second reading.</p><p>Before the discovery of America, Wootton argues convincingly, Europeans lacked the concept of discovery. No language but Portuguese even had an adequate term. European intellectuals assumed that any new invention or idea must be a recovery of something previously known but lost or forgotten. 
(Never before reading this book had I realized just how weird the Aristotelian picture of our planet was and thus how disruptive the voyages of discovery were to its factual claims.)</p><p>On my second reading, I was struck by Wootton&#8217;s description of Diderot: &#8220;Diderot had one great advantage over us: graduating from the Sorbonne in 1732, he had been educated in the world of Aristotelian philosophy. He knew how shocking the destruction of that world had been, for he had experienced it at first hand. From a bird&#8217;s-eye view&#8212;the historian&#8217;s view&#8212;the Scientific Revolution is a long, slow process, beginning with Tycho Brahe and ending with Newton. But for the individuals caught up in it&#8212;for Galileo, Hooke, Boyle and their colleagues&#8212;it represents a series of sudden, urgent transformations.&#8221;</p><p>As we anticipate sudden, urgent transformations from the deployment of powerful AI, <em><a href="https://www.amazon.com/Invention-Science-History-Scientific-Revolution/dp/0061759538">The Invention of Science</a></em> offers a deep dive into a parallel heady, disruptive process.</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Virginia Postrel&quot;,&quot;id&quot;:1666060,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd33be26b-792d-41af-ad2d-173221f5e907_406x512.jpeg&quot;,&quot;uuid&quot;:&quot;ad1295d9-17c8-4d0b-b5fb-d9bbb79390be&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Author, a columnist for Works in Progress, and an Abundance Institute fellow. She and Charles C. Mann will release a podcast series on the history of everyday technologies in early 2026.</em></p></blockquote><div><hr></div><h3><strong>2. 
Vehicles</strong></h3><p><strong>by Valentino Braitenberg</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OEPB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OEPB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OEPB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg" width="263" height="400.30441400304414" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:657,&quot;resizeWidth&quot;:263,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Vehicles &#8211; Experiments in Synthetic Psychology: Amazon.co.uk: Braitenberg:  9780262521123: Books&quot;,&quot;title&quot;:&quot;Vehicles &#8211; Experiments in Synthetic Psychology: Amazon.co.uk: Braitenberg:  9780262521123: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Vehicles &#8211; Experiments in Synthetic Psychology: Amazon.co.uk: Braitenberg:  9780262521123: Books" title="Vehicles &#8211; Experiments in Synthetic Psychology: Amazon.co.uk: Braitenberg:  9780262521123: Books" srcset="https://substackcdn.com/image/fetch/$s_!OEPB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OEPB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7e189a-984c-4e5e-aff4-ffc915499ef1_657x1000.jpeg 1456w" sizes="100vw" 
loading="lazy"></picture></div></a></figure></div><blockquote><p>Now over 40 years old and long predating modern AI, <em><a href="https://www.amazon.com/Vehicles-Experiments-Psychology-Valentino-Braitenberg/dp/0262521121/ref=sr_1_1?crid=TUETLFW3K44S&amp;dib=eyJ2IjoiMSJ9.9SIPuc92cHd_8cCiA4n0F45Fof79YK20RC43XtyqQn8.DMeB_NEcT8aityu1XlUPdEtCHfBGpw-O5UcDV7384Ko&amp;dib_tag=se&amp;keywords=vehicles+braitenburg&amp;qid=1765135625&amp;sprefix=vehicles+brai%2Caps%2C101&amp;sr=8-1">Vehicles</a></em> challenges us to think about the relationship between behaviour and intelligence. 
</p><p>Through a series of thought experiments, Braitenberg shows us that many things that look like &#8220;thinking&#8221; can emerge from simple, deterministic rules. As we increasingly interact with machines that (look like they) think, having good mental models of where complex behaviour comes from will be an essential component of epistemic hygiene!<br><br><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Matt Clifford&quot;,&quot;id&quot;:866453,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/369ab4b6-76f7-4d50-9d04-21a48b67a947_800x800.jpeg&quot;,&quot;uuid&quot;:&quot;57f2419c-935a-4d82-952e-f3217fe0cfd5&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Co-Founder of Entrepreneur First and Chair of the Advanced Research + Invention Agency (ARIA).</em></p></blockquote><div><hr></div><h3><strong>3. The Knowledge Machine</strong></h3><p><strong>by Michael Strevens</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RuVJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RuVJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RuVJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!RuVJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RuVJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RuVJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg" width="263" height="400" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:789,&quot;resizeWidth&quot;:263,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Knowledge Machine &#8211; How Irrationality Created Modern Science:  Amazon.co.uk: Strevens, Michael: 9781631491375: Books&quot;,&quot;title&quot;:&quot;The Knowledge Machine &#8211; How Irrationality Created Modern Science:  Amazon.co.uk: Strevens, Michael: 9781631491375: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Knowledge Machine &#8211; How Irrationality Created Modern Science:  Amazon.co.uk: Strevens, Michael: 9781631491375: Books" title="The Knowledge Machine &#8211; How Irrationality Created Modern Science:  Amazon.co.uk: Strevens, Michael: 9781631491375: Books" 
srcset="https://substackcdn.com/image/fetch/$s_!RuVJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RuVJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RuVJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RuVJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae240fcd-007e-42f4-a6c0-2aa6fb1862f0_789x1200.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>I am glad I read Michael Strevens&#8217;s <em><a href="https://www.amazon.com/Knowledge-Machine-Irrationality-Created-Science/dp/1631491377">The Knowledge Machine</a></em>: How Irrationality Created Modern Science. I bristled at some parts of it, but I did appreciate its clarity. </p><p>Strevens writes in a stripped-down way about what makes modern science different: his &#8220;iron rule&#8221; that we&#8217;re supposed to bracket off things like beauty, morality, and metaphysics and let only publicly checkable evidence into our arguments.</p><p>I&#8217;m not persuaded that this is in any way &#8220;irrational&#8221; (hell, Hume did more damage with rationality than that), and I think we should all probably be rereading Karl Popper&#8217;s <em><a href="https://www.amazon.com/Open-Society-Enemies-Princeton-Classics/dp/0691210845/ref=sr_1_1?crid=2Z185POC9YZ3U&amp;dib=eyJ2IjoiMSJ9.43aOu79JO8Qsjgh0IUMaIMyjWbDLxzAsG61BI41aNlU-jKglrvzwHmfsezrHs1H2zMZh6Ki0VVObB4BpnzwLxahGh-Wb0Z1NUOJ-BAYetVhaS2BCPH8xc4x6d2Ba4tO2uthmYCvaQg99f9PLamIGvZgZ5KBVxgez9MstbexEart7wQQIO_6VXwCGTPfPRKE3w0bS8j4Tp2a09cxUnoC0XuQSnXrQY8Gr5MUrnVbmr40.8h7Ul6TRdR1Zj67uDk-6z4jmewr-s6Aieo66EKXROow&amp;dib_tag=se&amp;keywords=karl+popper+the+open+society+and+its+enemies&amp;nsdOptOutParam=true&amp;qid=1765135512&amp;sprefix=karl+popper+the+op%2Caps%2C117&amp;sr=8-1">The Open Society and Its Enemies</a></em> if we want the full-throated defense of liberal science. 
For how we actually build better decision- and knowledge-making systems at the institutional level, I&#8217;d also send people back to Montesquieu and <em><a href="https://guides.loc.gov/federalist-papers/full-text">The Federalist Papers</a></em>, which treat checks, balances, and structured disagreement as epistemic tools as much as political ones. But <em>The Knowledge Machine</em> is a lucid, accessible, and genuinely interesting thought experiment. For anyone who cares about how we know what we know (especially technologists building new &#8220;knowledge machines&#8221; of their own), it&#8217;s well worth a read&#8212;ideally with you arguing with it in the margins.<br><br><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Greg Lukianoff&quot;,&quot;id&quot;:4128062,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!LmOD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc350f817-9e22-4e92-ab30-308fe4a41ea6_2212x3319.jpeg&quot;,&quot;uuid&quot;:&quot;948c54ea-fd4b-4581-92f8-a7f93f641142&quot;}" data-component-name="MentionToDOM"></span>,</strong><br><em>President &amp; CEO at the Foundation for Individual Rights and Expression (FIRE). Author of <a href="https://www.amazon.co.uk/Coddling-American-Mind-Intentions-Generation/dp/0735224897">The Coddling of the American Mind</a>.</em></p></blockquote><div><hr></div><h3><strong>4. 
The Listening Society</strong></h3><p><strong>by Hanzi Freinacht</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RJEe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RJEe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RJEe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RJEe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!RJEe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RJEe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg" width="260" height="401.2345679012346" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:324,&quot;resizeWidth&quot;:260,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Listening Society: A Metamodern Guide to Politics, Book One: Volume 1  (Metamodern Guides): Amazon.co.uk: Freinacht, Hanzi: 9788799973903: Books&quot;,&quot;title&quot;:&quot;The Listening Society: A Metamodern Guide to Politics, Book One: Volume 1  (Metamodern Guides): Amazon.co.uk: Freinacht, Hanzi: 9788799973903: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Listening Society: A Metamodern Guide to Politics, Book One: Volume 1  (Metamodern Guides): Amazon.co.uk: Freinacht, Hanzi: 9788799973903: Books" title="The Listening Society: A Metamodern Guide to Politics, Book One: Volume 1  (Metamodern Guides): Amazon.co.uk: Freinacht, Hanzi: 9788799973903: Books" srcset="https://substackcdn.com/image/fetch/$s_!RJEe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!RJEe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!RJEe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!RJEe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62795f72-730e-475a-ac95-c4a9aa22944c_324x500.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>For transformative AI to help humans flourish, much depends upon the wisdom of those designing and deploying it. Yet most technologists are blindly running on obsolescent ideas while firmly believing they are not &#8212; captured by unifaceted ideologies about progress or power. 
<em><a href="https://www.amazon.com/Listening-Society-Metamodern-Politics-Guides/dp/B099Y96ZBL/ref=sr_1_1?crid=116BK9RJGL1DI&amp;dib=eyJ2IjoiMSJ9.8z4i6pKwzp_Tcdf7EPI3Ug.u_DIKNOEViJQhBFR8XgjBd5ihbvS2qUntD9Y_qLSYIg&amp;dib_tag=se&amp;keywords=the+listening+society+by+hanzi+freinacht&amp;qid=1765135473&amp;sprefix=the+listening+society%2Caps%2C118&amp;sr=8-1">The Listening Society</a></em> proposes a philosophy of flourishing that can meet the complexity and nuance of the world. It&#8217;s one of the most impactful books I&#8217;ve ever read. Even if you don&#8217;t like it, at the very least it will provoke new thoughts.</p><p><strong>Joel Lehman,<br></strong><em>Former OpenAI co-team lead on open-endedness. Author of <a href="https://www.amazon.co.uk/Why-Greatness-Cannot-Planned-Objective/dp/3319155237">Why Greatness Cannot Be Planned</a>. Cosmos Fellow.</em></p></blockquote><div><hr></div><h3><strong>5. What Is Political Philosophy?</strong></h3><p><strong>by Leo Strauss</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JMPl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JMPl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JMPl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!JMPl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JMPl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JMPl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg" width="259" height="400.2453488372093" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1329,&quot;width&quot;:860,&quot;resizeWidth&quot;:259,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;What is Political Philosophy? And Other Studies, Strauss&quot;,&quot;title&quot;:&quot;What is Political Philosophy? And Other Studies, Strauss&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="What is Political Philosophy? And Other Studies, Strauss" title="What is Political Philosophy? 
And Other Studies, Strauss" srcset="https://substackcdn.com/image/fetch/$s_!JMPl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JMPl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JMPl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JMPl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7070598e-9941-447a-ac98-68153270be5b_860x1329.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>In <em><a href="https://www.amazon.com/What-Political-Philosophy-Other-Studies/dp/0226777138/ref=sr_1_1?crid=3HEP5VGHC1IH8&amp;dib=eyJ2IjoiMSJ9.CVmcq4y_ZFqnflUipKmHiDGY56mIUpFqt_K-Z9e6EcHGjHj071QN20LucGBJIEps.GRrYSfuUnJQisNPfuOh-Ghx7UpCd3AaBySDvMLZQq40&amp;dib_tag=se&amp;keywords=what+is+political+philosophy+leo+strauss&amp;qid=1765135445&amp;sprefix=what+is+political+phi%2Caps%2C120&amp;sr=8-1">What Is Political Philosophy?</a></em>, Strauss highlights how modern thought has replaced key classical concepts with new ones - most notably substituting the modern notion of <em>responsibility</em> for the classical idea of <em>virtue</em>.</p><p>The implication of this shift is that we now frame our behavior and moral standards in ways that make them easier to satisfy, even as we erode the substance of the ideals they once represented, diminishing the demanding excellence at the core of virtue.</p><p>I would encourage our generation of technologists to examine our industry&#8217;s terminology in light of this provocation. 
When we invoke words like <em>courage</em> or <em>freedom</em>, what work are they doing for us - and do they still call us to excellence, or serve to comfort us with a veneer of virtue?</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Lisa Wehden&quot;,&quot;id&quot;:1090965,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3eb0fe30-3fd3-419f-bdb6-6830c0468d7d_636x636.jpeg&quot;,&quot;uuid&quot;:&quot;fc994af1-7ddf-43a5-8764-bf36c5877f25&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Founder and</em> <em>CEO at <a href="https://www.plymouthstreet.com/">Plymouth Street</a>. Former President of the Oxford Union.</em></p></blockquote><div><hr></div><h3><strong>6. R.U.R.</strong></h3><p><strong>by Karel &#268;apek</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SSjE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SSjE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SSjE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!SSjE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!SSjE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SSjE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg" width="258" height="401.86915887850466" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:642,&quot;resizeWidth&quot;:258,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;R.U.R. (Rossum's Universal Robots) (Penguin Classics): Amazon.co.uk: Capek,  Karel, Klima, Ivan, Novack-Jones, Claudia: 9780141182087: Books&quot;,&quot;title&quot;:&quot;R.U.R. (Rossum's Universal Robots) (Penguin Classics): Amazon.co.uk: Capek,  Karel, Klima, Ivan, Novack-Jones, Claudia: 9780141182087: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="R.U.R. (Rossum's Universal Robots) (Penguin Classics): Amazon.co.uk: Capek,  Karel, Klima, Ivan, Novack-Jones, Claudia: 9780141182087: Books" title="R.U.R. 
(Rossum's Universal Robots) (Penguin Classics): Amazon.co.uk: Capek,  Karel, Klima, Ivan, Novack-Jones, Claudia: 9780141182087: Books" srcset="https://substackcdn.com/image/fetch/$s_!SSjE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SSjE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!SSjE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!SSjE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d4ebdb2-5917-44d2-9872-fa078f6dc0f0_642x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>Karel &#268;apek&#8217;s 1920 <em><a href="https://www.amazon.com/dp/0141182083/?bestFormat=true&amp;k=r.u.r&amp;ref_=nb_sb_ss_w_scx-ent-bk-ww_k0_1_6_de&amp;crid=N7YJMJ0JM8D4&amp;sprefix=r.u.r.">R.U.R.</a></em>, which introduced the word &#8220;robot&#8221; in its modern sense, has food for thought on almost every aspect of our current technological landscape. There&#8217;s a scientist who built robots because he wanted to play God, and his nephew, a founder, who just wanted to make money. There&#8217;s a General Manager who believes his product will liberate humans from the drudgery of work, and an engineer who loves his job. There&#8217;s a humanitarian who worries about the robots&#8217; souls, while widespread infertility decimates the human population. In the background of it all, there&#8217;s the question: what is it, really, to be human? And what, if anything, is that humanity worth?<br><br><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Harvey Lederman&quot;,&quot;id&quot;:16665441,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fef11fa-af09-4b07-ab6f-01e8b755a35c_287x287.jpeg&quot;,&quot;uuid&quot;:&quot;b03ee65b-2177-4266-bfaa-c2f5d9d874c8&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Professor of Philosophy at UT Austin. 
Writer of <a href="https://scottaaronson.blog/?p=9030">ChatGPT and the Meaning of Life</a>.</em></p></blockquote><div><hr></div><h3><strong>7. Aspiration</strong></h3><p><strong>by Agnes Callard</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_bUT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_bUT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 424w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 848w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 1272w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_bUT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png" width="265" height="400.76354679802955" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:921,&quot;width&quot;:609,&quot;resizeWidth&quot;:265,&quot;bytes&quot;:361248,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/180976600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!_bUT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 424w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 848w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 1272w, https://substackcdn.com/image/fetch/$s_!_bUT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e0e32c6-e140-46fe-9911-3498295afdac_609x921.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p><em><a href="https://www.amazon.com/Aspiration-Agency-Becoming-Agnes-Callard/dp/0190085142/ref=tmm_pap_swatch_0">Aspiration</a></em> is a thoughtful book, especially useful as an indirect and unintended commentary on much discourse about AI alignment. This discourse often assumes what Callard calls a &#8220;decision-theoretic&#8221; model of values as fixed and unchanging. 
Instead, Callard argues that value change, not just accidental but actively sought out, is a core part of how human values work.</p><p><em>(Rudolf also recommended <a href="https://www.amazon.com/Liberty-Incorporating-Four-Essays/dp/019924989X/">Liberty</a> by Isaiah Berlin, particularly the essays on J.S. Mill)</em></p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Rudolf Laine&quot;,&quot;id&quot;:46405634,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F816cef70-50a0-4954-a8ce-8f712e1248e8_460x460.png&quot;,&quot;uuid&quot;:&quot;3e4aec59-c77d-4a2c-a49a-a126e362b90e&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Co-founder of Workshop Labs, ML Engineer, and Writer at <a href="https://www.nosetgauge.com/">No Set Gauge</a>.</em></p></blockquote><div><hr></div><h3>8. 
A Pattern Language</h3><p><strong>by Christopher Alexander, Murray Silverstein, and Sara Ishikawa</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yp7d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yp7d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yp7d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg" width="263" height="401.5267175572519" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:655,&quot;resizeWidth&quot;:263,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;A Pattern Language: Towns, Buildings, Construction: 2 (Center for  Environmental Structure Series): Amazon.co.uk: Alexander, Christopher: ...&quot;,&quot;title&quot;:&quot;A Pattern Language: Towns, Buildings, Construction: 2 (Center for  Environmental Structure Series): Amazon.co.uk: Alexander, Christopher: ...&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A Pattern Language: Towns, Buildings, Construction: 2 (Center for  Environmental Structure Series): Amazon.co.uk: Alexander, Christopher: ..." title="A Pattern Language: Towns, Buildings, Construction: 2 (Center for  Environmental Structure Series): Amazon.co.uk: Alexander, Christopher: ..." 
srcset="https://substackcdn.com/image/fetch/$s_!yp7d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yp7d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4676f306-3217-4307-90d6-8fdf48d7b942_655x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p><em><a href="https://www.amazon.com/Pattern-Language-Buildings-Construction-Environmental/dp/B0DD9GR3RZ/ref=sr_1_1?crid=2G71SMMG7GWM5&amp;dib=eyJ2IjoiMSJ9.1DvDrYKmCXKu2-xzSP5Y_3ypE0Hz_e9D1cxHV3QN2T4NvctzT5wXSu_hFE2kye3lfDDoE4H14J4BZQRo_S9UMpuDtaGpInGtJgDOm8jHJXtTfH_H8fXb-QXvKDgWQpKmwAKt4AQYR00NfbFHBouAqHnYaOdCiIHpTK2LVb0MmW3gWSEms1kv3dLQlKOei6PVstXiSYuY6uADGay513Dt_y8_8DBujEtcQC05IbqJSPM.jEmhzlpHDmwXBtKT2Qn1Vj4TDXvixIw-hB09jUB66d0&amp;dib_tag=se&amp;keywords=A+Pattern+Language&amp;qid=1765777266&amp;s=books&amp;sprefix=a+pattern+language%2Cstripbooks%2C226&amp;sr=1-1">A Pattern Language</a></em> might be on the shelf of every architect, but it&#8217;s more broadly about understanding the conditions that allow human life to flourish. </p><p>How do ideas spread? What creates a sense of belonging? Why don&#8217;t people dance in the streets anymore? The authors identify a series of &#8220;patterns&#8221; (which also come across beautifully as a love letter to design details) that create places where people feel alive and connected.</p><p>Reading between the lines, and looking past the sometimes dated architecture, these patterns are responses to deep human needs: enabling shifts between privacy and spontaneous encounter, and exploring the role of different &#8220;distances&#8221; of engagement. They offer a vocabulary not just for thinking about physical spaces but for what we&#8217;re trying to create when we design for human thriving. 
This is less of a &#8220;read all at once&#8221; and more of a &#8220;thumb through like recipes&#8221; book for me.</p><p><em>(Caitlin also recommended <a href="https://www.amazon.co.uk/Speculative-Everything-Design-Fiction-Dreaming/dp/0262019841">Speculative Everything: Design, Fiction, and Social Dreaming</a> by Anthony Dunne and Fiona Raby)</em></p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Caitlin Morris&quot;,&quot;id&quot;:140421181,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0384c03-63b0-41a9-83de-2cc5ea8c4836_2311x2311.jpeg&quot;,&quot;uuid&quot;:&quot;245ee884-e64f-48e4-a3bb-b6ae890605ae&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>MIT Media Lab Researcher and Educator who designs technology for <a href="https://blog.cosmos-institute.org/p/social-tinkering-why-collaborative">curiosity-led social learning</a>. Cosmos Grantee.</em></p></blockquote><div><hr></div><h3><strong>9. 
The Conscious Mind</strong></h3><p><strong>by David Chalmers</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!loLZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!loLZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 424w, https://substackcdn.com/image/fetch/$s_!loLZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 848w, https://substackcdn.com/image/fetch/$s_!loLZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!loLZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!loLZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg" width="277" height="397.99725274725273" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:277,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Conscious Mind In Search of a Fundamental Theory (Philosophy of Mind) :  Chalmers, David J.: Amazon.co.uk: Books&quot;,&quot;title&quot;:&quot;The Conscious Mind In Search of a Fundamental Theory (Philosophy of Mind) :  Chalmers, David J.: Amazon.co.uk: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Conscious Mind In Search of a Fundamental Theory (Philosophy of Mind) :  Chalmers, David J.: Amazon.co.uk: Books" title="The Conscious Mind In Search of a Fundamental Theory (Philosophy of Mind) :  Chalmers, David J.: Amazon.co.uk: Books" srcset="https://substackcdn.com/image/fetch/$s_!loLZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 424w, https://substackcdn.com/image/fetch/$s_!loLZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 848w, https://substackcdn.com/image/fetch/$s_!loLZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!loLZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffceb19b5-1044-4132-97c2-559525f3dadc_1600x2299.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>The hard problem of consciousness is easy to state: why is there something it is like to be you? </p><p>Neuroscience can explain which brain states correlate with which experiences. 
But the hard problem of consciousness comes from recognizing that correlation isn&#8217;t identity, as we can presumably have the correlated brain states without the corresponding conscious states. We can have brains without the stuff of experience (qualia). </p><p>In <em><a href="https://www.amazon.com/Conscious-Mind-Search-Fundamental-Philosophy/dp/0195117891">The Conscious Mind</a></em> Chalmers works through this puzzle with famous clarity. The payoff for technologists is striking. It can help readers see how, for all we know, AI could be conscious <em>even if </em>the mind is immaterial.</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Kevin Vallier&quot;,&quot;id&quot;:9266472,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/00a6225e-c537-43dd-9d01-b559a8dbedf2_460x460.jpeg&quot;,&quot;uuid&quot;:&quot;ac342a66-35b1-4f1a-b728-dfec5fd4e4c3&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Professor of Philosophy &amp; Director of Research at the Institute for American Constitutional Thought and Leadership, University of Toledo.</em></p></blockquote><div><hr></div><h3><strong>10. 
Language Machines</strong></h3><p><strong>by Leif Weatherby</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6jUr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6jUr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6jUr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6jUr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6jUr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6jUr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg" width="260" height="402.4767801857585" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:646,&quot;resizeWidth&quot;:260,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Language Machines: Cultural AI and the End of Remainder Humanism  (Posthumanities): Amazon.co.uk: Weatherby, Leif: 9781517919313: Books&quot;,&quot;title&quot;:&quot;Language Machines: Cultural AI and the End of Remainder Humanism  (Posthumanities): Amazon.co.uk: Weatherby, Leif: 9781517919313: Books&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Language Machines: Cultural AI and the End of Remainder Humanism  (Posthumanities): Amazon.co.uk: Weatherby, Leif: 9781517919313: Books" title="Language Machines: Cultural AI and the End of Remainder Humanism  (Posthumanities): Amazon.co.uk: Weatherby, Leif: 9781517919313: Books" srcset="https://substackcdn.com/image/fetch/$s_!6jUr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!6jUr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6jUr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!6jUr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9baccad6-f788-4e8a-9c47-77f2fdfdc444_646x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>Leif Weatherby&#8217;s <em><a href="https://www.amazon.com/Language-Machines-Cultural-Remainder-Posthumanities/dp/1517919320">Language Machines</a></em> argues that AI is not agentic intelligence but the death of the author made concrete. 
Structuralist theorists like Jakobson and Lacan argued that language was a generative system of signs, which existed independently of the ground truths that it described. Weatherby marries their ideas to those of AI researchers such as Claude Shannon, Walter Pitts, and Warren McCulloch. The conclusion: LLMs are not individual intelligence or anything like it, but language as a system, made capable of speaking.</p><p><em>(Henry also recommended <a href="https://www.amazon.co.uk/Unaccountability-Machine-Systems-Terrible-Decisions/dp/1788169557/">The Unaccountability Machine</a> by Dan Davies)</em></p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Henry Farrell&quot;,&quot;id&quot;:557668,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!h_nA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee3c2786-85cb-4bbe-bbb9-acc7812d95f6_1279x721.png&quot;,&quot;uuid&quot;:&quot;5dbd8ca0-9fde-41da-bcb4-98b9a6e096d8&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>SNF Agora Professor of International Affairs at Johns Hopkins University. Writer at <a href="https://www.programmablemutter.com/">Programmable Mutter</a>.</em></p></blockquote><div><hr></div><h3><strong>11. 
The Beginning of Infinity</strong></h3><p><strong>by David Deutsch</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HPjV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HPjV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HPjV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg" width="261" height="400.6421703296703" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2235,&quot;width&quot;:1456,&quot;resizeWidth&quot;:261,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Beginning of Infinity: Explanations That Transform the World:  Amazon.co.uk: Deutsch, David: 9780140278163: Books&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Beginning of Infinity: Explanations That Transform the World:  Amazon.co.uk: Deutsch, David: 9780140278163: Books" title="The Beginning of Infinity: Explanations That Transform the World:  Amazon.co.uk: Deutsch, David: 9780140278163: Books" srcset="https://substackcdn.com/image/fetch/$s_!HPjV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HPjV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02689e10-54a0-4098-9a4e-8681cb34cad8_1524x2339.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div 
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p><em><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359">The Beginning of Infinity</a></em> reshaped how I think about progress, knowledge, and what&#8217;s actually possible. It makes optimism rigorous. </p><p>Deutsch argues that knowledge creation can continually solve problems and transcend apparent limits, challenging us to reject fatalism about constraints we mistake for destiny.</p><p>Progress isn&#8217;t guaranteed, but it&#8217;s achievable. What drives it? Conjecture, criticism, and relentless error-correction, not certainty or dogma. 
The idea that humans are &#8220;universal explainers&#8221; capable of understanding anything in principle feels both humbling and empowering; we&#8217;re limited only by what we haven&#8217;t yet figured out.</p><p>Yet his most useful insight centres on explanation itself: good explanations are hard to vary while still accounting for what we observe, like a key cut to fit one lock. Change one element and the explanation collapses. The clarity of distinguishing real understanding from mere description or correlation matters because it shows us where to focus. We&#8217;re building knowledge that compounds.</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Azeem Azhar&quot;,&quot;id&quot;:710379,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/09961c12-4209-4296-8a12-0762a41809a3_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;4887af7f-fd7d-48ce-b210-f22dd0e58a30&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Founder of <a href="https://www.exponentialview.co/">Exponential View</a> and investor, author of <a href="https://www.amazon.co.uk/Exponential-Age-Digital-Revolution-Rewire/dp/1635769094">The Exponential Age</a>.</em></p></blockquote><div><hr></div><h3><strong>12. 
How the World Became Rich</strong></h3><p><strong>by Mark Koyama and Jared Rubin</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Lvq-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Lvq-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Lvq-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg" width="264" height="400" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:660,&quot;resizeWidth&quot;:264,&quot;bytes&quot;:58418,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.cosmos-institute.org/i/180976600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Lvq-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Lvq-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd68c1ee-524e-47cc-8cb1-e714f73ddb33_660x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>The tech world has rediscovered big questions about growth: why did the industrial revolution happen where it did, what role did states play, and why do some transformations stick while others don&#8217;t? 
</p><p><em><a href="https://www.amazon.co.uk/How-World-Became-Rich-Historical/dp/1509540237">How The World Became Rich</a></em> explores the competing explanations without forcing a single answer &#8212; a useful corrective for anyone whose model of progress starts and ends in the lab, or the state.</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;S&#233;b Krier&quot;,&quot;id&quot;:837581,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!1Occ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7e226c3a-6a49-454a-94e5-c1eb6777ea57_400x400.jpeg&quot;,&quot;uuid&quot;:&quot;c1083b90-40db-4278-aa8c-7cb1c3312653&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Frontier Policy Development Lead at Google DeepMind.</em></p></blockquote><div><hr></div><h3><strong>Bonus: The Turing Test Argument</strong></h3><p><strong>by Bernardo Gon&#231;alves</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7zbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7zbF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7zbF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!7zbF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7zbF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7zbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg" width="260" height="400" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:650,&quot;resizeWidth&quot;:260,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Turing Test Argument - Gon&#231;alves, Bernardo | 9781032291574 |  Amazon.com.au | Books&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Turing Test Argument - Gon&#231;alves, Bernardo | 9781032291574 |  Amazon.com.au | Books" title="The Turing Test Argument - Gon&#231;alves, Bernardo | 9781032291574 |  Amazon.com.au | Books" srcset="https://substackcdn.com/image/fetch/$s_!7zbF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!7zbF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7zbF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7zbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d7bfd25-be44-41c7-a8c5-2ce476342b5c_650x1000.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><blockquote><p>In <em><a href="https://www.amazon.com/Argument-Routledge-Studies-Twentieth-Century-Philosophy-ebook/dp/B0CNYCX2YK">The Turing Test Argument</a></em> Gon&#231;alves explores one of the best-known and most misunderstood episodes in the history and philosophy of AI: Alan Turing&#8217;s imitation game.</p><p>This little book argues that the test was a response to swirling controversy surrounding Turing&#8217;s debates with physicist Douglas Hartree, chemist and philosopher Michael Polanyi, and neurosurgeon Geoffrey Jefferson.</p><p>On Gon&#231;alves&#8217;s reading, Turing&#8217;s emphasis on learning and adaptability countered Hartree&#8217;s view of computers as calculation engines, tasks like the composition of poetry answered Jefferson&#8217;s demand for creative abilities, and the choice of open-ended conversation (rather than rule-based games like chess) confronted Polanyi&#8217;s concern that human knowledge couldn&#8217;t be formalised by machines.
</p><p>It&#8217;s a short but essential read for anyone interested in deeply understanding a moment that never seems to stop making headlines!</p><p><strong><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Harry Law&quot;,&quot;id&quot;:10612241,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!yasj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b1f870a-3e2e-47c4-b05f-d7a69b3c58e7_1728x1728.jpeg&quot;,&quot;uuid&quot;:&quot;8aa88649-664a-4b55-9376-87807a3db50d&quot;}" data-component-name="MentionToDOM"></span>,<br></strong><em>Cosmos Editorial Lead, former DeepMind Policy Research, and Cambridge University Researcher.</em></p></blockquote><div><hr></div><p>Thanks to Virginia, Matt, Greg, Joel, Lisa, Harvey, Rudolf, Caitlin, Kevin, Henry, Azeem, S&#233;b, and Harry for contributing to this list.<br><br>And let us know your book recommendations for the holidays in the comments!</p><div><hr></div><p><em><a href="https://cosmos-institute.org/">Cosmos Institute</a> is the Academy for Philosopher-Builders, technologists building AI for human flourishing. 
We run fellowships, fund fast prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.</em></p>]]></content:encoded></item></channel></rss>