AMA with Brendan McCord
Cosmos hits 20,000 subscribers. Ask me anything.
This week, we crossed 20,000 subscribers on Substack. Thank you to everyone who has read, shared, and engaged with our work.
We’ve written about everything from Claude Boys to Coasean bargaining to the perils of liberal nudging. Reading the comments has often been as rewarding as writing the posts. To mark the milestone, I’ll be answering your questions on Wednesday, April 15.
Drop your question in the comments below and upvote the ones you want answered. I’ll start responding next week and I’ll try to take as many as I can.
There are a few things I’ve been thinking about that we haven’t written about yet. This seems like the right place to start.
Ask about Cosmos, human autonomy, AI x philosophy, or what people in our network are building. I especially welcome questions that are hard, that relate to how we approach AI as builders, or that challenge our assumptions.
- Brendan



Assuming frontier large language models, together with their multimodal and agentic extensions, are trained to effective saturation on an exhaustive corpus representing the totality of digitized human knowledge, including all scientific publications, books, patents, archival records, cultural artifacts, and recorded conversations, will these systems be capable of transcending the statistical manifold of their training distribution to autonomously discover, validate, and iteratively expand novel knowledge beyond the current human frontier?
More precisely, through architectures enabling iterative self-refinement, tool-augmented agentic workflows, formal verification frameworks (e.g., Lean theorem provers or physics/chemistry simulators), multi-agent scientific collaboration, and scalable inference-time compute (e.g., test-time reasoning chains or reinforcement learning from verifiable rewards), can such systems generate original hypotheses, mathematical proofs, experimental designs, or empirical insights that were previously unknown to humanity? Or, conversely, will inherent architectural and data constraints (such as interpolation within the training distribution or model collapse under recursive synthetic data) cause capabilities to plateau at or near the limits of extant human knowledge?
How can we motivate our children to learn at school? Should we try to motivate them within the system, or rather find a way around it (e.g., encouraging them to read classical books instead of what school assigns nowadays)?