What Will You Build For: Rune Kvist
Anthropic's first commercial hire now wants to underwrite superintelligence.
Every builder’s first duty is philosophical: to decide what they should build for. This series interviews founders who are building towards their vision of the human good.
Today’s guest is Rune Kvist. Rune is co-founder and CEO at The Artificial Intelligence Underwriting Company (AIUC), which is backed by Cosmos Ventures.
Prior to founding AIUC, Rune was the first product/commercial hire at Anthropic. He is a graduate of the Philosophy, Politics, and Economics program at the University of Oxford.
1. What are the core questions or beliefs driving your work?
I work on underwriting frontier AI systems through insurance and standards.
My core beliefs include:
AI is a very, very, very big deal.
AI progress and security are mutually supportive, not opposing forces. Security creates confidence that drives adoption and funds progress.
The pace of AI progress will outstrip state capacity to deal with hard questions around AI. It’s an open question whether the EU AI Act will take full effect before we reach superhuman AI.
Market mechanisms, properly designed, can create incentives for AI progress that is both secure and fast. In short: it’s in our own hands.
History holds many governance lessons: from Benjamin Franklin’s fire insurance company, which successfully reduced house fires in Philadelphia, to car insurers using crash tests to make driving safer, to the credit rating agency Moody’s showing in 2008 how private governance can fail.
2. What future are you building for?
We are building for a future where humans deploy AI - systems, agents, robots - with confidence.
This is a future where risks become sufficiently legible, manageable and insurable that we in fact deploy AI tutors to every child, AI doctors to every patient, AI financial advisors to every pensioner.
It’s a future where humanity is at the steering wheel.
3. What commonly held belief in the tech community do you believe is wrong?
The doomer versus accelerationist dichotomy.
Progress requires security. An accident could cause catastrophic damage and threaten America’s lead in AI. Nuclear power’s promise of abundant energy died for a generation after Three Mile Island and Chernobyl. The same will happen if AI causes major harm—courts and voters will shut AI progress down.
The commonly held belief that “moving fast” means cutting corners on safety is backward. In reality, security powers progress. ChatGPT was created using RLHF, an alignment technique that made systems more steerable—and thus more useful and adoptable. Banks and hospitals won’t deploy agents at scale without trustworthy assurance and insurance. The infrastructure for confident adoption is what enables velocity. We don’t need to choose between speed and security.
4. What are your main philosophical influences?
My worldview is influenced by thinkers who fleshed out how markets can solve coordination problems.
Benjamin Franklin created the practical blueprint for private governance. When Philadelphia kept burning down, he started America’s first fire insurance company in 1752. Insurance created financial incentives to invest in fire prevention. Out of that grew building codes and formal fire inspections. This blueprint of insurance, standards and audits shows up again and again throughout history.
Friedrich Hayek showed how distributed knowledge and price signals adapt to change faster than bureaucratic planning. Standards and insurance markets process information about risk in real time. Regulation freezes yesterday’s knowledge into law, which is why market mechanisms are especially needed for AI, given the pace of innovation.
Florence Nightingale showed how creating visibility into the root causes of societal challenges can shape our institutions. Her statistical methods made sanitation failures clear and helped create modern nursing. We hope that pushing the frontier of understanding AI agents, and using price mechanisms to create common knowledge about risks, can dramatically shape AI progress.
5. What does human flourishing mean to you?
Authentic relationships and authentic work.
We should protect our capacity to get hurt, to try and fail, to grasp the world and make authentic choices, to deliberate on what the good life looks like for each of us.
6. What’s one book you’ve read recently that you’d recommend?
Against the Gods: The Remarkable Story of Risk by Peter L. Bernstein
It’s worth studying how humanity has tamed risk in the past, from the days when storms were thought to be the wrath of the gods through to the modern financial infrastructure for dealing with risk.
7. What’s your most irrational belief?
That I need to create something extraordinarily valuable with my work to be loved.
A part of me believes it deeply, and it drives a lot of my actions - and yet, I can look around and see for myself that it’s not true.
8. What’s the most interesting tab you have open right now?
The Wikipedia article on the Price-Anderson Act, which shows how nuclear insurance combined private coverage ($16B) with a government backstop for catastrophic scenarios.
It’s the blueprint for how insurance can enable high-risk, high-value technology.

9. Who is one writer or thinker today who you think is underrated?
Gillian Hadfield. She’s a professor at Berkeley studying AI governance and institutional design, but her work deserves far wider attention in both tech and policy circles.
Her core insight is that effective regulation of AI won’t come from traditional command-and-control bureaucracy, but instead from what she calls “regulatory markets.” These are competitive private systems for setting standards, conducting audits, and pricing risk, with the government providing oversight and liability rules that make the market work.
Thanks to Rune for answering “What Will You Build For?”
To get in touch, find him on X or at AIUC.
Cosmos Institute is the Academy for Philosopher-Builders, technologists building AI for human flourishing. We run fellowships, fund prototypes, and host seminars with institutions like Oxford, Aspen Institute, and Liberty Fund.
If you’re someone who thinks deeply, builds deliberately, and cares about the future AI is shaping—join the Cosmos network.
To nominate someone for “What Will You Build For?” leave a comment below, or send us a DM.