Assuming frontier large language models, together with their multimodal and agentic extensions, are trained to effective saturation on an exhaustive corpus representing the totality of digitized human knowledge, including all scientific publications, books, patents, archival records, cultural artifacts, and recorded conversations, will these systems be capable of transcending the statistical manifold of their training distribution to autonomously discover, validate, and iteratively expand novel knowledge beyond the current human frontier?
More precisely, through architectures enabling iterative self-refinement, tool-augmented agentic workflows, formal verification frameworks (e.g., Lean theorem provers or physics/chemistry simulators), multi-agent scientific collaboration, and scalable inference-time compute (e.g., test-time reasoning chains or reinforcement learning from verifiable rewards), can such systems generate original hypotheses, mathematical proofs, experimental designs, or empirical insights that were previously unknown to humanity? Or, conversely, will inherent architectural and data constraints, such as interpolation within the training distribution and model collapse under recursive synthetic data, cause capabilities to plateau at or near the limits of extant human knowledge?
How can we motivate our children to learn at school? Should we try to motivate them, or rather find a way out of the system (e.g., encouraging them to read more classical books instead of what school assigns nowadays)?
Hi Brendan, congrats on 20k and thanks for taking questions!
You argue that philosopher-builders need explicit moral commitments to avoid optimizing for the wrong things. But your three pillars (truth-seeking, autonomy, decentralization) are themselves a normative framework that not everyone shares. China's AI strategy is still coherent, explicit, and philosophical; it just starts from different premises. So how do you argue for your philosophy without just replacing one set of defaults with another? What makes Cosmos's values the right foundation rather than just a well-packaged preference?
Excited to hear your thoughts!
Do you believe Jesus rose from the dead?
Most of the current work on ‘AI, collective epistemic structures and decision-making’ focuses on filling gaps: more participants, faster information exchange, more efficient decision-making. This will help with many problems, but certainly not with the most complex ones, because it just accelerates the practical execution of the same thought styles that led to the problems. Therefore: How can we use future AI to foster new thought styles that are currently not supported by our existing social structures?
Could we please discuss how my Center intends to integrate all the tools you are building into our AI Fluency course and sandbox?
As we approach AGI, how will we know if and when it is sentient? And how will we react to a real sentient non-human in our midst? This is a much more likely scenario than ETs contacting us.
If science is about growing and learning, why does the community gatekeep? I have repeatedly tried to get my paper read, and because I can't afford to publish the damn thing, my hypothesis gets buried. It seems times still haven't changed. It's still as stupid as it was for Copernicus.
FMK: Paul Kingsnorth, Socrates, Claude?