It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of an adult human? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which arose only in humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS parsimoniously allows for the emergence of consciousness from nothing more than further evolutionary development of the brain areas responsible for these functions. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Great essay, Alex: clearly written, well-balanced, comprehensive. Best piece I’ve seen Cosmos publish so far.
Selfishly, I’d like to see more written about functionalism vs. biological naturalism. I’m of the view that functionalism is actually under-specified and that, when fully detailed, it collapses into substrate dependence. If metabolism is essential to intelligence, and metabolism involves the self-preserving transformation of matter, then the chemical details are probably going to matter too.
Using AI to help with some home design issues, I received a response that I was requesting images too fast and, to be fair to all, needed to wait my turn. The hint of Christian guilt was intriguing, so I told it to stop using a Judeo-Christian framework. I then received the response couched in a sort of postmodern rhetoric. It was at that point that I wondered: what philosophy will AI be guided by? Will it be something like Bentham's utilitarianism, or Marxism? I asked it to frame the necessity of waiting for images from a Hegelian versus a Kantian perspective. It ended up appending a table of Aristotle, Heidegger, and others. How will this all play out? Will it create its own meta-ethics?
I think it was another Cosmos contributor who raised the concern about function being a deciding factor: we consider people who have Down syndrome to be fully human even though their cognitive faculties are diminished.
Whether AI automates existing jobs, creates new ones, or makes workers more productive will depend on policy choices (e.g., the US taxes labor more heavily than capital, which encourages firms to replace workers). What a great insight. In Santa Ana, CA, the city council voted to require more human checkers and fewer self-checkout lanes. I can't help but wonder if a better solution would be a tax incentive that reduces the cost of hiring labor and increases the cost of capital.
The closing observation is the wedge. The correlations you notice, between functionalism and LLM-discovery optimism, between biological naturalism and scaling skepticism, between precaution and orthogonality, are not temperamental. They are architectural. All five axes share one unexamined assumption, and once it is named, the binary framing on each axis stops looking like a disagreement and starts looking like a symptom.
The shared assumption is that the system class under discussion is trained-weight pattern-matching at scale. Grant it, and your answers across the five axes are nearly determined.
The way out of the framing is not to pick a side on each axis. It is to specify constitutive conditions that produce verdicts regardless of which side you pick. Three conditions, all ontology-agnostic. Does the system carry its own temporal history? Does it maintain its own structural form under perturbation? Does it bear its own operational consequences? These are testable. They commit to neither functionalism nor biological naturalism, neither orthogonality nor alignment-by-default, neither precaution nor adaptive experimentation. They sit one layer below your five axes and decide each axis without engaging it.
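To make "testable" concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: the probe functions and the StatelessLLM stand-in are invented for illustration, not drawn from any real system or library.

```python
# Hypothetical sketch: the three conditions as executable predicates.
# StatelessLLM is a toy stand-in for "trained-weight pattern-matching at scale."
from dataclasses import dataclass, field


@dataclass
class StatelessLLM:
    weights_hash: str = "frozen"                  # static weights during inference
    history: list = field(default_factory=list)   # wiped every session

    def respond(self, prompt: str) -> str:
        return f"echo: {prompt}"                  # no state carried forward

    def reset(self) -> None:
        self.history.clear()                      # fresh context window


def carries_temporal_history(sys) -> bool:
    """Condition 1: does behavior depend on the system's own past?"""
    sys.reset()
    first = sys.respond("probe")
    sys.respond("perturbation")
    second = sys.respond("probe")
    return first != second                        # same probe, different answer?


def maintains_structural_form(sys) -> bool:
    """Condition 2: after damage, does the system restore its own form?"""
    sys.weights_hash = "perturbed"                # damage the structure
    sys.respond("probe")                          # give it a chance to self-repair
    return sys.weights_hash == "frozen"           # toy system has no repair loop


def bears_operational_consequences(sys) -> bool:
    """Condition 3: do the system's actions feed back into its architecture?"""
    before = sys.weights_hash
    sys.respond("act in the world")
    return sys.weights_hash != before             # did acting change the system?


for name, test in [
    ("temporal history", carries_temporal_history),
    ("structural form", maintains_structural_form),
    ("operational consequences", bears_operational_consequences),
]:
    print(name, test(StatelessLLM()))             # expected: all False, by construction
```

The point of the sketch is only that each condition cashes out as a behavioral check rather than a metaphysical commitment; a system built to pass these probes would need persistent state, a self-repair loop, and action-driven changes to its own structure.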
Run them across your five disagreements.
On consciousness, the functionalist and the biological naturalist are arguing about whether the substrate matters. The conditions ask whether the system carries its own temporal history, maintains its own structural form, and bears its own operational consequences. Current LLMs fail all three by construction (fresh context windows, static weights during inference, no feedback loop from action to architecture). The substrate question never has to be asked. The conditions return negative on the system class actually deployed, regardless of whether silicon "could in principle" be conscious.
On governance, the precaution-versus-adaptive debate is downstream of condition three. A system that does not bear its own operational consequences requires external governance by construction; a system that does bear them carries internal governance by construction. The argument shifts from "should we regulate scaling" to "what governs the system from inside, and how is that audited."
On alignment, conditions one and two settle the orthogonality framing. Values in a system that carries its own temporal history and maintains structural form under perturbation are not external payload, because they are structurally entangled with the operators that constitute the system. Orthogonality is true for systems lacking those conditions. Alignment-by-default is approximately true for systems whose pattern-absorbed values weakly satisfy condition one. The conditions specify when each side is right.
On discovery, condition one is the Deutsch test stated structurally. A system that carries its own temporal history can pose conjectures whose lineage is traceable. A system without it can only interpolate. Deutsch-versus-Amodei resolves on whether the system satisfies condition one, not on whether it scales further within the same architecture.
On labor, condition three is the discriminant. A system that bears its own operational consequences operationalizes work in a way that is neither pure replacement (the human-in-the-loop verification that augmentation requires is closed by construction) nor pure augmentation (the operationalization closes loops that augmentation leaves open). The replace/augment debate is about LLM deployment. The conditions describe a deployment class the debate does not contain.
So your closing line is precise, and I want to push it one click. Each side in each of your five debates is seeing what its architectural commitment lets it see, because the commitment determines what counts as a structure to recognize and what counts as sand. The correlation across axes is the same eye looking at five different scenes. The conditions sit beneath the eye. They do not adjudicate the debates by picking winners. They retire the framing by producing verdicts the framing was not built to produce.
The work my own group is doing sits at the level of the conditions, not at the level of the five axes. None of the ten positions you describe captures it, and that is not a complaint about the map. The map is accurate to the territory it covers. The conditions specify a territory the map does not yet contain, and that territory is being built. Whether it earns the bounds it claims is an empirical question with falsifiers on the public record.
Your closer about castles and sand is structural realism applied to discourse itself. That is the move philosophy is going to converge on, whether the AI debate notices or not.