I appreciate the holistic view reflected in this article; it is an area of deep research and development for me. I find that the algorithmic mechanisms referred to are game-theoretic more than technological: they are essentially voting algorithms, which democracy and social media share in common. LLMs come pre-equipped with this "un-holistic" mechanism design and its faulty dialectics. By un-holistic I mean internally competitive. So I interpret your article as an appeal to better games, human to human and human to AI.
We can think about this through cognitive psychology as a matter of whether technologies afford empowerment-motivated causal learning. Pure reinforcement learning favors control over variability and results in strategies that optimize before arriving at a deeper understanding. Pure epistemic learning favors high information input and gets stuck in noise traps seeking stimulus. Human learning balances control and variability by seeking to expand its influence over an increasing range of contingencies.
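The three regimes above can be caricatured with a multi-armed bandit. This is a toy sketch, not anything from the article: "pure reinforcement" greedily exploits its current best estimate, "pure epistemic" samples uniformly forever (the noise trap), and a balanced strategy mostly exploits while continuing to probe the contingency space. All numbers and strategy names are illustrative.

```python
import random

def run_bandit(strategy, true_means, steps=5000, seed=0):
    """Simulate a bandit under a given action-selection strategy;
    return the average reward per step."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        arm = strategy(rng, estimates)
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental mean update of the value estimate for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# "Pure reinforcement": always exploit the current best estimate.
greedy = lambda rng, est: est.index(max(est))
# "Pure epistemic": sample uniformly, forever chasing stimulus.
uniform = lambda rng, est: rng.randrange(len(est))
# Balanced: mostly exploit, but keep expanding influence over contingencies.
eps_greedy = lambda rng, est: (rng.randrange(len(est))
                               if rng.random() < 0.1
                               else est.index(max(est)))

means = [0.2, 0.5, 0.9]
for name, s in [("greedy", greedy), ("uniform", uniform), ("eps-greedy", eps_greedy)]:
    print(name, round(run_bandit(s, means), 3))
```

The uniform sampler's average converges to the mean of the arms, while the balanced strategy approaches the best arm's payoff; the greedy one optimizes before it understands, which is the failure mode described above.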
Right now most digital technologies are structured either for low-dimensional optimization or intermittent reward; neither approach cultivates open-ended learning of causal relationships (e.g., social media and chatbots are slot machines whose recommender policies you can't tinker with to explore the search space of possible alternate outputs). Redesigning them for "autonomy" means increasing the dimensionality of available inputs so that you have more choice over what kinds of algorithms you want to employ. Do you want perfect consistency in highly constrained contexts ("Turn left in 500 meters"), high levels of surprise (a random walk), or the freedom to change the perspectives offered by algorithms as you engage in goal-directed foraging (and, crucially, the freedom to change your goals)?
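As a minimal sketch of what user-selectable policies could look like, here are the three modes named above over a toy item space. Everything here is hypothetical, including the item names and tags; the point is only that the user picks the algorithm and can change the goal mid-stream.

```python
import random

# Hypothetical item space: keys are item names, tags describe content.
ITEMS = {
    "route-step": {"tags": {"navigation"}},
    "wild-essay": {"tags": {"surprise", "ideas"}},
    "field-note": {"tags": {"ideas", "ecology"}},
    "trail-map":  {"tags": {"navigation", "ecology"}},
}

def consistent(rng, goal):
    # Perfect consistency in a highly constrained context:
    # always the single expected answer.
    return "route-step"

def random_walk(rng, goal):
    # High surprise: ignore the goal entirely.
    return rng.choice(sorted(ITEMS))

def foraging(rng, goal):
    # Goal-directed foraging: sample only items matching the user's
    # current goal, which the user is free to change between calls.
    pool = [k for k, v in sorted(ITEMS.items()) if goal in v["tags"]]
    return rng.choice(pool) if pool else random_walk(rng, goal)

rng = random.Random(1)
policy, goal = foraging, "ideas"   # the user chooses the algorithm...
print(policy(rng, goal))
goal = "ecology"                   # ...and is free to change the goal
print(policy(rng, goal))
```

The design choice is that the policy and the goal are both first-class inputs the user controls, rather than parameters hidden inside an opaque recommender.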
This is an invitation to others to work together on fixing this "reverse alignment problem"—which has implications everywhere from how to choose the right questions in scientific research to how to make the social web a place that is actually good for human beings.
How old are you?
https://kevinhaylett.substack.com/p/the-dangers-of-ai-may-not-be-what
https://michaelgarfield.substack.com/p/foraging