Discussion about this post

Neural Foundry:

Really appreciate this list bringing together the history of science with AI alignment questions. The Braitenberg recommendation particularly caught my attention because it gets at something most alignment discussions miss: the gap between complex behavior and actual intentionality. I ran into this a lot when designing feedback loops for ML systems, where emergent patterns looked "smart" but were just artifacts of the training data. Strevens' iron rule about bracketing aesthetics from evidence feels especially relevant now that we're building systems that can generate both convincingly. I wonder if that clean separation ever actually existed, or if it was always more aspirational than real.

