I may never have seen a 1500-sided symmetric die land in my life, but it is possible to have a model of the world that spits out a 1/1500 chance of it landing on a specific side. Would this count as a bad epistemic tool too?
Well… arguably yes.
A 1500-sided die might turn out to be, effectively, just a sphere; the precise manufacture of the die, the conditions in which it rolls, and the means by which a result is recorded all change the odds. Neat probabilistic reasoning might not scale into the physical world in practice.
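To make that concrete, here's a minimal sketch (the 1% manufacturing-noise figure is an invented assumption, purely for illustration): even tiny, unmodelled asymmetries in the faces move the true per-face probability off the neat 1/1500.

```python
import random

random.seed(0)
N = 1500
ideal = 1 / N

# Hypothetical bias model: each face's effective "weight" gets
# ~1% relative manufacturing noise instead of being exactly equal.
weights = [1 + random.gauss(0, 0.01) for _ in range(N)]
total = sum(weights)
p_face0 = weights[0] / total  # true probability of one specific face

print(f"ideal:  {ideal:.6f}")
print(f"actual: {p_face0:.6f}  ({(p_face0 - ideal) / ideal:+.1%} off)")
```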
Which is exactly the danger with quantifying reasoning under uncertainty! It's not that you can't spit out a number given priors; it's that the priors in and of themselves might be blind to certain information, sometimes in a way that was baked into the premise you had in the first place.
Do you have a better guess? If you don't have a model for out-of-model error (do my model's errors make the outcome more or less likely?), then 1/1500 is still the best guess. Sure, priors can be blind to certain information, and that blindness is baked into the premises. But do you in fact have an objection to the premises, or do you just say "well, you might be wrong, right???"
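Here's a rough sketch of that point (the 30% model-failure rate is an invented assumption): even if the symmetry model fails some of the time, as long as you have no reason to think the failure favours this particular face, the expected probability stays at 1/1500.

```python
import random

random.seed(1)
N = 1500
trials = 200_000
hits = 0
for _ in range(trials):
    if random.random() < 0.3:
        # Model wrong: take the extreme case where the die is fully
        # biased toward one face, but we have no idea which face,
        # so it is uniformly random from our point of view.
        favoured = random.randrange(N)
        hits += (favoured == 0)
    else:
        # Model right: a fair 1500-sided roll.
        hits += (random.randrange(N) == 0)

print(f"estimate: {hits / trials:.6f}  vs 1/1500 = {1/N:.6f}")
```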
The piece is strongest when it reframes the debate as a distinction between risk and Knightian uncertainty and insists that AI safety research should focus on falsifiable mechanisms rather than grand numerical predictions.
Brilliant
"This is the gap OCO was built to fill. Not P(doom) the falsifiable proximate data that makes P(doom) a less necessary fiction. Contested signal. Longitudinal. Consequence bearing. The map of where humans actually converge and where they genuinely don't after rigorous filtration. Worth a conversation?"