Spot on. AI is going to test the entire human conception of critical thought and intellect.
In a good way. Humans never got around to figuring it out.
My father and an older friend keep pushing me to just use AI. "You will become addicted" is the cheerful nudging. No, thank you. The last vestige of my freedom is in my mind, where I try to make sense of the world. I've used legal databases for decades to research and write, so I am not a Luddite. But I am still in charge. The errors or wins in persuasion are mine. It is like going to the gym and lifting weights. Building muscle with weights cannot be outsourced, at least not yet.
There is another ethical issue: the proprietary theft of information. A young-to-middle-aged tech bro was on a panel discussing AI. With a cheerful smile, he admonished everyone to just enjoy the ride, sort of like the captain of a plane before takeoff. An older author of books wisely pointed out that what he was asking for was acceptance of tech companies making massive profits off the labor of others, for free. As Heidegger and others warned 70+ years ago, we are the natural resource being consumed.
The workout example is telling. People used to get exercise as a matter of daily living. Now they have to dedicate time and attention to it.
Very true. You run inside on a machine and watch a video of doing it outside. As an interesting side note, a professor specializing in health and aging (I can't remember her name) did a study of two groups of professional housekeepers. One group was told that everything they did was a form of exercise; the other was a control group instructed to carry on as normal. The former group lost weight and gained muscle mass, while the latter stayed the same.
Love the metaphor of 'civilizational model collapse'. Marcuse warned that technology and mass media were creating "one-dimensional" people. AI is supercharging that trend.
This feels exactly right as a diagnosis of epistemic drift — but the deeper rupture is no longer epistemic at all.
What breaks first may not be truth-tracking, but custodianship. Judgment still appears to occur, conclusions still circulate, and responsibility is still socially assigned — yet there is no longer a stable locus that can be held to account for the judgment itself.
At that point, the problem isn’t that humans stop judging, but that judgment persists without an agent who can stand behind it. Verification fails not only because reality drops out of the loop, but because no one remains answerable when it does.
The risk, then, is less 'model collapse' than the normalization of action, belief, and consequence without ownership — a condition that looks epistemic on the surface, but is structurally about agency.
I think about and observe my toddler a lot. He discovers the world through embodied knowledge practices: tasting, touching, grappling, and so on.
As adults, our knowledge acquisition practices do rely more on abstract, propositional text. But much of what we do is experiential and participatory. We should move away from reductive views of knowledge and toward holistic, embodied ones. I think this allows us to resist the slop.
AI is a new technology, supercharged, yes, but simply a new technology. When Google aggregates an answer to a question I ask, I check the links provided. I check the source material and the sources used by those sources. Sometimes I don't, because I am being lazy and already know the answer but don't want to spend time looking through the tomes in my library for the minute details.
Books. Books are a form of technology. So are magazines, documentaries, Fox News, PBS, Britannica, Wikipedia, Michael Moore, etc. Darwin checked Lyell's geological findings. We check AI.
The most important skill taught in schools, besides reading and arithmetic, is critical thinking: parsing fact from fiction and lie from truth; observation, conjecture, hypothesis, theory, analysis, and evaluation.
YES. I heartily agree that humans are too willing to hand the keys of the car of life over to AI; it won't be long before AI will have "done" more living than most humans and is telling humans what it's like to really live. Here's to being one of the humans who refuses to stop touching reality, having embodied experiences, and wrestling with information to know what I really think.
This made me think: if we cannot stop the AI train, then maybe we should flip the existing epistemic infrastructure.
If AI can produce knowledge regarded as truth in seconds or minutes, what if we built new mechanisms that ensure this knowledge follows a process forcing it to go head-to-head with reality, or rather with real human experience?
If the knowledge passes the reality test and proves that it can anchor to something outside the AI itself, then and only then does it become truth.
Essentially, a kind of bridge between AI content and reality... Anyway, just a thought, because if I am being honest, I don't even know exactly how that would work.
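To make that bridge slightly less vague, here is one purely hypothetical sketch in Python of what a "reality test" gate could look like: an AI-generated claim stays unverified until it is anchored to evidence outside the model and signed off by a named human. Every name in it (Claim, reality_check, the checker string) is made up for illustration and does not refer to any existing tool.

```python
# Hypothetical sketch of a "reality test" gate for AI-generated claims.
# Nothing here is an existing system; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                          # the AI-generated statement
    evidence: list[str] = field(default_factory=list)  # anchors outside the model: sources, observations
    verified: bool = False                             # flips only after a reality check

def reality_check(claim: Claim, checker: str) -> Claim:
    """Promote a claim only when a named human checker has confirmed
    at least one piece of evidence that exists outside the AI itself."""
    if claim.evidence and checker:
        claim.verified = True
    return claim

# Usage: AI output stays "just output" until it is anchored and signed off.
claim = Claim("Framing daily work as exercise changed health outcomes.")
claim.evidence.append("a published study the reviewer actually read")
claim = reality_check(claim, checker="a named human reviewer")
print("counts as knowledge" if claim.verified else "still just AI output")
```

The point is only the shape of the gate: nothing gets promoted to "truth" on the model's say-so alone.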
Exactly right. We are the inheritors of an astounding civilization built over millennia. Let's be thoughtful stewards and not amuse ourselves to idiocracy...
Maybe it’s not knowledge until it’s verified. Duh. Eliminate the confusion and the belief nonsense.
Interesting questions in the last paragraph. Can I draft an essay that addresses them and send it to you?