Excellent treatment of a subject that is just beginning to get the attention it deserves. One angle that might warrant further consideration is how the human in these intelligence environments exerts their own voice and cognitive framework upon the machine. AI models are malleable if the user has sufficient material to work with: over time, through thoughtful interaction, the model's responses can be bent to align more closely with the human collaborator's voice and framing. As that alignment tightens, the human-AI system can become more productive without the atrophy costs you raise. Gotta do something about those m-dashes though 😉
Love to see Kevin here; he has been posting very thoughtfully on Facebook about AI capabilities over the past year or two (despite a torrent of critical/unreflective comments from our academic friends and acquaintances).
I've been using AI to assist with a post that I'm working on.
One thing I've found very helpful is presenting it as a speech. Giving a talk forces you to go over the lines again and again. It's very easy to nod along and let the AI slip something past you; something you'd never say yourself. But if you're disciplined and force yourself to practise the talk again and again, you're much more likely to notice when the wording is subtly off (even if it's just a touch more certain than you are personally).
However, if you aren't practising in this way, I expect AI assistance - at least at present - to subtly degrade the quality of your arguments/reasoning.
Nope, I ain't buying this. Social media has made us dumber and worse at debate. It's really at the heart of the erosion of democracies.
https://www.forkingpaths.co/p/the-democratization-of-information
Throw AI over that and it's not just lipstick on a pig, but a greater dissociation of our minds from the intellectual paths we would otherwise struggle through without chutes-and-ladders shortcuts.
Being more verbally eloquent on the surface isn't the same as having a better inner grasp of key concepts. This is just a further slide downhill: outsourcing our reason and meaning-making.
If we rely purely on intellect, I can see that you have a point. Society has tended to privilege intellect, but there are other sources of intelligence. From a human perspective they go by different names: intuition and instinct, for example. I suspect there are many nuances within this.
Might the problem be, in part, that people don't trust, or are less capable of accessing, these forms of intelligence, especially in domains that are too complex to analyse and have no perfect answers?
I am now off to research the grades and variations of intuition and instinct with the support of ChatGPT 😁
We have designed a Chrome Extension that does exactly "granular override, complete intervention tracking, and portable reasoning history". Happy to demo it, Charles.Fadel@CurriculumRedesign.org
Would love that
“AI intervenes with reasoning itself.” What will we be prescribed to think?