Thanks for the shoutout! Obviously, to write well still requires writing, even with AI. We can drive anywhere we want to be, but many of us still walk and run!
I do believe that college students can learn many things about writing using AI. Students learn the most when collaborating with each other on writing. With the right mindset, this can be true when collaborating with AI.
Thanks, Lance! I just like your style and want to get the word out. I feel like in K-12 we are searching for some kind of core justification for the continuation of some of our more traditional methods. This piece speaks to those concerns specifically. That said, I am already signing my students up with Claude this week and diving in. A slow collaborative approach, both with the application and with each other. Trying to straddle the divide, sometimes more successfully than at other times.
Rereading this just makes me appreciate Nick's writing a lot more. His ability to break down complex topics in such a concise and comprehensive way is amazing.
I think writing involves many different cognitive and linguistic abilities that work together in complex and dynamic ways. These abilities are not the same for every writing situation, but change depending on what the writer wants to achieve, where they are writing, and what they are writing about.
Thanks, Bechem. What you say rings true! Now that I am a few weeks out from this post, it rings even truer. Not every writing task is about learning through writing. Sometimes, we are just recording pre-existing insights. Sometimes, as in the case of AI-infused writing, we are learning about how technology can change the nature of writing. This piece is intentionally polemical. But I always strive to see a wider view... and that is gradually emerging with time...
Thanks for the restack! I really appreciate that I have found another writer and researcher dedicated to maximizing the educational potential of Gen AI.
Thanks, Simon. I appreciate the comment and the engagement.
I imagine what will come to pass will be the development of specific AI writing curricula. Cummings is already pioneering this work. His Intro to Professional Writing focuses entirely on using Gen AI to produce high-quality professional writing. But what this will look like for younger learners is still anyone's guess. Anyone who claims to know is just hypothesizing at this point. Three issues such curricula need to address: (1) What writing skills do young learners need to know first before they can make the most out of Gen AI? (2) What writing skills can young learners grow and develop through use of Gen AI? (3) What writing skills does the use of Gen AI threaten or compromise?
Thought-provoking stuff, Nick; I will be interested to see where you take this. If students today begin to shortcut the required developmental steps using AI tools, my fear is that writing will degrade in value over time. For humans, extensive reading is a huge factor in good writing. A focus on reading mediated by AI bots may be an irresistible shortcut for students today, but it can only degrade deep comprehension of any topic. We will end up at best with a CliffsNotes education, or at worst a CliffsNotes education written when Cliff was tripping.
A few of your statements shouted at me:
You say "LLMs primarily learn through reading."
As you mentioned, some in the field flag the verb 'to learn' as problematic when it relates to AI. I would add the verb 'to read' to that. LLMs parse, tokenise, and model sentences; they don't read or comprehend them the way humans do. There is no agent there, and the way we use language often complicates this topic.
You say:
"AI models, as objects generated through human thought and industry, are becoming mind-like."
This is highly contentious. If so, how so?
You do make a good point that is not widely acknowledged: 'the core functionalities and capacities of LLMs remain relatively stable throughout writing-as-process'. LLMs build their models during ingestion of training data. Unlike humans, they don't gain new insights as they produce output; humans enhance learning by writing and thinking about what we consume, but LLMs can't do that. Much unwarranted fear surrounding AI centres on the false idea that AI will gain agency and act on its own. These AI systems are simulation boxes, not conscious machines.
I do plan to write about some of the themes you touch on. Appreciate your perspective here.
Boodsy, let me say how much I appreciate the close reading and the detailed feedback.
Yes, you caught me in a highly contentious claim. Nice catch. I am much closer to your mind on this than my words let on.
You are very much anticipating where I am going. First, what does this mean for students? Second, what does this mean for reading?
I am so glad I am giving you some materials to work with. This essay really felt like a breakthrough piece for me. I am glad it is serving a purpose in another person's work. That makes the struggle all the more worthwhile.
Be well, my friend!!! Keep me posted on the development of your own ideas!!!
Thanks, Nick, for this well-documented essay. One thing popped into my mind when I read: "On the surface, models appear to be solving problems, but beneath the surface, models are simply reproducing solutions already present in their data sets."
Your point on "models only reproducing solutions" had me thinking: so, models are basically just humans with faster processors?
Hi, Pascal!!! It is great to hear from you again. You have been one of my biggest supporters, and it means a lot to me that you are reading my text closely.
In this essay, I am trying to establish a foundational difference between AI and humans. I do believe that AI tools, as they draw on significant training data, can offer human users real solutions worthy of consideration when approaching complex problems at work and in their daily lives. But the important question for me is: what are these tools doing as they are producing these solutions? Are they developing their skills and competencies as they generate text? Are they becoming better machines? Most of the research says no!
On the other hand, when humans dig into big solutions-focused writing projects, they get two-fold benefits: (1) the possible solutions generated through the writing process, (2) the cognitive growth that attends the process of writing. Win-win.
As a rule, I am much closer to Boodsy than perhaps my essay may let on. I think there is a fundamental divide between what machines do and what humans do when "thinking." While it is incredible how much progress has been made in the machine realm via the manipulation of verbal symbols, I personally don't feel ready to bestow anything like consciousness on machines yet. I am more a Turing Test 3 kind of guy. Turing Test 3 requires linguistic competency to be grounded in a full sensorial apparatus before a system can be regarded as fully conscious. For philosophical reasons, I think language needs to be grounded in real-world interactions for it to undergird full-fledged consciousness. But I am open to being persuaded otherwise in light of convincing evidence.
Notably, my grounded position is much in the minority these days. To catch up on the debate, check out The Gradient's interviews with Stevan Harnad: https://open.substack.com/pub/thegradientpub/p/stevan-harnad-symbol-grounding-ai-cognition?r=2l25hp&utm_campaign=post&utm_medium=web
and Terry Winograd: https://open.substack.com/pub/thegradientpub/p/terry-winograd-ai-hci-language-cognition?r=2l25hp&utm_campaign=post&utm_medium=web
The Winograd interview is much easier as an entry point. The Harnad interview is rather difficult to follow until things come together near the end.
Be well, my friend.
Per Cummings, writing with Gen AI seems to show that so long as the human involved is extremely knowledgeable about the writing process and all that unfolds within it — like a literal academic expert — the generated output can be sophisticated and well-formed. But what if it's Joey from the corner, who mostly grunts his words at passersby and hasn't reduced words to writing since his 2nd-grade teacher made him?