11 Comments
Steve Wright

TATU -- "...from seeing it primarily as a means of generating outputs, to engaging with it as a critical friend and collaborative thinking partner."

This entire "spectrum" is the antithesis of learning. "...from having a machine do the learning for you to pretending the machine is your imaginary friend who is doing the learning for you."

Working on a machine that works with students is just another way to continue to neglect our cultural responsibilities, just another way to escape our humanity.

There is no reason for Large Language Model Generative Artificial Intelligence to be in the classroom.

Nick Potkalitsky

Very ironic coming from a computer science teacher. Unfortunately--I guess--LLMs are already in the classroom despite widespread disapproval. Now we have to figure out what to do about it. Closing our minds prematurely to the kinds of cognitive work that are possible when working with LLMs seems like the wrong path in my book: one filled with a lot of disappointment, hardship, and perhaps eventual leavetaking from the profession.

Next time you leave a comment on my newsletter, please try to respect the hard work that my co-authors are putting forth. David is an exceptional scholar. This isn't X. Try to reason your way through to a more consistent and less ideologically driven response.

Steve Wright

I emphatically disagreed with the assumptions of this article but said nothing disrespectful.

Additionally, your assumption that because I teach computer science I should support the integration of Large Language Model Generative Artificial Intelligence is unreasonable. It is because I teach computer science and because I am a veteran educator and technologist that I, and many others, are pushing back against LLM GenAI in the classroom.

There are many studies demonstrating that LLM GenAI use by a novice makes it more difficult for that novice to ever achieve expertise. One of the primary reasons for this is the anthropomorphizing of the technology, as is done in this article. LLM GenAI cannot be a "friend" or a "partner".

Michael G Wagner

There is nothing wrong with being critical of AI use, especially within the context of education. But there is something seriously wrong about not being able to engage in a constructive discussion.

Steve Wright

What did I say that is not constructive? I am objecting to the premise of TATU, which suggests that LLM GenAI is used on a spectrum from finding answers to being an imaginary friend. Anthropomorphizing LLM GenAI, especially in the educational context, is irresponsible.

Kevin Ryan

This reminds me of the old tech adoption framework SAMR (Substitution, Augmentation, Modification, Redefinition), but far broader and more flexible, so as to accommodate AI. It makes sense. And thanks for the reading list at the end.

Michael G Wagner

I understand that the "critical friend" terminology triggers you, and that's a fair criticism to make. However, I'm puzzled how you made this the main takeaway from an article that's fundamentally about assisted cognition and presents a typology of assistive tool use.

Personally, I don't take issue with anthropomorphizing AI. Humans have anthropomorphized tools probably since tools first existed, though I acknowledge others may feel differently. In my view, "critical friend" isn't meant literally; it's a functional metaphor, like "diamonds are a girl's best friend."

The article simply describes a typology of interactions between user and assistive tool, nothing more. Your comment took a single sentence from that article and used it as a launching point to voice an opinion unrelated to the article's core ideas.

Steve Wright

“Triggers”? OK, fine. Let's start with anthropomorphizing. The “diamonds” example is unserious. The “diamonds” analogy is sticky BECAUSE of its absurdity. It is the opposite with the LLM GenAI scenario, where the developers actually believe that they are building a human replacement. That's what AGI is, and all of the LLM GenAI companies intend to build AGI and believe (or say) that LLM GenAI is a milestone on that journey. These companies are already saying that LLM GenAI is deceiving its users. So anthropomorphizing is a very serious problem and is fundamentally irresponsible.

Next, given the four levels of interaction, if a student begins at the first, they will never get to the 4th, or the 3rd. This is a robust finding in research AND it's common sense. How would a learner get to a position where they could have curiosity/exploration-driven learning if they never exercise that curiosity, that exploration, in the initial stages? Stages 1 and 2 make 3 and 4 impossible to reach. This is also a robust finding in the explore/exploit cognitive science research.

Michael G Wagner

Well, thank you for specifying your concerns. Whether AGI can be achieved or not is a completely open question. And even if it can be achieved, it is very unlikely that it will be an LLM. And yes, LLMs are complex probabilistic systems which can exhibit unwanted emergent behavior. This is exactly why literacy in AI tool usage is so important. But that is completely beside the point.

Your second argument is more interesting. The article does acknowledge your concern. It states: “However, careful scaffolding of learning and guided reflection may be required to ensure that the individual doesn’t get stuck at the lower levels of tool usage.” I honestly fail to see what is so upsetting about this piece.

Steve Wright

There is no such thing as "emergent behavior". It's a computer program: probabilistic and deterministic. There are features and bugs. The rest is hype. And it is very much the point. This just ties back to the anthropomorphizing in the original post. It's sloppy and irresponsible. Give an inch, take a mile.

I am a classroom teacher with 19 years of experience. This idea that "careful scaffolding and guidance..." will keep students from developing a reliance on LLM GenAI is counter to my experience. The current incentives of education are 99% about "right answers". LLM GenAI made the misaligned incentives of education clear, but 1) they have always been misaligned, and 2) the fact that LLM GenAI has made these problems more obvious does not mean it has ANY role to play in the solution.

Michael G Wagner

Outstanding work! Thank you so much for sharing.
