12 Comments
Noelia Amoedo:

Thank you for defining and structuring with so much clarity all the risks I had only sensed around cognitive offloading. The one point that made me wonder is this:

"A human writer, however, might start the same sentence but suddenly pivot toward unexpected revelation: "The atmospheric carbon cycle involves the exchange of carbon between—wait, I just realized this is like the circulatory system in our bodies, pumping life's essential elements through planetary veins."

And I probably wondered because LLMs can also be quite good at finding metaphors. I guess the key lies in the "unexpected": a human would have the ability to come up with a metaphor that has not often been used (at least in digital form) before...

Terry Underwood:

For me, writing down an idea sometimes morphs in medias res into something I hadn’t planned for. These are micromoments which could diminish if the bot becomes a stand-in for a human drafter. It’s something I’m aware of irrespective of the value or originality of the linguistic leap from a flat place to a free fall.

Noelia Amoedo:

I do enjoy those micromoments :) and I understand it is not about the outcome but about the process, and what we lose by not going through it. Thank you!

Brad Czepiel:

Reading that was transformative - thank you.

Terry Underwood:

Thanks, Brad. I appreciate your feedback.

Nick Potkalitsky:

This is the way the piece hit me!!!

Stephen Fitzpatrick:

Great piece, Terry, that lays out the stakes pretty clearly. This certainly buttresses the position of those who don't want AI anywhere near the classroom. Buried near the end of the essay, you make this observation: "Teach Critical AI Use: Help students understand when AI collaboration is beneficial versus when it undermines learning, including identifying AI hallucinations and evaluating source reliability." Given the myriad negative outcomes of student AI use that you cover, where precisely is AI collaboration beneficial? And, given the fact that millions of students are already using this technology entirely "unsupervised," as it were, how do we put the genie back in the bottle? My takeaway is that, given the strength of these arguments, the overwhelming majority of teachers who are already inclined to take a hard pass on integrating/engaging with AI in their work will not be remotely interested in taking the time to investigate when "AI collaboration" is beneficial. There is too much on the negative side of the ledger to take that risk. Where does that leave others who may be inclined to experiment? And, with regard to those who already believe AI is destructive to learning, what happens when they continue to "resist" and refuse to even consider designing instructional environments that "invite AI as a collaborative partner in thinking?" Without real examples of when AI can benefit student learning, I think anyone who reads this piece will draw the conclusion that AI has no place in education. And I don't think that is your intent.

Terry Underwood:

I always appreciate your comments, Steve! Thank you so much!!! That said, I can't be responsible for how people read me. Scroll away. I don't pull my punches. I'm representing what I perceive to be reality. The truth is that if teachers do not begin to reveal the multiple nagging risks of language machines in the environment, they will doom themselves and their students to living in a reality nobody wants. There is soooo much upside to learning to use bots really well. Btw, you're wrong when you say "anyone who reads this piece will draw the conclusion that..." I already have empirical proof that readers understand my message completely. There is no easy way out. Period. Howl at the moon if you want to. Take your marbles and go home. That's my point. If you want examples of positive uses, read my posts. There are hundreds of examples. Go back two years and start there. Really. Nick P and I are working on two monographs with a ton of evidence from high school seniors that they can handle the truth a hell of a lot better than their teachers can. I'm not going to be able to tell you or anyone else how to do what you must do for yourself. Experiment. Above all, LISTEN to your students! For God's sake, step outside yourself for an hour or two. Look around without blinders on. Forget about those teachers who are quaking in their boots. That's what pisses me off more than anything--these looky-loos, what-abouters, how commuters, bystanders who whine rather than get to friggin' work. The biggest problem we face today is whining and not doing. See? You've raised my ire :)

Stephen Fitzpatrick:

I totally get it - and I agree with your message. But this is a pretty damning indictment of the impact of LLMs on student learning. I have read most of your other stuff (where do you find the time???), but I'm afraid most will miss the nuance. It's tough to glean from this piece alone that there is a tremendous potential upside. But I'm with you about getting in the arena. I'd love to see something about why (if you agree) it's perhaps much less of an issue with older writers. Though I suspect there are potential issues there as well. And what happens as the output gets "better and better"? Lots to chew on.

Nick Potkalitsky:

What you have to realize, Steve, is that even if teachers refrain from using AI in the classroom, it is already in the classroom. That is the paradox we are dealing with. Teachers stand tall in their opposition, but the gesture is largely symbolic and primarily serves the teacher, not the student. Terry and I are coming to a point where we are trying to use the impending risk as impetus to move a segment of the audience that might listen. The majority of the opposition will read this and find confirmation of their biases. That is a hard fact all writers acknowledge, particularly inside the echo chamber of the AI Ed debate.

Terry Underwood:

Right. It’s sort of like Trumpism. Those who rally around the “my classroom, my rules forever” side of the argument need to know: yes, we know the risks. But there are perhaps greater risks from ignorance and recalcitrant blinders. It’s fine if a teacher is aware of the whole picture and decides, “You know, I don’t wanna use this.” But the stakes are too high to let lack of imagination and knowledge decide the case. I do expect people to read the entire article. Is that asking too much? Then again, it’s a free country—for now.

Terry Underwood:

Steve, we should probably have a phone chat. DM me.
