Teacher Frustration with Classroom AI Escalates
And how I am responding: AI continues to expose the critical importance of reflective and metacognitive skills in modern education
Midjourney: “Frustrated Teacher in the Style of Renaissance Paintings of St. Augustine”
Introduction: Refocusing Educating AI
I feel like I've been overextending myself recently at Educating AI. In short, I've been attempting to use this valuable shared space to upskill the entire teaching profession, build an innovative AI-responsive curriculum, take deep dives into philosophical, literary, and rhetorical conundrums sparked by AI, and provide regular updates about my teaching practices.
All the while, I try to periodically open up this space to other writers to share their insights and enrich the ongoing conversation. Needless to say, I'm sensing that the average reader might be losing the thread, so I'm proposing that for the near future, I go back to the roots of Educating AI: updates from the classroom, occasional deep dives, and periodic shares from friends in our ever-growing network.
The Trend of Teacher Frustration with AI
Today's article (and I promise that moving forward, my pieces will be more concise) focuses on a disturbing trend, first on social media and now in the popular press: career teachers channel their frustrations with AI onto their students, then take the dramatic step of quitting rather than figuring out how to guide students toward better practices with AI. In late September, Time jumped on the bandwagon, giving college instructor Victoria Livingstone a grand platform for airing her grievances about her graduate students, who were unwilling to change their behavior after cursory lectures on misuse and on societal and environmental harm:
"My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I do. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI."
While I appreciate the difficulty of the situation being described, I take issue with the pointed analysis that follows. Commendably, Livingstone moves beyond an initial frustration, ushering her students through some reflective engagements with AI inside her classroom:
"In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style."
Critique of Livingstone's Approach
Instead of drilling down and helping students develop the skills needed for such comparative analyses (arguably, the reflective, metacognitive skills that should always be expressly taught in humanities and research classes), Livingstone rushes straight to a very strong conclusion about her students' intentions and limitations, in service of a seemingly pre-existing desire to leave academia:
"Students who outsource their writing to AI lose an opportunity to think more deeply about their research…With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing."
And so, Livingstone declares in the essay's signature moment, "I quit."
Now, I know that there are times when you need to move on, and for Livingstone, the difficulty of working with AI, the alteration it forced to her conception of what school and writing should look and feel like, served as the line in the sand. But to blame the students, who might have engaged more fully had she only framed the task differently, seems short-sighted and potentially harmful to the profession at large.
A Different Perspective: My Classroom Experience
Now, pivoting to my own experiences in the classroom just this last week, I'm finding very different possibilities within the same challenges Livingstone describes. Granted, I'm not working with graduate students, so our teaching approaches aren't truly comparable. Yet, I think my experiences might offer a fresh perspective on how to navigate the AI landscape in education.
Implementing AI in Student Research Processes
This past week, my students engaged in deep AI work cycles, which I described at length in a post on LinkedIn. In this work, students leverage several months of work on an independent research topic, asking AI to assist in rounds of brainstorming exercises before they return to the pressing question of the particular gap, uncertainty, or ambiguity in the existing literature they seek to address in their overall project. Having identified these elements, they then use them as criteria for the next round of source selection.
In years past, this discovery of a gap, uncertainty, or ambiguity—given the ambitious scope of many of our amazing students' questions—could overtake the majority of the research process. Not necessarily a bad outcome, but for a high school student eager to deliver tangible results at the end of what amounts to a 1.5 school year process, it's not ideal either. So this year, I'm exploring the possibility of using AI in addition to secondary sources to triangulate around the gap, uncertainty, or ambiguity we seek. It's an approach similar to that of many doctoral and postdoctoral students I closely follow in various research communities. Rather than regarding AI as an "opportunity stealer," in our highly curated context, it can emerge as a "possibility generator."
Benefits of AI Integration in Education
Indeed, this week's work—while not perfect in its implementation—has proven to be a success. Students are emerging from the process excited about the next round of research, and they're doing so with more time left in the school year. As a result, they'll have more opportunity to design pointed fieldwork or immersive experiences in their efforts to answer their research questions. The daunting, multi-month process of what used to be a traditional academic literature review has transformed into a dynamic, much more conversational investigation of possible pathways forward—rather than a static dissection of a single, predetermined pathway. This is research that evolves as you create it. This, folks, is the future. Livingstone, you are missing out!
Looking Forward: Collaborative Adaptation to AI
But enough—sometimes people just need to move on. Sometimes folks need to find another path. However, it is my hope over the next several months to show you that rather than a future filled with woe, detection, and uncertainty, there is much to be excited about. Our work now is to develop pragmatic and human-centered systems that help our students bridge the past to the present. They didn't ask for these tools or these challenges. And neither did we. But now that we're in the midst of them together—if we work collaboratively—we can find a path forward.
Check out my LinkedIn post on providing students with adequate emotional support as they develop AI work cycles.
Nick Potkalitsky, Ph.D.
Introducing Two AI Literacy Courses for Educators
Pragmatic AI for Educators (Pilot Program)
Basic AI classroom tools
Cost: $20
Pragmatic AI Prompting for Advanced Differentiation
Advanced AI skills for tailored instruction
Cost: $200
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Amrita Roy’s The Pragmatic Optimist: My favorite Substack that focuses on economics and market trends.
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
I've told students in my Master's Degree Project Management Course that they can use AI as long as they acknowledge it and reflect on whether it's getting the right outcomes. Several have learned to use it to challenge their own writing rather than having it write things for them. Others found it was easier to write their own than to rely on the AI. I'd love to do more, but at least the conversation is starting to happen.
Nick, like you, I’m sick of hackneyed articles in which teachers complain about students who refuse to stop using AI (or social media, their phones, etc). It amounts to the equivalent of water-cooler talk among my colleagues when we get together these days (and we do teach graduate students). But even in casual conversation, I hear both pro and con anecdotes about AI. What worries me more is the way these “I’ve had it” articles mask the insidious impact of the technology on conceptions of self and truthfulness.
In her article, Livingstone does note in passing that she was doing various AI exercises with her class, but they aren’t detailed. I suspect the editing cut a lot of nuance, perhaps even pushing for a more definitive ending (“I quit!”) than she had originally. I could be wrong, but I also recognize the way magazine editors (being one myself) hone narratives. It’s not unlike the way bots push writers toward definitive conclusions (what I call “bowtie” endings) that undercut the complexity of real life.
AI models have been trained on massive amounts of formulaic human writing, after all. So, just as Livingstone shouldn’t be blaming her students, who are responding to the social context of their tech discipline, I wouldn’t blame Livingstone too harshly. Writing by humans, especially for publication in slick outlets for general audiences, has long been massaged and mediated by editors. One benefit of running AI exercises with my journalism students comes not in highlighting what a bot gets wrong but in describing the article-formula cliches that pop up and why they undercut originality.
I don’t blame Livingstone for expressing frustration. That part seems real enough, even if many of us have yet to pinpoint our uneasiness about the impact of this technology on how we and our students take in the world.