It's Time to Listen to Teachers About AI!
Teachers' Experience as the Foundation for AI Integration and Implementation
Greetings, Educating AI Readers!
I want to start with a big thank you for your dedication and commitment to my Substack. Your comments, click-throughs, shares, likes, and restacks mean the world to me. As of now, Educating AI boasts 2,338 subscribers, 26 paid subscribers, 2,925 followers, and is recommended by 41 different Substacks. The network effect from you, my dedicated readers, has been phenomenal. Every week, more people discover my site, bringing fresh ideas and insights to our growing community.
As a token of my gratitude, today's article is dedicated to all the teachers who have just completed their first full school year with 3rd-gen AI in tow. You are doing the incredible work of creating curriculum and practices "without a net" for the implementation and integration of AI in today's schools. Your efforts are truly inspiring.
Centering AI Training in Teachers’ Experiences
This article is written in response to some approaches emerging in AI training circles that I find a bit disconcerting. I’m sure readers like Guy and Terry will be able to cite previous instances of similar dynamics arising in the history of educational training and upskilling. Be sure to check out the comments below for their thoughtful shares. I have come to rely on Guy and Terry to provide a broader historical and theoretical lens for my insights and approaches. I wanted to take some space in this article to give you both a big shout-out!

I find it unproductive to call out trainers by name or to copy and paste their posts. Instead, I will speak in generalities and let my readers connect the dots. Know that I do believe these folks have good intentions. By and large, they see AI as either (1) a tremendous opportunity or (2) a matter of great urgency. They often shift into a didactic mode of discourse, essentially telling teachers exactly what they should do without actually consulting with them about their everyday challenges and experiences.
Granted, there is a bit of marketing bravado and differentiating rhetoric mixed in, something we all practice to greater or lesser degrees in new serial publications like Substack, Medium, LinkedIn Newsletter, beehiiv, etc. With so much content being written each day, you have to add a dash of polemic to get noticed in the near-continuous information stream.
That said, many trainers begin their programs with top-down proclamations that delegitimize current educators—particularly in the K-12 space—who are attempting to create pragmatic solutions in the face of technology disruptions with no true analogues. Just this week, I came across trainers insisting, as an introductory pitch, that teachers stop assigning traditional unassisted writing assignments, stop attempting to listen for their students' authentic voices, stop assigning homework, stop grading assignments. The implied subtext is that if you are attempting to do such things, you are doing a disservice to your students, you are technologically backwards, you are not embracing AI fully, and you are being too cautious, thereby holding back an educational revolution.
By contrast, I want to highlight amazing educators and trainers like Doan and others, who approach this space humbly and are always equipped with insightful questions for their fellow educators. In a recent post, Doan asked a group of us teacher-trainers, “Should students be taught to write with AI?” No assumptions or judgments at the starting position—just an honest inquiry.

What unfolded was a completely different kind of exchange, something much more collaborative and “bottom-up.” The implied subtext in this approach is that teachers constitute a vital source of knowledge about how AI can and should be implemented and integrated into today’s schools. Teachers are not a problem that ed-training and ed-tech need to fix; they are the foundation of the solution. Well done, Doan! I hope others learn from your example!
Good Reasons for Slow Adoption in K-12
In Ohio, K-12 AI adoption is moving slowly, and with good reason. Unlike professional and academic spaces, where users have already consolidated core writing competencies and skills, K-12 is a specialized environment focused on the cultivation of those foundational skills. No definitive research has yet emerged addressing how these new technologies, when used exclusively, as some trainers insist they should be, impact the development of these essential skills.
Additionally, K-12 education is currently stretched to capacity by other pre-existing challenges, including:
Chronic student absenteeism
Student hunger and malnutrition
Students reading significantly below grade level
Overburdened teachers and staff
Inadequate funding and resources
In many districts, addressing AI is a luxury issue, as is access to the technology. When your student body is chronically absent, underfed, and reading four years below grade level, AI literacy is going to fall by the wayside. No questions asked.
How to Center Teacher Voices During an AI Training Session
This past week, I had the opportunity to put my method into practice while working with a group of early adopters at a PreK-8 school in my area. Although I had prepared two meticulously arranged slideshow presentations for the meeting, I decided to start by asking the group of three educators what they wanted to accomplish. It turned out that two of the three participants had already developed deep, meaningful practices of AI implementation and integration and proceeded to teach me some amazing and novel practices for 30 minutes.
Our conversation then shifted toward developing a school-wide approach for greater engagement with AI. In this dialogue, we couldn't avoid discussing and comparing different AI tools and applications. Again, my early adopters had valuable insights into their favorite resources.
Since the primary users at the school are in 6th through 8th grade, we discussed the need for an AI access point that offered greater security than general-purpose LLMs like Gemini or ChatGPT. The team was inclined towards Khanmigo, which they were already using at the teacher level, and found student access relatively affordable at $30 a year. Khanmigo’s Socratic interactivity was particularly attractive for its use as a writing assistant.
Once again, I was present as an advisor, helping this team design a system that worked best for their needs. Each school needs to make its own decisions about how to move forward with AI technology. What impressed me most about this school was its proactive approach in making such decisions and its desire to implement a school-wide strategy.
This commitment to a school-wide rollout was reconfirmed when the administrator invited me back to lead an all-faculty training in August focused on Khanmigo, along with two follow-up trainings: one for lower school faculty on materials and applications to assist with teaching and lesson planning, and another similar training for middle school. The overall goal of these trainings—suggested by the early adopters—is to position AI as a “teaching assistant.” To these educators, this pitch will have the greatest chance of opening the door to widespread adoption.
Who am I to object? I am the outsider here. In such initial contacts, I am here to learn and listen in order to fashion trainings that help schools be successful, not to push my preexisting agenda.
K-12 vs. College: Divergences/Convergences
As I write this, I increasingly suspect that the significant divergence in training approaches reflects the widening gulf between academic and K-12 environments in terms of AI integration and application. However, from my recent experiences as a first-year writing teacher, I believe there is actually much greater continuity between these spaces than initially meets the eye. The assumption that college first-years have mastered core writing skills and competencies is likely a false one. Therefore, the extent to which AI impacts the acquisition of these skills and competencies at the college level remains an open question.
Digging deeper, I can imagine that college teachers, like their K-12 counterparts, prefer working with trainers who start by asking questions about their current struggles and successes rather than demanding radical changes to their practices and answering questions later.
As we move forward, I must insist that the voices of teachers remain central to the enterprise of teacher training. Teachers are the frontline practitioners who understand the nuanced realities of the classroom, the unique needs of their students, and the practical challenges of implementing new technologies. Their insights, experiences, and feedback are crucial in shaping AI tools and strategies that are truly effective and supportive of educational goals.
Let’s Write a New AI Training Narrative
As we navigate this transformative period, we must avoid reducing teachers to characters in a simplistic narrative of stagnancy versus progress. Not every story needs a scapegoat. Authors who rely on scapegoats short-circuit their own thinking, overlooking the areas where real, pragmatic solutions are generated. Instead, let's collectively work on writing a better version of this AI story, one that values and incorporates the expertise and insights of teachers, ensuring that AI integration enhances the educational experience for all.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation on the internet of the intersections among compositional theory, literacy studies, and AI!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Nick, I can't speak to the K-12 environment; I just don't know enough about what's going on there now or what has gone on there over the last 35 years. I've been thinking a lot this week about the beginnings of the use of the web in higher education. That was one of the last times there was a technology that anyone could use for free that also had such transformative power. Committees and task forces were formed, workshops were held, and money and release time were made available for professional development. Eventually, departments like the one in which I work were formed to support its use.
The environment is very different this time around. We have had three decades of web use on campuses (Netscape Navigator, the first browser for most of us and perhaps a parallel to ChatGPT, arrived 30 years ago this October). Most of the use was on desktops. Laptops were not that common yet, certainly not in classrooms, and all connections were wired until 1997. Cellphones were more likely to be seen on TV than on a midwestern campus. Educational technology was largely coming from universities and a few companies. Higher education was becoming more expensive but was still seen by most as a positive good. Privacy and security were minimal concerns. Intellectual property and acceptable use were the main worries. There was very little talk of ethics.
The environment today is far different. Higher education is expensive and often beleaguered. Most educational technology comes from for-profit corporations and startups. Over the last fifteen years or so, venture capital has discovered education. The infusion of money and business practices has created a highly competitive landscape, so that when something trendy and innovative like GenAI comes along, it is immediately incorporated by companies looking for an advantage. (An interesting side note is that the big textbook publishers, most of whom have had other forms of AI in their courseware for years, have been more cautious and slower off the mark with GenAI.)

Laws and the interpretation of laws have changed. So have accreditation standards. So has the campus policy environment. Privacy, security, intellectual property, and acceptable use have become much more important over the last thirty years. Courses, especially those offered only online, have to follow design guidelines, and faculty may have to undergo online teaching certification. There are larger numbers of instructional designers to provide advice and instructional technologists to provide support. In addition to training from the university on how to use the technologies, there are trainers from all of the educational technology companies. All are offering advice and guidance. This may be oriented toward a particular set of design principles or the way a particular product can be used. In the best cases, the faculty voice is heard. In the worst ones, and there are a few of these coming from ed tech companies, the technology forces the instructor to teach in a certain way.

In the wake of the Pandemic, many instructors are more used to asking for and accepting advice on how to teach with technology than before. I think that is significant too. There are still plenty of experimenters, don't get me wrong, and faculty have all kinds of different skill levels and degrees of comfort with using technology in their teaching, but the Pandemic does feel like a turning point in what they are willing to accept.
Something else that is different today is fear. There was some, but mostly wonder, in the early days of the web. The first wave of reaction to ChatGPT by professors was different. My involvement began because of cheating and plagiarism concerns. I am the lead support for Turnitin for our university system, and at that time was also the lead on automated proctoring software. Both had already exposed me to a lot of ethical concerns about the products we were using, while the proctoring programs introduced me to algorithmic bias and the coded gaze in 2020. I got involved in AI because of the fears and the ethical concerns that were manifest already in December and January following ChatGPT's release.
This is new in my experience. I think it is partly shaped by what's going on with Generative AI. The range and extent of the fears have changed in the last 18 months. While there is still a lot of concern about plagiarism and cheating, the broader ethical questions have come to the fore. Concerns about bias are much more prevalent. So are issues of equity, and threats to creativity, intellectual property, privacy, and security, as well as to society, the economy, democracy, the environment, and the climate.
All of this is much different from the early days of the web on campus. It means more policies and guidelines. For some, it means more acceptance of the ways an application allows one to teach. For others, it means more experimentation. Overall, it means even more attention paid to technology in teaching and learning than before.
I think we have been fortunate on our campuses to see a lot of room made for individuals and departments to work through policies and uses. We have also had fruitful collaboration between professors and instructional designers in making the technology beneficial to classes.
Now that more policies and guidelines are being introduced, there is an attempt being made to balance the freedom to experiment (or even reject AI) with the need to maintain quality and uphold ethical and legal standards. There is a lot going on. It is exciting, worrying, and often frustrating. I can see that in such an environment there is a temptation to dictate, and I hope that it can be held at bay.
Very thoughtful piece, Nick. My experience leading workshops tells me that if you provide opportunities early for teachers to express their concerns, misgivings, hesitations, or (in this case) successes, you create an environment of openness and sharing that lays a strong foundation for teacher engagement. And the more teachers actively engage with a technology, the more likely they are to develop the measure of confidence, comfort, and understanding needed to integrate it.