Beyond Co-Intelligence: AI Log Reviews "Teaching with AI"
Guest Post by Rob Nelson
Dear Amazing Readers of Educating AI,
It is with pleasure that I introduce today's guest writer, Rob Nelson, Executive Director of Academic Technology and Planning at the University of Pennsylvania. In December 2023, Rob captured and crystallized a moment in our online community's reception and interpretation of AI's impact on today's schools.
In his piece, "On Techno-pragmatism," Rob turned to philosopher William James for insight, concluding that:
"Uncertainty in the face of emergent machine capabilities requires new facts obtained through empirical investigation and new principles arrived at through reconsideration of existing truths."
Since then, Rob has continued to work alongside James, building a more substantial critique of how we theorize, historicize, and utilize AI in our work cycles and classroom settings. In the following piece, Rob not only offers a welcome review of a significant new book on AI and teaching, José Antonio Bowen and C. Edward Watson's Teaching with AI: A Practical Guide to a New Era of Human Learning, but also a larger call for the AI x Education community to historicize their AI-adapted and -infused practices as they engage in them.
As Rob notes mid-essay: "Still, we have the question of why we should spend so much time and effort exploring what is basically an upgrade to a sixty-year-old technology. A major element of the conventional wisdom is that we must teach about and with AI. If we don't, we will fail in our responsibilities as educators. 'Eat your AI vegetables, or else!' is not as compelling as 'Here's a cookie!' but the argument that AI may not taste good but is good for you is how the urgency about AI gets explained."
As a scholar of educational practice and technology, Rob Nelson brings a particularly valuable perspective. By heeding his call to historicize our current practices, we can create the intellectual and institutional space to move through and beyond the imperatives of the moment: the notion that students and teachers must use AI "or else." This approach embodies the pragmatism Rob rekindled in December, a pragmatism that, while perhaps shifting values and changing tactics, remains grounded in open-mindedness and a focus on practical applications true to its origins.
I hope everyone enjoys this piece!
Nick Potkalitsky, Ph.D.
For the past eighteen months, the encounters teachers have had with generative AI have been shaped by two distinct narratives. The first is a moral panic over a pre-existing but under-discussed homework crisis brought into the spotlight by students using ChatGPT. The fact that many students choose to cheat rather than complete assigned academic work predates generative AI's public debut by several years, but ChatGPT's ease of use and cheap-as-free pricing got teachers and journalists talking about the problem in a way that Chegg and paper mills never did. The second narrative is about how generative AI might ease the administrative burden on teachers. The but-teachers-can-use-it-too angle brought us stories about Large Language Models (LLMs) as grading assistants and lesson plan generators, as well as anxiety-producing predictions about how the new technology will automate teachers out of a job.
Educators who wanted to look deeper into what was happening with AI could find good journalism in the remaining places that employ journalists and excellent writing on the topic on platforms like LinkedIn and Substack. But if you wanted a book, at least one full of insight, practical advice, and concise summaries of complex issues, there wasn't anything available. And for good reason. Only someone brave would put their name on something sure to be out of date by the time it comes out.
Fortunately, José Antonio Bowen and C. Edward Watson are brave, and Teaching with AI: A Practical Guide to a New Era of Human Learning is an excellent introduction for those who prefer getting their information from thin pieces of pulped wood inked with text. Even if you prefer the blue light of a digital screen, a physical book will give your eyes a nice break and perhaps interrupt (in a good way!) how you have been thinking about this fast-changing technology.
As someone who has been struggling to keep up with how higher education is responding to the threats and promises of generative AI, I found it helpful to turn the pages of a book that usefully captures the conventional wisdom, even though that conventional wisdom has moved some distance in the months since the authors reviewed their galley proofs.
Book publishing happens slowly. Generative AI happens fast.
I suspect many who haven't had the time to keep up with the whole AI thing are looking for this book, even if they don't know it. They have been busy running a lab, on sabbatical, or finishing writing a book of their own. They just saw that email from their chair asking them to update their fall syllabus with guidance about the appropriate use of AI.1
If they ask, you now have a book to recommend. Teaching with AI will help answer questions like "Should I change all my writing assignments because I heard ChatGPT writes better than my students?" Or, "Can I use ChatGPT to improve my grading rubrics or do a first pass at grading papers?"
Teaching with AI offers an informed, consensus view from a positive but not delusional perspective about how educators should approach generative AI. Bowen and Watson are clear-eyed critics of the narratives that shaped the initial reception of ChatGPT, providing an excellent account of the homework crisis and AI-assisted teaching. They embrace the notion that LLMs can be effective collaborators, emphasizing not just the administrative effort it might save but the role that LLMs can play in brainstorming and creating class activities and assignments. Reading it helped me identify a shift I see happening in the discourse about generative AI from mostly positive to increasingly skeptical.
The initial frames of acceptance that developed in response to the two narratives were mostly positive. Everyone can see that many students are not doing their own academic work, so let's update our teaching practice using AI to make assignments more engaging and relevant. Bureaucratic demands on teachers seem to be increasing, so let's use AI to reduce time spent on busy work. The methods for using the new technology to fulfill this potential are captured in the title of another recent book on AI, Co-Intelligence, by Ethan Mollick. Bowen and Watson approach generative AI with the same sense of pragmatic experimentation. And like Mollick, they anthropomorphize LLMs as helpful agents:
AI is a new eager assistant capable of finding information, creating visualizations, writing drafts, offering feedback, and analyzing data. It will alter your workflow and allow you to do other things.
Bowen and Watson are not over-the-top enthusiasts, but they wrote the book at a moment when the promises of AI still felt revolutionary in a good way. As they put it, "Generative AI is going to change the way we think, but not just at work. Collaboration with AIs will change the nature of human thinking." Some of the headers in Chapter 2 A New Era of Work tell the story: AI Will Change Relationships, AI Will Change Every Job, and Thinking Differently: Faster, Better, and More Fun.
If you read my review of Mollick's Co-Intelligence, you know I'm skeptical of this way of understanding what LLMs are and how they function as an educational technology. It isn't that they aren't labor-saving devices or useful learning tools; it is that by treating the outputs of an LLM as if they were the product of a mind, we misunderstand how LLMs work and obscure what they might be good for.
In fairness, Bowen and Watson put "scare quotes" around concepts like "hallucination" and "self-attention" that anthropomorphize the computational processes they describe, but many of the practical suggestions in the book imagine the LLM as a human interlocutor or teacher. My view is that pretending that an LLM is a teaching assistant instead of a teaching machine gives away too much, blinding us to the negative impacts of using these tools and their shortcomings as educational technology.
The problem is that so much of what we have seen demonstrated by the ed-tech companies is based on a vision that personalized tutors and assistants in the form of chatbots will be the disruptive technology that remakes education. As Audrey Watters explains in her indispensable history, Teaching Machines: The History of Personalized Learning, this vision predates ChatGPT and is based on a truncated version of the history of schooling, wild optimism about the use of computers to teach, and skepticism about the value of human teachers.
Bowen and Watson do not get carried away by this Silicon Valley dream, but it informs their assumption that the answers to many educational problems are to be found in individualized attention provided by an LLM serving as a singular co-intelligence.
The shortcomings of LLMs as a teaching technology
Just because students use LLMs to write their papers doesn't mean they want to use them as collaboration tools and tutors. And just because teachers can use an LLM to create artifacts to satisfy bureaucratic demands for reporting and documentation doesn't mean they will want them as co-teachers. Part of the conventional wisdom of the past year was that teachers should experiment with the tools, learn how they work, and see for themselves what all the fuss is about. That has been my line, too. All this experimentation reveals the limits of LLMs as much as it explores their potential.
The technology is not living up to expectations. Given the hype, how could it? The demo is always more exciting than actually using the product. And the more teachers and students use generative AI, the more we find that it doesn't actually solve the problems teachers and students need solving. Read Mathworlds by Dan Meyer or the Beyond ChatGPT series by Marc Watkins to see the more skeptical direction the discourse is headed. The short version is that chatbots just are not that good at helping most students learn, or most teachers teach.
Students who are already motivated to learn on their own may indeed find an LLM-powered tutor useful, the same way they might find an hour to themselves in a library an opportunity to explore their interests. But these students are a minority. As Laurence Holt demonstrates, much of the research being pushed by personalized learning enthusiasts suffers from a key problem: it includes only those students who used the products "as recommended," which turns out to be around 5% of participants. The vast majority are not motivated to engage or encounter barriers to using the tools, so they do not meet the criteria to be included in the results. Holt calls this "the 5 percent problem," and a version of it extends to the discourse excitedly describing using generative AI. We have only heard from the maybe 5% of teachers and writers who engage enthusiastically with the tools.
Bowen and Watson are not blind to this problem. They understand some students do not see value in doing academic work. "Start with the cookie, not the recipe," they say. In other words, demonstrate the educational value of the work you are about to do before you jump into the doing. Here, they state the problem:
Students have a justifiable sensitivity to and deep loathing of "busy work." There are always going to be tedious skills that students need to master, even if AI (or a calculator) can do them faster, but students often interpret anything laborious as teachers just being "mean." Technology has now made it harder to see the benefit of doing things "the hard way." The motivation for school work is also more obvious to teachers: (a) we like school (in part because we were good at it and the rewards further motivated us) and (b) we know our subjects as experts (so the connections between things are more easily apparent to us). AI will magnify the need to explain and make benefits visible.
But what happens when students consider it and decide they don't want the cookie? Many of the suggestions in the book will look like "busy work" to students. "Go talk to your chatbot about how to improve your paper" does not strike me as a recipe for student success. It may be a failure of imagination on my part, but I find trying to elicit feedback from an LLM mostly frustrating.
The least compelling part of the book is Chapter 9 Feedback and Roleplaying with AI. The prompt engineering tips and the suggestions for eliciting useful feedback assume a level of engagement that I do not think exists for most students. The novelty of telling a chatbot to generate text and having a weirdly bland version of it suddenly appear will wear off. That is, unless your goal is to complete the assignment with minimal effort so you can do something meaningful.
Learning is hard. Generative AI is easy.
Eliciting feedback, even good feedback, takes effort and motivation. As a human, I find reviewing a paper with a student challenging. My students do, too! Why would they put in the effort with a chatbot when they don't show up for office hours?
It isn't just the approach to writing that seems problematic. I have come to believe that role-playing with a chatbot is not a meaningful educational experience. As John Warner makes clear, there are real problems with using historical chatbots, and the examples given in Teaching with AI ("Talk to me as if you were trumpeter Miles Davis." or "Respond as if you were the historic figure of Rabbi Hillel.") seem much more likely to produce eye-rolls than engagement. Even worse, the potential for unintended consequences when students encounter the sometimes weird or problematic outputs makes the exercise seem unwise. I do not agree with their assessment that "Claude was especially good at adopting the voice of Miles Davis, you dig?" And I do not agree that "Answer me as if you were a subject of the Tuskegee syphilis study. Ask me ethical questions about what happened to you." is the way to teach the history of this awful event.
My point here is not to pick on the book's specific suggestions but to suggest that the linguistic habits embedded in human interactions lead us into problematic situations when we treat LLMs as people. Anthropomorphizing chatbots is entertaining, and perhaps in elementary school, a group assignment, monitored by a teacher, to engage in a chat with Thomas Jefferson might seem like a fun way to practice historical thinking, a sort of historical reenactment on the cheap. I've grown more skeptical since I first considered the question.
Don't miss my upcoming essay in AI Log revisiting my take on using historical chatbots.
Are there ways to approach LLMs as educational tools that do not assume they have agency or a mind? I hope so. Bowen and Watson have some good ideas in that direction. The best parts of Teaching with AI focus on the power of teachers to motivate students and stress human learning processes. Chapter 10 Designing Assignments and Assessments for Human Effort starts with the observation that "Cheating is often a symptom that students do not understand or value the reward of doing the work themselves." The authors offer an argument and template for how teachers can structure assignments to encourage students to put in the effort. Similarly, the best parts of Chapter 12 AI Assignments and Assessments suggest prompting students in groups to work directly with an LLM to achieve a goal or develop a project. Turning work with an LLM into a social activity where students collaborate or explore the limitations of an LLM as they work toward a common goal has much greater potential than treating it like a personal tutor.
Eat your AI vegetables, or else!
Still, we have the question of why we should spend so much time and effort exploring what is basically an upgrade to a sixty-year-old technology. A major element of the conventional wisdom is that we must teach about and with AI. If we don't, we will fail in our responsibilities as educators. "Eat your AI vegetables, or else!" is not as compelling as "Here's a cookie!" but the argument that AI may not taste good but is good for you is how the urgency about AI gets explained.
To students, the message is, "If you don't learn to use AI and the person next to you does, guess who gets the job?" To teachers, the message is, "These tools will make your job easier, so you'd better learn them or they'll replace you." I doubt the predictions of massive social transformation will play out as expected, and whatever happens, it won't change things as quickly as AI enthusiasts hope. But I also doubt we should be so confident that we know anything about how generative AI will impact education in the long term. It may be that what looks like important skills based on the current versions of LLMs will turn out to be less valuable than many think. Adopting new technology takes time and proceeds in fits and starts, with the occasional disastrous experiment along the way.
Of course, learning the latest technical skills is a good bet for landing your first job. If a high starting salary is a student's desired outcome, then studying technical fields related to AI makes sense. Or, maybe in a few years, the AI investment boom will be over, and sustainable energy will be the hottest field. Perhaps talking to an LLM to get it to do something useful is a skill, like typing or using spreadsheets, that will come in handy for that first job. Heck, talking to an LLM may replace typing! Now that skepticism is more prevalent, our confidence that we know much of anything for sure about the future is fading.
Still, there is a fundamental tension between the urgent demand that, as Bowen and Watson put it, "thinking and working with AI should be integrated into every part of the curriculum" so that "we can equip all students for this new economy" and the bromide that "a complete education in the liberal arts has never mattered more." Critical thinking is just as important as ever because the cultural traditions and habits of thought embedded in the liberal arts mitigate the ahistorical enthusiasm for the latest technology. Studying the liberal arts reminds us not just to keep humans in the loop but to keep human social needs central to the discourse, and it reminds us that supposedly easy solutions to social problems often end in tears.
Cultural analysis of generative AI has lagged behind the hype, but it exists. I expect the slower-moving work of critical reflection and scientific inquiry will work against many of the assumptions that structured the conventional wisdom about AI over the past year. Some of what happens next will debunk the hype, and I expect we'll see the capital investment bubble pop and AI enthusiasts turn down the volume. But critical thinking about the social contexts of technology will also help us better understand the long-term educational potential of generative AI, even as there is a backlash.
The most valuable aspect of Teaching with AI is how seriously it takes the perspective of students. The authors make clear that students are just as confused as their teachers by what is happening. Reading Teaching with AI helped me understand my own journey from moderately enthusiastic to mostly skeptical. When you hand your copy to that colleague who needs to catch up on AI, tell them to come see you after they finish it. They'll need to talk about what comes next. We all need to talk about what comes next, and not just among our fellow teachers. Students need to be a part of the discourse as well.
My deep thanks to Nick Potkalitsky for his insight on these topics and for his support for my writing and that of other new writers in this space. He is a big reason AI Log is finding an audience. You should subscribe to his Substack!
1 Bowen and Watson use the general term AI without specifying what they mean, but in Chapter 1 AI Basics, they provide a concise, knowledgeable overview of artificial intelligence and machine learning that puts into context the confusing alphabet salad of LLMs and GPTs. They also trace the historical development of technologies that led to transformer-based cultural artifact generators like ChatGPT, Claude, Stable Diffusion, etc.
Check out some of my favorite Substacks:
Terry Underwood's Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi's When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis's Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg's Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson's AI Log: Incredibly deep and insightful essays about AI's impact on higher ed, society, and culture.
Michael Spencer's AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir's The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest's Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca's The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya's The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
Thank you so much for this thoughtful review and the respectful pushback against some tendencies in the book!
I'm curious about your comment "I find trying to elicit feedback from an LLM mostly frustrating." I find it really useful... with Claude 3.5 right now. We have a prompt library on MyEssayFeedback.ai https://myessayfeedback.ai/oer/types-of-feedback-library
Good conversation. It is sometimes worth just seeing what a model like Sonnet will do if asked to improve a piece based on metrics you feed it, or in light of a piece or series of pieces you find comparable or a step ahead of where you are in your development. One of my guest writers, Alan Knowles, stresses that there is value now in saving your writing projects at different stages of their development so that you can teach your preferred model how writing usually evolves over time for you. If the model has this deeper history, perhaps the feedback will bump up from the general to the particular.
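For the technically inclined, here is a minimal sketch of what that staged-drafts approach might look like in practice, assuming each draft has been saved as plain text. The names build_feedback_prompt and call_llm are hypothetical; the latter stands in for whichever chat-model API you prefer.

```python
# Hypothetical sketch: assemble saved draft stages into one feedback prompt
# so the model can see how a piece of writing has evolved over time.
# `call_llm` below is a stand-in for whatever chat-model API you prefer.

def build_feedback_prompt(draft_stages: list[str], current_draft: str) -> str:
    """Concatenate earlier drafts, oldest first, ahead of the current draft."""
    history = "\n\n".join(
        f"--- Draft {i + 1} ---\n{text}" for i, text in enumerate(draft_stages)
    )
    return (
        "Below are earlier drafts of my essay, oldest first, followed by the "
        "current draft. Using the earlier drafts as evidence of how my writing "
        "usually develops, give feedback specific to this stage of revision.\n\n"
        f"{history}\n\n--- Current draft ---\n{current_draft}"
    )

# Usage with the hypothetical helper:
# feedback = call_llm(build_feedback_prompt(saved_drafts, latest_draft))
```

The design choice is simply to put the revision history in the context window so the model's feedback can move, as the comment above puts it, from the general to the particular.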