The Calculator / AI Analogy: A Broken but Useful Tool for Thought
There's Much More Math Out There!!! 1st Podcast: Companion Newsletter
Welcome to the 4th Newsletter of the Educating AI Substack.
This week I am excited to bring you my first podcast, There’s Much More Math Out There. In this podcast, I have a conversation with Jon Graetz, a 30-year-plus veteran math teacher, about the often-cited comparison between ChatGPT and the calculator. In our conversation, we try to get to the heart of this analogy, and as we do, we dig more deeply into the complicated ways technological tools have changed classrooms for the better over the past several decades.
Jon Graetz has worked for many years at an independent school in the Midwest, has pioneered many of his school’s organizational computing systems, writes extraordinarily difficult problems for major math competitions, performs actively in a barbershop music group, teaches all levels of high school mathematics, and stands out as an extraordinary example of a lifelong learner and global citizen. It is my pleasure to share his wisdom, insight, and humor with my growing audience.
If you haven’t come across the calculator/ChatGPT analogy yet, it goes something like this: Just as educators eventually embraced calculators in the classroom, discovering that they allowed students to spend more time on higher-level cognitive tasks, so too will teachers gradually come to accept the presence of generative AI tools, which will allow students to increase the pace of the writing process, opening up more space for critical, creative, and dialogical work.
On the surface, the logic of this analogy appears fairly sound, but as even a sampling of the responses to it indicates, that logic has some flaws that need to be addressed and remedied for the analogy to be truly serviceable.
Here are a few responses to check out for those who are interested:
Inside Higher Ed: ChatGPT Both Is and Is Not like a Calculator
Forbes: Banning ChatGPT in Schools Is like Banning Calculators in Math Class
Simon Willison: Think of Language Models like ChatGPT as a “Calculator for Words”
Insights@Questions: Stop Comparing ChatGPT to Calculators!
If you would like to read up on the history of calculators and their impact on math instruction, check out the following links:
Hack Education: A Brief History of Calculators
LA Times: Calculator Issue: Math Class: Calculators Don’t Add Up
Robert Kaplinsky: What if We Didn’t Teach What a Calculator Could Do
E. Paul Goldenberg: Thinking and Talking About Technology in Math Classes
Before we get to the podcast, I wanted to share a few interesting finds from this week:
Harvard Medical School: AI Predicts Future Pancreatic Cancer
Google Research Team: ReAct: Synergizing Reasoning and Acting in Language Models
SemiAnalysis: GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE
The first document is an amazing piece of forecasting research; I will spend an entire issue unpacking it sometime in August or September. You may have seen many studies like this about AI and pancreatic cancer: researchers tend to focus on this cancer because it is so quick and deadly. The Google article is hard to parse, but it focuses on the strengths and weaknesses of the chain-of-thought reasoning capacity that LLMs exhibit. This is critical material for educators to understand, and I may take a shot at it later in the year. The final piece, from a fellow Substacker, is also a very good read and rather easy to follow. It argues that LLMs will most likely not progress past a certain point of cognitive development due to a confluence of design, financial, and processing constraints. In other words, if AI is to truly progress to the next level, toward artificial general intelligence, it will need another serious revolution in transformer technology. More on that in my newsletter devoted to the near future of LLMs.
Community Outreach:
Help me get this information to more educators, administrators, researchers, and parents as the school year draws closer.
You are an important member of this growing community, and I would like to take a moment to thank you for your time, focus, and attention. I would also like to ask you to use a little more of your precious time to read the following appeal and to consider helping me further expand our community:
If you find Educating AI a valuable source of information, please send out a personal invitation with a link to this Substack to 3-5 friends or acquaintances. Ultimately, anyone who is currently working in a school setting or who has children or family attending schools can benefit from learning more about the intersection between generative AI tools and education. Add a personal note encouraging your friend or acquaintance to subscribe to Educating AI to get a weekly newsletter with the most up-to-date information. Thanks in advance!!!
Podcast Companion:
Below, I am copying my introduction and conclusion to today’s podcast. Thanks again for listening. And thanks again to Jon Graetz for sharing his time and knowledge!!!
Introduction:
Welcome to the first podcast for the Substack Educating AI. I am your host, Nick Potkalitsky, and I am excited to use these podcasts to dive deeper into questions about the integration and implementation of generative AI in today’s secondary and college classrooms.
As soon as ChatGPT dropped back in November of 2022, analysts began comparing its impact upon schools to that of the calculator during the 1970s and 1980s, a time some math teachers refer to as the Math Wars.
Since then, this analogy has been contested on a number of fronts. Some simply do not like to equate language with mathematical processes and thus reject the analogy entirely. Others want to amend the analogy by proposing that ChatGPT is something like a “calculator for words.” Still others, like Inside Higher Ed, go for the graduate-school response by stating, somewhat paradoxically, that “ChatGPT both is and is not a calculator.”
As will be my practice in most of my posts and podcasts, I will gather information and present it in a manner that helps you make up your own mind.
For today’s podcast, I have reached out to a mathematician friend of mine, a career math educator named Jon Graetz, to discuss the way calculators have impacted math instruction. Our conversation is wide-ranging as we explore the analogy between the calculator and ChatGPT from several different vantage points. At times, we dive deep into the math, but as I am a humanities instructor, I will be there to help translate for the layperson. I hope you enjoy it. Please subscribe to Educating AI and join our growing community of educators, administrators, researchers, and parents. Please leave some comments with thoughts and questions. Thanks for tuning in!
Conclusion:
Thanks for listening to the inaugural podcast for my Substack Educating AI. I appreciated how Jon framed the conversation about calculators and AI in terms of larger instructional purposes and objectives. When a tool contributes significantly to some larger learning goal, it is advantageous for the educator to figure out a good method for incorporating that tool into the classroom, even if it causes some serious discomfort along the way.
When our conversation veered into the workspace, things quickly got more complicated and controversial. Yes, white-collar workers and workers from historically privileged identity groups do manage to use technological advancements to push beyond tedium, find deeper purpose and meaning, and reach for the professional stars. But as Jon reminded us, this is rarely the story for blue-collar workers, persons making close to or less than minimum wage, and persons in historically disadvantaged identity groups. Big technological revolutions all too often leave these workers scrambling to find new work and new training programs when their employment is lost to efficiency or automation.
In this context, teachers are now wondering how much of their jobs might be automated five or ten years from now. Just this week, I read an article reporting that Harvard will attempt to use generative AI tools as teaching assistants in large lecture classes in the upcoming school year. At the same time, the Modern Language Association and the Department of Education have both strongly committed themselves to the irreplaceability of teachers, but humanistic philosophies have rarely proven a sufficient defense against capitalist forces in the long, strange history of American education.
The one matter that our conversation neglected to consider is the large language model’s tendency to hallucinate. To me, this is where the analogy between calculators and ChatGPT breaks down most severely. If the analogy were to hold, calculators would produce answers to mathematical questions that appear to be correct but in fact are not. Talk about Gödel’s incompleteness theorem. We could have a little fun with this and say that the calculator would produce cardinal numbers that were in fact imaginary. During the Math Wars, calculators eventually became reliable learning tools precisely because they yielded reliable results. While several authors have tried to re-envision ChatGPT as a creative calculator and thus save the analogy, it is my sense that these authors are working too hard. Start fresh with a new rationale. Embrace the creativity of ChatGPT as a foundational point.
My studies in rhetoric and narratology have taught me to celebrate the imagination with all its flaws, deficiencies, and foibles. At the moment, I am wondering if we can conceptualize the text produced by large language models as somehow neither fiction nor non-fiction. Perhaps pre/fictional or pre/non-fictional. As text that has no ostensible human author, it does not have adequate signaling that allows the reader to determine its fictionality or non-fictionality. Only when a human or institution takes that AI-generated text and either edits it or uses it wholesale—either in a completely creative capacity or to make truth statements—does it rise to the level of a fictional or nonfictional text. As a result, signaling becomes one of the most pressing educational concerns moving forward. How will schools ensure that AI-generated text is signaled by authors as either fictional or non-fictional so that readers can respond to it appropriately?
Don’t let anyone tell you that the humanities are no longer necessary. If anything, the humanities are more necessary than ever. Never was there a more critical time to learn how to interpret the complex interchange between authors, texts, and audiences.
One final note:
I would just like to thank my sitar teacher of many years, Hasu Patel, for her love and infinite patience. She will be playing at the Cleveland Museum of Art on Friday, August 25th. Please go check her out if you are in the area. To be honest, it would be well worth a drive of several hours or more. The raga I am playing in the background is an ancient one known as Yaman Kalyan. There are many recordings of this beautiful early evening raga by many artists out there for you to explore and enjoy. Here is a link to Hasu Patel’s most recent recording of Yaman Kalyan. It is lengthy and sublime. Enjoy!!!
Be well, everyone!!!
Until next time!
This is Nick Potkalitsky, Educating AI