I love what you’re into here! Even in a very practical, everyday sense, we expect students to interact with and learn from one another. Why not more frequently with bots? AI convos helping deepen learning around a concept, listening to and speaking with students. In my school, most of our students primarily speak one of dozens of languages while developing English. Speaking and listening (as well as the other domains of reading and writing) can be aided by AI much like peers would do: interaction, digging into a discussion. This is something I plan to speak with teachers about, as we tinker with voice-activated and voice-aided AI.
Very instructive and helpful article. Thanks, Nick!
The role of AI is shifting from tool to agent, though AI still requires human supervision to ensure ethical and effective use. In education, AI should serve as a tool to assist teachers rather than act as an independent agent.
This is a good case study for why I like to entrust AI with specific tasks/roles vs. just 'trusting' AI as an amorphous, autonomous system. I wrote about that in more detail here:
https://www.polymathicbeing.com/p/dont-trust-ai-entrust-it
According to Alain Damasio, a French writer and "disciple" of Baudrillard, technological tools can never be neutral. Their development is funded by investors; the technologies are designed to create strong dependency; and they are built to exploit our desires and habits. This implies they are not neutral but intentionally influential and manipulative.
In his latest book, just published and not yet translated into English ('Vallée du Silicium'), Damasio brilliantly exposes how new technologies, including artificial intelligence, shape our lives and behaviors. This book is not a diatribe; it’s a series of chronicles and encounters. Several interview videos with Damasio are available on YouTube, and you can activate subtitle translation to understand them better.
https://youtu.be/ofs-9_yzcvY?si=RuRUupi7UO41oSN9
Sounds amazing. I will check this out. Thanks for bringing Damasio into the conversation.
Baudrillard was prescient in so many ways. "Neutrality" is an optical illusion. A tool designed for a particular purpose.
Nick, I really like the distinction you make between tools and agents - and why humans need to control these uses for all sorts of ethical reasons. The agent concept may be part of how AI researchers talk about uses (or potential uses), but for other humans like tech reporters and general readers, “agent” does sound anthropomorphic, shifting the larger discussion away from what matters - such as effective guidelines for using generative AI as tools in a classroom. I really want to see guidelines like the ones you suggest widely adopted across curriculums.
Nick, this one is gold. The framing of Sarah reveals in a powerful way the multiple agencies teachers must be granted and taught to use while underlining the significance of the professional in charge of the classroom. Sarah is neither Siri nor Alexa. Sarah is vital. Sarah needs a raise. Sarah deserves professional respect and autonomy. Sarah needs a final say in the topics for professional development. Sarah needs a community. Somehow when you package this and prepare to speak—we need to get you some new luggage so you can travel on an itinerary across districts everywhere for the opening day hoopla, the district conferences with keynotes. My adrenal glands aren’t what they used to be, but this post got my neural loops humming. Very nice my friend!
Thanks, Terry. I wrote Sarah in just for you... and for the benefit of my other readers.
When do I get to meet her?
Haha... Keep on reading, my friend.
Great points. Despite the AI reading emotions and tone (as you point out, something that will become increasingly common), I perceive that the nuance of the teacher-student relationship will never be diminished nor replaced. The AI will improve, but teachers will always piece together 1,000 interactions and observations a day that will help inform their decisions. I love how your post digs into this.
I also think about how LLMs cull info based on their knowledge base. I have recently noticed that when AI is writing, it uses the word “myriad” incorrectly. Why? I assume because of how most people use it: incorrectly. So, the AI will continue to need to be fact-checked…by teachers and education experts. We are all fallible and need to triangulate our information AND rely on human intuition and emotional intelligence…AI won’t replace that. Yes, they are agents, not our replacements.
Yes, I am under no delusion that I will displace the term "agent." As Alejandro mentions, it has a long history. It is a deep part of AI culture. It is a hope as old as Turing's hypothetical calculating machines, if not older. The algorithm that can function independently of an operator. What I am trying to accomplish here is simply to collapse the binary--tool/agent--into something more scalar--and then see how it reconfigures the cultural, institutional, educational, policy, and ethical discourses that are converging.
Heidegger himself is much blurrier than this post lets on. If you dig deeper, a tool is something that is invisible to the user. A tool becomes an object again when it breaks or resists use in some way.
I hope to--in a follow-up post--think more about this idea of resistance. I think Socratic bots/agents are encoded with resistance that makes their ontic status as tools quiver. They alternate between thing and object. I personally think this is the ideal state for a tool or agent, anything along that AI tool-agent spectrum.
Lucid as always!
The agent terminology comes from classic AI, especially the discipline of computer simulation, where the goal was to design software architectures that embraced the paradigm of agency: systems that can perceive and reason about their status in a given environment and plan accordingly to solve some task.
As with other terms in AI, the word "agent" is an example of wishful mnemonics, but it is well established in the discipline and all experts know what we mean. Of course, this doesn't diminish the potential for misuse, especially when the media is involved.
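To make that paradigm of agency concrete, here is a minimal sketch of the perceive-reason-plan-act loop in Python. The environment, class names, and toy task are purely illustrative assumptions, not taken from any particular agent framework.

```python
from dataclasses import dataclass, field


@dataclass
class GridWorld:
    """A toy environment: the agent must reach a goal cell on a 1-D track."""
    goal: int = 5
    position: int = 0

    def observe(self) -> dict:
        # The environment exposes only what the agent is allowed to perceive.
        return {"position": self.position, "goal": self.goal}

    def apply(self, action: int) -> None:
        # Actions are steps of -1, 0, or +1 along the track.
        self.position += action


@dataclass
class SimpleAgent:
    """Perceive -> reason/plan -> act, repeated until the task is solved."""
    history: list = field(default_factory=list)

    def perceive(self, env: GridWorld) -> dict:
        return env.observe()

    def plan(self, state: dict) -> int:
        # Reason about where we are relative to the goal and choose a step.
        if state["position"] < state["goal"]:
            return +1
        if state["position"] > state["goal"]:
            return -1
        return 0  # already at the goal: no further action needed

    def act(self, env: GridWorld) -> bool:
        state = self.perceive(env)
        action = self.plan(state)
        self.history.append((state, action))
        env.apply(action)
        return action == 0  # True once the goal has been reached


if __name__ == "__main__":
    env, agent = GridWorld(), SimpleAgent()
    while not agent.act(env):
        pass
    print(f"Goal reached after {len(agent.history) - 1} moves")
```

The point of the sketch is only that "agency" here names a control loop over an environment, not anything anthropomorphic.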
We have been tracking a whole series of wishful mnemonics this year, Alejandro: "understanding," "learning," "memory," now "agency."
I like your caveat though. No matter what we decide to call them... these entities exponentially multiply the reach of bad actors.
Omg Alejandro, in reading pedagogy a very useful term like “phonics” back in the day has been corrupted beyond resurrection. Even the word “writing” has undergone a popular “revolution.” “Writing” means “spelling.” There is a Humpty Dumpty linguistic process gaining on the Richter scale, so we are seeing cracked eggs all around—mature, useful, potential black boxes in Latour’s sense, like “pasteurized,” broken to pieces. On behalf of those of us beholden to you people who brought AI across the goal line, apologies for this slight. The paradigm of agency still lives. As I “write,” Dr. Fauci is being questioned by Jim Jordan for his supposed role in fabricating the COVID virus. Humans are known for biting the hand that feeds them. Put some ointment on the wound because these critters are rabid.
Thanks, Tom!!! I am loving what you are up to on Substack!!!
Helpful reflections and insights as always, Nick!