Navigating the Slipstream: AI’s Shift from Tool to Agent
Part 2 in Our Series on AI Tools and AI Agents
Greetings, Dear Educating AI Readers,
Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I appreciate this vote of confidence. Your contributions allow me to dedicate more time to research, writing, and building Educating AI's network of contributors, resources, and materials.
Introduction
As AI systems become increasingly integrated into various aspects of our lives, it is crucial to understand the nuanced distinction between AI as tools and AI as agents. This distinction is not merely academic; it has significant practical implications for how we interact with and rely on these technologies. In this article, we will delve even deeper into Heidegger's philosophy to fully develop a concept of AI agency.
I must admit, I was pleasantly surprised by the popularity of last week’s article, and I am eager to keep exploring the material that captured everyone's interest. This week, I will be reading Heidegger "against the grain," reinterpreting his concept of "tool-being" to explore how AI and other technological applications can slip into the status of agents. This reinterpretation aims to reveal the subtle ways in which technology's seamless integration alters our perception of its agency.
Before moving forward, I want to highlight an amazing post written by Mike Kentz this week about shifting away from product toward process, a subject I have been writing on since the winter. In his post, he unveils his "Stop Grading Essays, Start Grading Chats"© method. This cutting-edge approach takes writing with AI as its starting point, and I highly recommend all my readers check it out.
Recap of Previous Findings
In our previous article, we examined the traditional differentiation of AI as tools versus agents, focusing on degrees of autonomy and independence. We argued that the term "AI agent" is often misleading, as all AI systems, regardless of their level of sophistication, are fundamentally tools designed to augment human capabilities. The contention was that even the most advanced AI systems should be viewed primarily as tools, with their perceived agency being a byproduct of their functionality and integration into human activities. We prepared the way for an additional layer of analysis by suggesting that AI agency is not only a function of technical independence but also of human relationality and perceptions of the AI entity.
All AI Agents are Tools: Regardless of their sophistication, AI systems are extensions of human intention and action.
Spectrum of Agency: AI agency is not binary but exists on a spectrum influenced by autonomy, independence, relationality, positionality, and human perception.
Perception and Relationship: The perception of AI as an agent often arises from the relationship and context in which the AI operates. For instance, an AI that adapts to teaching styles and student emotions may appear more agent-like due to its contextual integration and relational dynamics.
AI Agency and Human Perception
In this article, we will extend our analysis by arguing that AI agency is fundamentally a function of human perception, relationality, positionality, and overreliance. When we interact with advanced AI systems, such as OpenAI's GPT-4o, their sophisticated capabilities can blur the line between tool and agent.
In educational settings, for example, AI can adapt to different teaching styles and even detect students' emotional states, making it appear autonomous. This perception is further influenced by the context in which AI is used and the habitual nature of our interactions with it. Over time, users may become so accustomed to AI's seamless functionality that they fail to recognize when a problem is better solved through human ingenuity. Ethan Mollick's research underscores this point, demonstrating that human users quickly become habituated to AI, often letting down their guard and relying on it excessively, leading to worse outcomes than groups that do not use AI.
Heidegger's Notion of Tool-Being
Heidegger's idea of "tool-being" provides a useful framework for understanding how tools, including AI, often become invisible because they work so smoothly. In his book Being and Time, Heidegger explains that the more we use a tool without consciously thinking about it, the more it blends into our activities and fades from our awareness.
When a tool is used effortlessly, it becomes what Heidegger calls "ready-to-hand." This means the tool is fully integrated into what we're doing, almost like an extension of our body and intentions. We don't need to think about it because it functions effectively without demanding focused attention.
However, when a tool breaks or doesn't work as expected, it shifts to what Heidegger calls "present-at-hand." Suddenly, we have to pay conscious attention to the tool, and it disrupts our activity. The tool is no longer just an extension of our will; it becomes an object that we must consider and examine directly.
Heidegger puts it this way: "The less we stare at the hammer-Thing, and the more we seize hold of it and use it, the more primordial does our relationship to it become." This means that tools work best when they blend into our actions, but when they stop working, we are forced to confront their presence.
This concept is crucial when considering AI. If we use AI without critical thought, it can start to seem like an autonomous agent rather than a tool. This shift in perception, where we see AI as an agent rather than a tool, happens because of the seamless way AI integrates into our lives. The more we rely on AI, the more we might stop questioning it and simply follow its lead, which can lead to overreliance and potential misuse.
To avoid this, users, especially in educational and professional settings, need to maintain a vigilant and reflective approach towards their engagement with AI technologies. This involves implementing regular audits, fostering discussions on AI ethics, and integrating training sessions focused on the critical use of AI. These strategies help keep AI's tool status visible and prevent it from unconsciously slipping into the role of an agent.
By actively questioning and critiquing the use of AI, we can ensure safer and more ethical implementations. This fosters an environment where AI serves as a true augmentation of human capabilities, rather than becoming an autonomous force with potential for misuse. This balanced approach enables us to leverage the benefits of AI while ensuring it remains a constructive and controllable tool within the fabric of human activity.
Practical Methods for Maintaining Tool-Status of AI
1. System Oversight and Evaluation
Regular Audits:
Periodically assess AI systems to ensure they function as intended and do not overstep their roles.
Example: Conduct bi-monthly reviews of AI-generated outputs in the classroom/workplace to identify overreliance or misuse and prompt necessary adjustments.
Explicit Role Assignment:
Clearly define the role of AI in each task to maintain human oversight.
Example: Assign AI the role of generating practice quizzes, but have teachers review and adjust the content to ensure that AI remains a tool rather than an autonomous decision-maker. (A code sketch of this role assignment, paired with audit logging, follows below.)
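To make the audit and role-assignment practices concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not a prescribed implementation: `call_model` is a hypothetical stand-in for whatever AI provider's API you actually use, and the role text and log format are invented examples. The point is simply that the AI's role is pinned down explicitly and every exchange is recorded somewhere a periodic audit can find it.

```python
import json
import time
from pathlib import Path

# Hypothetical stand-in for whatever chat-completion client you actually use.
def call_model(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("Wire this up to your AI provider's API.")

# Explicit role assignment: the system prompt pins the AI to one narrow,
# reviewable job instead of an open-ended "assistant" role.
QUIZ_GENERATOR_ROLE = (
    "You generate draft practice quizzes only. A teacher will review and "
    "edit every question before students see it. Do not make grading or "
    "curricular decisions."
)

AUDIT_LOG = Path("ai_audit_log.jsonl")  # reviewed during periodic audits

def generate_quiz_draft(topic: str) -> str:
    """Call the model in its assigned role and record the exchange for audit."""
    output = call_model(QUIZ_GENERATOR_ROLE, f"Draft a five-question quiz on {topic}.")
    entry = {
        "timestamp": time.time(),
        "role": "quiz_generator",
        "input": topic,
        "output": output,
        "human_reviewed": False,  # a teacher flips this to True after sign-off
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return output
```

Because each logged output is marked as unreviewed until a teacher signs off, the audit itself has something concrete to check, rather than relying on memory of how the AI was used.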
2. User Education and Awareness
Training Programs:
Educate users about the limitations and appropriate use of AI.
Example: Implement workshops for teachers on the ethical use of AI and recognizing when human intervention is necessary to maintain a critical perspective on AI's capabilities.
Awareness Initiatives:
Promote ongoing discussions and updates on AI technology and its ethical implications.
Example: Regularly update staff on the latest AI developments and potential impacts on their work processes.
3. Ethical and Operational Guidelines
Ethical Frameworks:
Develop and enforce ethical guidelines for AI use in educational settings.
Example: Create a district-wide policy that outlines acceptable AI practices and ensures data privacy and security, safeguarding against the risks of overreliance and misuse.
Operational Protocols:
Establish clear operational protocols to manage AI interactions and maintain human oversight.
Example: Designate specific tasks for AI assistance and require human review at critical junctures, as in the sketch below.
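As a sketch of what "human review at critical junctures" can look like in code (the function name and prompts are illustrative assumptions, not a standard API), a simple gate can refuse to pass AI output downstream until a person explicitly accepts, edits, or rejects it:

```python
def require_human_review(ai_output: str, task: str) -> str:
    """Hold AI output at a checkpoint until a human accepts, edits, or rejects it."""
    print(f"--- AI draft for task: {task} ---")
    print(ai_output)
    decision = input("Accept (a), edit (e), or reject (r)? ").strip().lower()
    if decision == "a":
        return ai_output
    if decision == "e":
        return input("Enter your revised version: ")
    raise ValueError(f"AI output for {task!r} was rejected; handle this step manually.")
```

In practice, each AI-assisted step in an operational protocol would route its output through a gate like this before it reaches students, colleagues, or clients.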
4. Design and Functionality Management
Design for Disruption:
Integrate features that periodically disrupt the seamless operation of AI to bring it back into users’ conscious awareness.
Example: Implement alerts or checkpoints that require human review and decision-making at critical junctures, reminding users of the AI’s tool status (one such checkpoint is sketched in code after this list).
Capability Stretching:
Use AI in ways that push its computational boundaries, leading to unexpected or more complex problem-solving scenarios.
Example: Challenge AI with novel and difficult tasks that require it to stretch its capabilities, ensuring it doesn't become an invisible part of the workflow but remains a noticeable and active tool in the process.
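To illustrate "design for disruption," here is one hedged Python sketch; the wrapper class, the five-call interval, and the checkpoint wording are all assumptions chosen for illustration. Every few calls, the wrapper interrupts the seamless flow and asks the user to consciously decide whether AI belongs in the current step, pushing the tool back from "ready-to-hand" to "present-at-hand":

```python
class DisruptiveAssistant:
    """Wrap any prompt -> str model call and deliberately interrupt seamless
    use every few calls, nudging the user to reconsider whether AI is the
    right tool for the current step."""

    def __init__(self, call_model, checkpoint_every: int = 5):
        self._call_model = call_model          # any function: prompt -> str
        self._checkpoint_every = checkpoint_every
        self._calls = 0

    def ask(self, prompt: str) -> str:
        self._calls += 1
        if self._calls % self._checkpoint_every == 0:
            # The checkpoint pushes the tool back to "present-at-hand":
            # the user must consciously confirm that AI belongs in this step.
            answer = input("Checkpoint: could you do this step without AI? (y/n) ")
            if answer.strip().lower() == "y":
                return "Checkpoint: user opted to complete this step without AI."
        return self._call_model(prompt)
```

The design choice worth noting is that the disruption is built into the interface itself, so keeping the tool visible does not depend on users remembering to stay vigilant.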
Conclusion
Understanding AI agency as a function of human perception allows us to maintain its status as a tool rather than an agent. By drawing on Heidegger's insights, albeit reinterpreted, we can develop strategies to ensure that AI remains an augmentation of human capabilities, not a replacement. This approach is not just about safeguarding educational integrity and ethical standards; it is about rethinking our interaction with AI now before it becomes too deeply embedded in our workflows and lives. As AI continues to evolve, the stakes are high, and the need for a critical perspective on AI agency has never been more urgent.
While discussing Heidegger’s concepts, it’s critical to also address his controversial past. Heidegger’s philosophical insights come with a complex legacy, given his problematic affiliations. The full extent of this issue goes beyond the scope of a single article, but it’s important to acknowledge the stark contrast between his nuanced theoretical contributions and his ideologies, which disturbingly dehumanized individuals. For further reading:
“Why Does It Matter If Heidegger Was Anti-Semitic?”
“Martin Heidegger’s Antisemitism: The Personal and the Political”
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersections of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.