For me, tool-being vs. tool-object depends on the expertise of the tool user. In the hands of a master, tool-being means automaticity, fluency, metacognition, self-regulation, critical thinking. In the hands of a novice, tool-object is thinghood: some thing or other, wtf do I do with this. Bottom line, I think it takes a lot of expert guidance to develop safe, effective, and sane uses of a power tool like AI, to get comfortable with it. Growth along the tool-object to tool-being axis is what teachers need to monitor.
I realize you’re reading against the grain, but I quibble with your interpretation of Heidegger’s use of the word “primordial.” You read it as if a primordial relationship with a tool were bad or risky, an agency trap with the bot winning: as if the bot could beat us at our own game if we let it treat us like a chimp, an enlightened animal, but nonetheless a chimp willing to cede agency.
I don’t see it that way. A primordial relationship is good, positive: being, not cold, useless object. It is almost biological, like AI vision or hearing repair, good things. I think Heidegger tells us to reach the point where the thinking required to shift a tool from readiness to use (being) is intuitive, in touch with the amygdala. Was that word even a thing in his day?
I’m not sure about the utility of the lists. Can you identify superordinate categories and use bullet points? I’m thinking about slides and posters. Also, translate them into user-friendly terms. Curious to hear other comments.
Excellent work!!! Keep it coming
Great reflections, Terry. Yes, my recent focus on AI agency as a function of perception increasingly puts me at odds with Heidegger, and you are productively bringing that tension to the surface. I wonder what Heidegger would make of tools that capture attention as intensely as our modern tech tools do. We could draw an analogy to the past: a musical instrument can be highly immersive, pulling the user into a deeply meditative state. When a string breaks, that attention is disrupted quite abruptly.
However, tools like an AI-powered song generation platform or AI social companions—designed for pure entertainment—make disengagement increasingly difficult due to the dopamine hits they provide. One might still uphold the tool-object/tool-being dichotomy at a very abstract level, perhaps qualifying it with reference to the user's intentionality or skill. While I appreciate these qualifications, it seems we're witnessing a reversal in our relationship with these "all-too-modern" tools.
In the ed-tech world, agency is promoted as a gateway to a greater degree of tool-being, yet it also opens the door to overreliance and dependency. I realize I will eventually need to back away from these strong judgments, and I am working toward that. But this concept feels crucial to explore right now. I might take another shot at it next week. I'm not satisfied with this post, but your thoughts are helping me pick up the pieces and find a direction forward.
I like this framing a lot. I've got an essay coming out soon titled "Augmenting Intelligence," which covers a lot of the same points (probably because I reference at least three of your essays). I just wish more people would slow down and think for a hot second.
Try this on. A tool is an object when it is not in use. It takes on being when it is used. Its existence, its being, is manifested by the hands that use it. It inherits being from the human or chimp employing it intentionally (Vygotsky’s observation of primates using sticks to collect honey). The more expert the user, the more nuance and precision the tool inherits, and so it takes on a productivity on behalf of the user that is more responsive, intuitive, and stable.
Consider the case of the AI social companion. When no one is using it, it is an object. When a user takes it in hand, or in mind, with an intention to do something, the tool inherits agency and does the job: it produces dopamine hits.
The same scenario holds for glue. In the tube it is an object. In the hands of a skilled furniture builder it shifts to being, and it serves its function all the better for expert application. Glue does not change its status as tool-object or tool-being because the human intention is bad, dangerous, or immoral. That’s a human problem, not a tool problem. Maybe humans need to restrict use of the tool, but such governance doesn’t change the underlying structure.
The path from tool-object to tool-being runs through the gradual release of responsibility model of pedagogy, focused on tool use within epistemology and disciplinary discourses. Educators can’t solve all of the societal issues AI has ushered in.