How students are developing the capacity to experience AI collaboration and pattern recognition simultaneously—and why most of us need to get a little weird with these systems first.
The Apple paper is a complete shame. Its title is deliberately biased, a reaction to Apple's failure at WWDC to show anything interesting on AI. It does not actually prove anything interesting about 'thinking' in LLMs or LRMs. And what is even worse in the way people talk about the paper: LLMs are not AI, they are a part of AI. With this simplification, we 'prove' something about LLMs (or pretend to), and everybody makes the incredible extrapolation to all of AI, all models, all techniques. Shame.
Hmmmm... "the ability to experience the AI's agency without being seduced by the illusion of its intelligence" - I actually tend to think of it the other way round! Treating AI as intelligent is not as much of a risky thing to do as treating AI as if it has true agency. For me, current AI clearly shows "intelligence" - just not the same as human intelligence. Agency for me has to do with intrinsic motivations, perspectives, beliefs, goals, experiences - which current LLM-powered AI doesn't truly have.

The fact that we currently have to prompt LLMs to give them their "identity" (eg "you are an experienced strategic consultant") shows how they lack true agency. They can act as "agents" in the same sense that we can set up a traditional computer program to act as an agent. In fact current AI agents are called agents because of their ability to use tools and interface with other digital services. The fact that they can take actions does not necessarily give them true "agency" (any computer program can take actions - even a basic thermostat can take actions).

"Intelligence" for me is more about the capacity to reason, interpret, act, synthesise, create, take decisions, etc. LLMs can do all those things, even if they sometimes fail (as do humans).

Even the tricky word "understanding" I don't really have a problem with in the context of current AI. If I give a chatbot an instruction and it acts as if it has understood it, then I don't have an issue with saying that it has "understood" my instruction. I certainly don't feel I have to ascribe any sort of consciousness, intentionality or sentience to use words like "understood" with AI. When I say "understood" - I mean that it has recognised and interpreted the information (instruction) I have given it and used that information appropriately to influence its outputs. It has extracted meaning from my prompt. I feel that "understood" is a reasonable shorthand for this process, without speculating too much about what sort of "world models" or other internal representations the AI may or may not have constructed.

If we are really scared about conflating AI with human thought, then we could invent entirely new words. We could resolve to never say AI has "understood" - only that it has "grokked" the information, or something like that. We could avoid saying the model is "thinking" and instead stick to "processing".
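To make that point about prompted "identity" concrete, here is a minimal sketch in Python (the persona and question are invented for illustration, and the list-of-messages layout follows the common chat-API convention) showing that the model's "identity" is just text we send along with the conversation, not anything the model holds on its own:

```python
# A minimal sketch of how an LLM's "identity" is supplied: it is plain text
# placed ahead of the conversation, not an intrinsic goal or motivation.
messages = [
    # The "system" message assigns the persona; change the string, change the "identity".
    {"role": "system", "content": "You are an experienced strategic consultant."},
    {"role": "user", "content": "How should we prioritise our product roadmap?"},
]

# Most chat APIs accept a structure like this; the model simply conditions on
# the combined text and continues it. Nothing persists between calls unless we
# send the persona again.
for m in messages:
    print(f"{m['role']}: {m['content']}")
```

Swap the system string for a different one and the "consultant" disappears entirely, which is the sense in which the role is ours, not the model's.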
As for how we should interact with chatbots - since they are trained to work best with natural human conversation styles, users will usually get best results by interacting in natural human language. Just as long as we remember that they are not human, and they don't "think" in the same way we do. It's important for us to learn where AI "thinking" is flawed, just as we need to learn where our fellow humans' thinking is flawed.
Thanks for your thought-provoking piece, Nick, as always!
A lot here to digest. This piece is testing out the depth and reach of Floridi's conception of AI, particularly as applied to student interactions. Floridi defines agency in terms of interactivity, autonomy, and adaptability. It is a very broad definition and covers most of the things you are describing. To me, this is a productive reset of the conversation. In some senses, I am far less interested in the ultimate determination of intelligence, and much more interested in the ways we choose to act in response to AI output. I am working toward a curriculum that involves a double viewpoint. On the surface, we posit intelligence and engage conversationally. But in the depths, we focus on agency and engage computationally. To me, this is where we need to move as users of these advanced machines. Check out Floridi's full paper on agency: a useful typology for sure, perhaps not the whole story, but a good story. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5135645
Yes, I've read the Floridi paper and found it an interesting perspective. I like your idea of a "double viewpoint" curriculum. We will surely become very used to interacting with AI *as if* it has intelligence (and agency) akin to humans, but everyone should learn as much as they can about how ML models are actually constructed mathematically/stochastically, in highly artificial training environments, and how this gives rise to both strengths and weaknesses.
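As one small illustration of that "mathematically/stochastically" point, here is a toy Python sketch of the sampling step behind each generated token (the four-word vocabulary and the logits are invented for the example; real models score tens of thousands of tokens at every step):

```python
import numpy as np

# Toy sketch of the stochastic step behind each generated token.
# The vocabulary and logits below are invented for illustration only.
rng = np.random.default_rng(0)

vocab = ["understand", "process", "grok", "compute"]
logits = np.array([2.0, 1.2, 0.3, -0.5])  # model's raw scores for the next token

def sample_next_token(logits, temperature=1.0):
    """Temperature-scaled softmax sampling: lower temperature -> more deterministic."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample_next_token(logits, temperature=0.7)
print("probabilities:", dict(zip(vocab, probs.round(3))))
print("sampled token:", vocab[idx])
```

Run it a few times at different temperatures and the same scores yield different "choices" - a reminder that some of what reads as judgment is a draw from a probability distribution, which is exactly the kind of strength-and-weakness detail worth teaching.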
This is, in some ways, similar to my own approach, which I named “parasocial machine kayfabe”.
Unpacking that concept briefly, it means that the machine can’t care for me or truly even know about me in any way understandable to a human.
But, there is value to be found in behaving as though it can. Whatever type of intelligence it might or might not have, it uses human language convincingly and so the imagination is a tool that becomes useful for managing the interaction.
Where I disagree is that I see LLMs as having intelligence, albeit an inhuman variety of this, but little agency. It gives me what I ask for, more or less. It adapts to my tone. It doesn’t seem to care or even notice when I ignore its questions or change the subject abruptly.
Exactly - for me this is the more intuitive and helpful way to treat current AI systems. They are intelligent, but that intelligence is very different to human intelligence. But they do not have "agency" in the sense that we would normally mean (intentionality, goals, beliefs, motivations, experiences, etc). I feel that it is more of a stretch to broaden our definition of "agency" to include AI than it is to broaden our definition of "intelligence" to include AI. It may be that a lot of features of "agency" can only arise from being a living creature with feelings, drives, instincts and social relationships/dependencies.
Exactly. I’d say that embodied aspects of human life often drive agency, like biological drives. Also intuition, which is, in my experience, not well modeled by AI.
Love the engagement here. You both are right to question this description of AI. It is very counterintuitive, particularly in light of the way these conversational dynamics have come to create a certain ethos and explanatory system around this current version of AI. As I say above, this piece is an attempt to move from a concept to different kinds of engagement patterns. I am starting to think of reading AI text as analogous to reading a good work of realistic fiction. Part of the value is becoming immersed in the mimetic illusion, falling into the sway of the conversation, positing an intelligence that can match my responses in its own idiosyncratic way. But we are seeing the dangers all around--grief bots, hype about AGI, after-life bots--of confusing this one layer or epiphenomenon with the whole experience. After two years of conversing with AIs, I am ready to analyze the deeper architecture. Why this particular response? How do I function as the antecedent for the kinds of output I am receiving? This to me feels like real AI literacy.
I really appreciate this framing. It aligns with what I’m exploring through my CRAFT framework—how AI can help leaders and educators design systems that center clarity, care, and equity. The point isn’t whether AI reasons like we do. It’s how we can use it to support human judgment, lighten cognitive load, and make space for deeper connection. Thanks for naming this so clearly.