AI Models Don't Reason. So What?
How students are developing the capacity to experience AI collaboration and pattern recognition simultaneously—and why most of us need to get a little weird with these systems first.
Apple's recent research concluding that AI reasoning models are fundamentally sophisticated pattern-matching algorithms has sent familiar ripples through educational circles. Once again, we're witnessing the predictable cycle: initial AI enthusiasm followed by the stark realization that these systems don't "think" the way we do, leading to wholesale dismissal of their educational significance. Some educators are declaring AI useless for learning, while others double down on treating these systems as thinking partners.
Both responses miss the point entirely. The revelation that AI models are "just" pattern recognition isn't a limitation to overcome—it's the key to understanding their actual educational potential. Instead of asking whether AI can reason like humans, we should be asking how students can learn to work strategically with systems that exhibit genuine agency without intelligence.
The Seductive Conversation
There's something almost irresistible about the way these AI systems talk back to us. They respond with such apparent thoughtfulness, such seeming consideration of our ideas, that it's easy to slip into treating them as intellectual partners. I've caught myself doing it countless times—falling into the rhythm of conversation, feeling heard and understood by what I know, intellectually, is sophisticated pattern matching.
This isn't a failure of critical thinking. It's the natural human response to encountering what Luciano Floridi calls "agency without intelligence"—systems that can act, respond, and influence without truly understanding. As Floridi argues in his recent work on artificial agency, we face a fundamental choice: either expand our conception of intelligence to include artificial forms, or expand our understanding of agency to encompass forms that lack cognition and mental states. The conversational interface is so compelling that even those of us who study these systems professionally find ourselves momentarily forgetting we're interacting with what Floridi describes as "purpose-bounded computational agents."
But here's where it gets interesting: the most effective student users I observe aren't trying to resist this conversational pull entirely. Instead, they're developing something more sophisticated—the ability to experience both the collaboration and the pattern recognition simultaneously.
The Paradox of Dual Awareness
Let me share what I witnessed last semester. A student was working on analyzing fieldwork data, using an AI system to generate three different analytical approaches. As I watched, something remarkable happened: she engaged with the AI's suggestions as if they were coming from a thoughtful colleague—reading them carefully, considering their merits, building on their insights. But simultaneously, she maintained complete awareness that she was working with pattern recognition, not intelligence.
"This one feels too mechanical," she said about the first approach, then paused. "Well, they're all mechanical, but this one doesn't give me anything to work with." She wasn't dismissing the AI's agency—its ability to surface patterns and propose alternatives—but she also wasn't mistaking that agency for understanding.
What she had developed was dual awareness: the capacity to experience the collaboration while remaining conscious of its algorithmic nature. She could engage with the AI's responses as intellectually generative without losing sight of what they actually were—what Floridi would call sophisticated "pattern recognition and matching, without crossing into genuine comprehension."
The Necessary Estrangement
But here's the crucial insight: every student I've observed who achieves this balanced dual awareness went through a period of what I'm calling "necessary estrangement" first. They had to deliberately disrupt their natural conversational responses to these systems.
I've seen students develop almost ritualistic practices: referring to the AI as "the program" or "the system," deliberately using formal prompting language instead of casual conversation, or even pausing after each AI response to explicitly remind themselves they're working with pattern matching, not intelligence.
One student told me she spent weeks forcing herself to think "this is code responding to code" every time she interacted with ChatGPT. It felt awkward and stilted, she said, but it was necessary. She needed to break the conversational spell before she could engage strategically.
This estrangement isn't permanent—it's developmental. Students who push through this deliberately uncomfortable phase often emerge with something remarkable: the ability to experience the AI's agency without being seduced by the illusion of its intelligence.
The Collaborative Reality
What emerges from this process isn't the absence of collaboration, but a more sophisticated understanding of what collaboration with algorithmic agents actually involves. These students learn to work with systems that can initiate responses, propose alternatives, and shape intellectual work—genuine agency—while maintaining clarity about the fundamental difference between agency and intelligence. In Floridi's framework, they're learning to collaborate with agents that exhibit "interactivity, autonomy, and adaptability" within their designed parameters, but without the consciousness or intentionality that characterizes human agency.
They develop what I'm calling "algorithmic literacy": the capacity to collaborate with agents that possess agency but lack intelligence, while preserving their own intellectual sovereignty. They prompt with precision, evaluate outputs skeptically, and maintain clear boundaries around what they will and won't delegate.
Most importantly, they learn to recognize when their own thinking cannot and should not be delegated. They use AI for information gathering, initial brainstorming, and surfacing patterns, but they maintain ownership over meaning-making, critical evaluation, and creative synthesis.
The Framework: Agency Literacy
What I'm observing is the emergence of agency literacy—the ability to strategically manage one's intellectual agency when working with AI systems. This operates across three dimensions:
Engage: Recognizing when a task requires full human cognitive commitment. Students learn to identify moments when their personal experience, values, or creative insight cannot be delegated.
Distribute: Strategically dividing cognitive labor between human and algorithmic processes, leveraging AI's pattern recognition capabilities while maintaining authority over meaning-making.
Suspend: Temporarily setting aside personal agency to explore algorithmic perspectives before reasserting human judgment—engaging AI outputs as thought experiments rather than authoritative answers.
Getting Weird to Get Wise
The path to this sophisticated agency literacy seems to require getting a little weird with these systems first. The conversational interface is so persuasive that most of us—myself included—need to deliberately disrupt our natural responses before we can develop more balanced approaches.
This might mean talking to AI systems like they're programs rather than people, at least initially. It might mean developing formal prompting protocols that remind us we're working with pattern recognition. It might mean pausing after each interaction to explicitly acknowledge what just happened algorithmically.
These practices feel awkward because they go against our social conditioning. But they may be necessary stepping stones toward a more sophisticated relationship with algorithmic agents—one that can experience their genuine agency without mistaking it for intelligence.
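If it helps to make those practices concrete, here is a minimal sketch, in Python, of what one such "formal prompting protocol" might look like. Everything in it is illustrative rather than prescriptive: the function names (formal_prompt, reflect, call_model), the prompt wording, and the stubbed-out model call are my own assumptions, not a tool any of the students described here actually used.

```python
# A minimal sketch of a "formal prompting protocol": every exchange is wrapped
# in an explicit reminder that the system is doing pattern matching, and each
# response is followed by a forced reflection step before the user continues.
# The model call itself is a placeholder; swap in whatever client you use.

from datetime import datetime


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (use an API client of your choice)."""
    return f"[model output for: {prompt!r}]"


def formal_prompt(task: str, constraints: list[str]) -> str:
    """Build a deliberately impersonal prompt instead of a conversational one."""
    lines = [
        "SYSTEM UNDER USE: statistical pattern-matching model, not a colleague.",
        f"TASK: {task}",
        "CONSTRAINTS:",
        *[f"- {c}" for c in constraints],
        "OUTPUT: enumerated options only; no conversational filler.",
    ]
    return "\n".join(lines)


def reflect(response: str) -> dict:
    """Pause after each response and log what just happened algorithmically."""
    return {
        "timestamp": datetime.now().isoformat(),
        "chars_returned": len(response),
        "reminder": "This was generated by pattern recognition; the judgment is mine.",
        "keep": None,  # the human fills this in after reading
    }


if __name__ == "__main__":
    prompt = formal_prompt(
        task="Propose three analytical approaches to my fieldwork notes.",
        constraints=["Do not interpret the data", "Flag assumptions explicitly"],
    )
    response = call_model(prompt)
    log_entry = reflect(response)
    print(prompt, response, log_entry, sep="\n\n")
```

The point is not the code itself; it's that the reminder and the pause get built into the workflow rather than left to willpower.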
The Educational Imperative
We're not just teaching students to use AI tools; we're teaching them to navigate a world populated by agents that can act without understanding, respond without reasoning, and influence without intention. This requires new forms of literacy that go beyond traditional digital skills.
The reconceptualization of AI as artificial agency, as Floridi argues, means that its development should "acknowledge and work within its distinct agency type rather than attempting to replicate human (or even just animal) intelligence." This helps avoid what he calls "anthropomorphic fallacies while maintaining realistic expectations about its capabilities and limitations." Students who internalize this reframing don't become less intellectually sophisticated through AI interaction; they become more so.
Agency literacy may be the most important competency we're not yet teaching systematically. But the students who are developing it naturally are showing us the way: through temporary estrangement toward balanced dual awareness, learning to dance with algorithmic agency while keeping their intellectual feet firmly planted in human ground.
What does your own relationship with AI's conversational interface look like? Have you noticed yourself getting seduced by the conversation? What practices help you maintain that dual awareness?
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
The Apple paper is a complete shame. Its title is deliberately biased, because of Apple's failure at WWDC to offer anything interesting on AI. It is not true that it proves anything interesting about 'thinking' in LLMs or LRMs. And what is even worse, from those like you who talk about the paper: LLMs are not AI, they are part of AI. With this simplification, we 'prove' something about LLMs (or pretend to), and everybody makes the incredible extrapolation to all of AI: models, techniques, etc. Shame.
Hmmmm... "the ability to experience the AI's agency without being seduced by the illusion of its intelligence" - I actually tend to think of it the other way round! Treating AI as intelligent is not as much of a risky thing to do as treating AI as if it has true agency. For me, current AI clearly shows "intelligence" - just not the same as human intelligence.
Agency for me has to do with intrinsic motivations, perspectives, beliefs, goals, experiences - which current LLM-powered AI doesn't truly have. The fact that we currently have to prompt LLMs to give them their "identity" (e.g. "you are an experienced strategic consultant") shows how they lack true agency. They can act as "agents" in the same sense that we can set up a traditional computer program to act as an agent. In fact, current AI agents are called agents because of their ability to use tools and interface with other digital services. The fact that they can take actions does not necessarily give them true "agency" (any computer program can take actions - even a basic thermostat can take actions).
"Intelligence" for me is more about the capacity to reason, interpret, act, synthesise, create, take decisions, etc. LLMs can do all those things, even if they sometimes fail (as do humans). Even the tricky word "understanding" I don't really have a problem with in the context of current AI. If I give a chatbot an instruction and it acts as if it has understood it, then I don't have an issue with saying that it has "understood" my instruction. I certainly don't feel I have to ascribe any sort of consciousness, intentionality or sentience to use words like "understood" with AI. When I say "understood" - I mean that it has recognised and interpreted the information (instruction) I have given it and used that information appropriately to influence its outputs. It has extracted meaning from my prompt. I feel that "understood" is a reasonable shorthand for this process, without speculating too much about what sort of "world models" or other internal representations the AI may or may not have constructed.
If we are really scared about conflating AI with human thought, then we could invent entirely new words. We could resolve to never say AI has "understood" - only that it has "grokked" the information, or something like that. We could avoid saying the model is "thinking" and instead stick to "processing".
As for how we should interact with chatbots - since they are trained to work best with natural human conversation styles, users will usually get the best results by interacting in natural human language. Just as long as we remember that they are not human, and they don't "think" in the same way we do. It's important for us to learn where AI "thinking" is flawed, just as we need to learn where our fellow humans' thinking is flawed.
Thanks for your thought-provoking piece, Nick, as always!