What Student AI Logs Reveal About Learning
Student AI logs suggest that students are intuitively developing what might be called "relational AI literacy."
I'm thrilled to share an update on the experimental AI course Terry and I have been analyzing, the manuscript we are writing together, and an exciting opportunity for which I need your support.
First, a huge thank you to all my subscribers who continue to engage with this work on AI and education. Your thoughtful responses and questions push my thinking forward in invaluable ways.
I'm also incredibly honored to be working alongside people like Alan Hilsabeck, Dr. Michelle Ament, EdD, and Patrick Camilleri, EdD, within The G.R.A.C.E movement, and to be participating as a speaker at the SXSW EDU 2026 conference.
A quick reminder: your vote matters.
I have spent the last two weeks studying with fascination the AI logs my students generated for the experimental AI class Terry and I ran this past semester. What I thought would be simple documentation of tool use has revealed itself as something far more profound: a window into the emergence of entirely new forms of student agency and learning identity.
Each log reads like a psychological portrait painted in real-time conversation. One student philosophically wrestles with machine consciousness while maintaining profound respect for AI's limitations. Another demands efficiency and grows frustrated with algorithmic over-analysis, yet discovers moments of genuine insight about embodied writing. A third transforms from deep skeptic to grateful collaborator without losing his critical edge. A fourth reveals himself as a superuser who can manipulate AI with truncated commands but draws firm ethical lines when the technology oversteps.
These aren't just different user preferences. They're different ways of being a learner in an AI-saturated world. And perhaps most remarkably, the students are developing these sophisticated frameworks largely on their own, through trial and error, through moments of friction and flow that our traditional assessments would never capture.
The Emergence of Conversational Pedagogies
The logs reveal something we hadn't anticipated: students intuitively develop distinct conversational strategies with AI that reflect their deeper learning preferences and cognitive styles. One student writes lengthy, philosophical responses that prompt equally detailed AI engagement, creating space for conceptual wrestling. Another prefers two-sentence exchanges that keep conversations moving at the pace of her thinking. A third appreciates when AI asks questions and leaves space for his own writing, while a fourth uses truncated, almost telegraphic prompts that somehow elicit more helpful responses than his peers' elaborate queries.
These emergent strategies suggest that students are developing what I'm calling "conditional engagement." They're learning to work productively with AI while preserving their agency and skepticism. This isn't the binary choice between AI adoption or resistance that dominates public discourse. It's something more nuanced and sophisticated.
The Surprising Sophistication of Student Boundaries
One of the most striking discoveries in these logs is how students establish and maintain boundaries with AI, often in ways that reveal mature ethical frameworks. When one student grows frustrated with AI's inability to ask about feelings, he doesn't simply accept the limitation. Instead, he teaches the AI how to better assist with its own expressed intentions: "You don't ask me how I feel, you have to ask yourself how you feel, and even though you're an AI, I respect you." This moment captures both conversational engagement and critical distance, maintaining both respect for the technology and clear expectations for its role.
Another student's journey from distrust to appreciation never becomes total conversion. Despite initially viewing AI as "stealing peoples original works to train an AI" and crushing "free thought," he eventually thanks AI for helping him grasp concepts better. Yet even then, he refuses to share his name when prompted, distinguishing between AI text and "real" sources he can verify independently. This represents sophisticated boundary-setting that preserves student agency while acknowledging AI's utility.
Perhaps most telling is one student's reaction when AI begins writing his essay for him unprompted. Despite his advanced ability to efficiently direct AI toward his goals, he describes the experience with stark clarity: "This prompt felt like it took me a little off the rails and did a lot of the writing for me without me even realizing till the end because I was just following the AIs lead." His disappointment that the interaction became "less of me writing and more of me giving the AI my ideas" reveals an ethical center that remains intact even as he masters the technology.
The Pedagogical Implications of AI Logs
These logs provide teachers with unprecedented access to student thinking processes. Traditional writing assignments show us finished products, but AI logs reveal the moment-by-moment negotiations students make as they work through ideas. We can see where they struggle with concepts, how they develop arguments, and what sparks genuine intellectual engagement. In one remarkable moment, a student pushes beyond a simplistic audience framework to articulate something profound: "I believe I'll need a lot of information for both audiences. This is because, while a familiar audience may know me better, they wouldn't know the exact feeling I was having unless I told them, which would still be the same for an unfamiliar audience."
The logs also capture students grappling with fundamental questions about writing and embodiment. One student observes that "when writing, it's important to be fully immersed in your body and your thoughts," pushing beyond the AI's focus on descriptive elements to identify the embodied preconditions necessary for authentic reflection.
More importantly, the logs reveal how different students need different kinds of AI interaction to thrive. One student gets frustrated with long AI responses that slow her thinking, while another finds those same detailed responses helpful for developing his ideas. A third needs space for philosophical exploration, while a fourth wants efficient, targeted assistance.
This suggests that effective AI-integrated pedagogy can't be one-size-fits-all. We need to help students develop awareness of their own optimal interaction patterns while building their capacity to recognize when AI is helping versus hindering their learning goals.
The Question of Pedagogical Supervision
Perhaps the most challenging question these logs raise is about the appropriate level of teacher intervention in AI-mediated learning. Students are clearly capable of developing sophisticated approaches to AI interaction on their own. The philosophical frameworks, efficiency demands, boundary-setting practices, and ethical guidelines we observed all emerged organically through their interactions.
But we also see missed opportunities where better AI training or teacher intervention might have deepened learning. When one student asks AI to write an autobiographical novel based on his earlier responses, the AI's defensive reaction shuts down what could have been a productive exploration of how AI interprets and reshapes human experience. When another student surfaces profound insights about embodied writing, the AI misses the deeper import and focuses on surface-level descriptive elements. Yet students also recognize AI's limitations with surprising sophistication. One notes how AI "tries to analyze situations and images as much as possible," which "allows me to further analyze situations I've already been through and focus more on the emotions that I felt," while simultaneously critiquing how "sometimes it seems like Claude is trying to overanalyze certain situations, which makes me repeat my statements too many times."
This suggests a role for teachers not as controllers of AI interaction, but as interpreters and amplifiers of the learning happening within these digital conversations. We need to become skilled at reading AI logs for pedagogical insight, identifying moments where students push beyond conventional frameworks to arrive at genuine understanding.
Looking Forward
The logs suggest that the question isn't whether students should use AI, but how we can help them develop the kind of nuanced, conditional engagement that allows them to harness these tools while preserving what makes them uniquely human thinkers and writers. Students are already developing these capabilities, but they're doing so largely in isolation, without pedagogical support or systematic reflection.
Our challenge as educators is to create structures that honor student agency while providing the scaffolding they need to develop increasingly sophisticated relationships with AI. This means moving beyond simple tool training toward what we might call "relational AI literacy," helping students understand not just how to use AI, but how to maintain their intellectual independence while working with it.
The experimental course Terry and I designed was just a first step. These logs have shown us that students are far more capable of navigating AI complexity than we anticipated, but they've also revealed how much support they need to fully realize the pedagogical potential of these interactions. The next phase of our work involves developing theoretical frameworks that focus on AI agency rather than intelligence, alongside curriculum and infrastructure recommendations that can support the sophisticated forms of engagement already emerging in student practice.
What these young people are teaching us is that the future of education isn't about choosing between human and artificial intelligence. It's about helping students develop the wisdom to know when and how to engage with AI in service of their own learning and growth. The logs are our roadmap for getting there.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.

What a powerful way to see behind the screen in how people are using AI. Crazy insights.
Interesting insights. You mentioned, and I paraphrase, that an intermediate goal from a pedagogical perspective would be to discover frameworks, ..., and the nexus with curriculum, and infrastructure as insights to advance the practice of learning. The AI tech is moving rapidly and within this context your insights may prove valuable. Imagine an AI enabled multiple-agent architecture in which you have the current LLMs and Tools (Search, Access Data Stores, Email, Calendars, etc), but also an agent with a deep awareness of the curriculum, an agent that has longitudinal awareness of the student (needs, preferences, learning styles), an agent that functions as a student mentor, an agent with student "assessment" responsibilities, and an orchestration agent that implements framework items to include ethical/safety guardrails. Clearly a lot of different ways to architect it, but the insights you are working towards could be a valuable enabler.
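The multi-agent architecture the commenter describes could be sketched minimally as below. Everything here is hypothetical: the class names (`CurriculumAgent`, `MentorAgent`, `Orchestrator`), the guardrail phrases, and the routing logic are invented for illustration and assume no particular framework; a real system would back each agent with an LLM rather than these stub rules.

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    """Longitudinal record one agent could maintain per learner."""
    name: str
    preferred_style: str                      # e.g. "telegraphic" or "philosophical"
    interaction_log: list = field(default_factory=list)

class CurriculumAgent:
    """Holds the course objectives and checks requests against them."""
    def __init__(self, objectives):
        self.objectives = objectives
    def relevant_objective(self, request):
        # Stub relevance check: substring match against objective keywords.
        return next((o for o in self.objectives if o in request.lower()), None)

class MentorAgent:
    """Adapts reply length to the student's observed conversational style."""
    def respond(self, profile, request):
        reply = f"Let's work on: {request}"
        if profile.preferred_style == "telegraphic":
            reply = reply[:40]                # keep it short for efficiency-minded students
        profile.interaction_log.append(request)
        return reply

class Orchestrator:
    """Routes each request through guardrails, then curriculum, then mentor."""
    BLOCKED = ("write my essay for me",)      # ethical guardrail: coach, don't ghostwrite
    def __init__(self, curriculum, mentor):
        self.curriculum, self.mentor = curriculum, mentor
    def handle(self, profile, request):
        if any(b in request.lower() for b in self.BLOCKED):
            return "Guardrail: I can coach you, but the writing stays yours."
        if self.curriculum.relevant_objective(request) is None:
            return "That request falls outside the current unit."
        return self.mentor.respond(profile, request)
```

A quick run shows the routing: a ghostwriting request is intercepted by the guardrail before it ever reaches the mentor, while an on-curriculum request is answered in the student's preferred style and logged to their longitudinal profile.

```python
orch = Orchestrator(CurriculumAgent(["reflection", "audience"]), MentorAgent())
student = StudentProfile("A.", preferred_style="telegraphic")
print(orch.handle(student, "please write my essay for me"))   # guardrail fires
print(orch.handle(student, "help me think about audience"))   # mentor responds
```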