Where Should Student AI Literacy Live?
Let's consider three competing models for student AI literacy initiatives in the 2025-26 school year!
Thank you, my amazing readers, for the thoughtful feedback on recent posts—your responses have been invaluable as I work through these ideas about student AI literacy. This project matters deeply to me, and I’m finding my thinking shifts significantly between posts as new perspectives emerge. Consider this post—and the ones that follow—as snapshots of a mind in motion, capturing moments in the development of what I hope will become a more complete framework.
While school boards debate AI policies and teachers argue about ChatGPT in classrooms, millions of teenagers are already living in an AI-integrated world. They’re asking Claude for relationship advice, using ChatGPT to brainstorm college essays, and consuming AI-generated content across social media without a second thought. The question isn’t whether students should use AI—they already are. The question is whether schools will teach them to use it thoughtfully, or leave them to figure it out alone.
Meanwhile, districts are scrambling to answer a deceptively simple question: Where should student AI literacy live? Should it live in the computer science department, where teachers understand the technology? In English classes, where questions of authorship and voice feel most urgent? In library media centers, where information literacy has always been the focus? Or perhaps in a new, standalone curriculum that treats AI literacy like driver’s education—a discrete skill set that students master once and apply everywhere?
This isn’t just an academic planning question. It’s a territorial battle with real consequences. Computer science teachers argue they’re the only ones qualified to explain algorithms and bias. English teachers insist they’re already overwhelmed without adding AI evaluation to their plate. Librarians want to expand their digital citizenship role but worry about bandwidth. And administrators, facing pressure to “do something” about AI, often default to the path of least resistance: find one department to own the problem.
Out of this chaos, three distinct approaches are emerging. The Isolation Model treats AI literacy like digital citizenship—something specialists handle in designated spaces so regular teachers don’t have to. Put it in the library, create an AI literacy course, find the “connector zones” where a few dedicated educators can deliver consistent messages to all students. It’s clean, trackable, and doesn’t require every teacher to become an AI expert overnight.
The Distribution Model takes the opposite approach: everyone does a little bit of the work. Every teacher adapts core AI literacy concepts to their unique context. Math teachers address AI-generated solutions, history teachers tackle AI bias in sources, art teachers explore AI creativity. It promises rich, contextualized learning where students engage with AI literacy in complex, discipline-specific ways.
Both approaches sound reasonable in theory. Both are failing in practice.
The Isolation Model fails because AI literacy isn’t like driver’s education—a discrete skill you learn once and apply everywhere. When students encounter AI-generated historical sources in social studies class, they need more than generic “check your sources” advice. They need to understand how AI reproduces historical bias, how to cross-reference claims against primary documents, how to recognize when AI fills gaps in the historical record with plausible-sounding fiction. The librarian who taught them to “be skeptical of AI” in September can’t be there in March when they’re evaluating AI-generated content about the Civil War.
The Distribution Model fails because it asks every teacher to become an AI literacy expert while ignoring the reality of teacher capacity and resistance. The history teacher who refuses to acknowledge AI exists in their classroom doesn’t magically develop AI evaluation skills through professional development. Meanwhile, the enthusiastic math teacher who embraces AI tutoring may inadvertently teach students to trust AI-generated solutions without verification, directly contradicting the critical thinking skills taught in English class.
But there’s a third way emerging from districts willing to acknowledge both the necessity of coordination and the reality of institutional constraints: the Hybrid Model.
In Dayton Public Schools, I’ve worked as a consultant to help the district develop this hybrid approach. Over the summer, we built an AI literacy scope and sequence aligned with the district's infrastructure rollout, its broader mission and objectives for AI implementation, and its evolving understanding of developmentally appropriate AI use cases. Rather than mandating uniform lessons, the district created grade-banded, teacher-led development processes. Within each band, teachers from different disciplines collaborate to create contextualized lessons that work whether delivered in a dedicated space or distributed across courses—math teachers develop AI literacy lessons that are authentically mathematical, history teachers create lessons that are genuinely historical, all while reinforcing the same foundational concepts.
This matters because it acknowledges a crucial truth: AI literacy isn’t about learning to use specific tools or follow generic guidelines. It’s about developing critical thinking habits that transfer across contexts where AI-generated content appears. And AI-generated content is appearing everywhere.
Because here’s what the territorial debates miss: AI literacy doesn’t need to find a home in schools. It’s already moved in. Every time a student submits a research paper with AI-generated sources, brings an AI-written college essay draft to their counselor, or references information they got from ChatGPT during class discussion, they’re requiring AI literacy from their teachers whether those teachers want to provide it or not.
The teachers who insist they won’t “do AI” in their classrooms aren’t preserving their autonomy—they’re surrendering it. When students bring AI-generated historical analysis to history class, the teacher who refuses to engage with its evaluation isn’t avoiding AI literacy; they’re teaching it implicitly. They’re teaching students that AI-generated content doesn’t warrant critical examination, that sources don’t need verification, that the distinction between human and artificial reasoning doesn’t matter. These aren’t pedagogically neutral choices.
Meanwhile, their students are getting their most consequential AI literacy education from the AI systems themselves. ChatGPT is teaching them what questions to ask and how to refine prompts. Claude is modeling research strategies and argument construction. These AI systems aren’t neutral tutors—they’re actively shaping how students think about information, authority, and knowledge construction.
This is why the question “Where will AI literacy live?” is the wrong question. The right question is: “How will we coordinate the AI literacy that’s already happening?” Students are learning about AI interaction from every AI system they use, about AI evaluation from every teacher who does or doesn’t address AI-generated content, about AI’s role in knowledge construction from every assignment where AI assistance is permitted, prohibited, or ignored.
The solution isn’t finding AI literacy a permanent address—it’s acknowledging that it’s already everywhere and needs systematic coordination. This requires moving beyond territorial thinking toward institutional frameworks that can ensure coherent skill development across the contexts where students actually encounter AI.
Districts can continue the current collision course: isolated AI literacy courses that students never connect to disciplinary work, subject-area teachers making contradictory decisions about AI use, and students developing critical thinking habits in one context that they abandon in another. Or they can embrace coordination models that respect both institutional realities and student needs.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most insightful, creative, and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.


