AI in Schools: When Every Class Teaches Something Different
Which version of AI literacy are your students engaging with in your class? How does your approach complement or conflict with other teachers in your building? Why does this matter?
It's 9:45 AM on a Tuesday, and seventh-grader Maya is finishing her English class where Ms. Rodriguez just spent fifteen minutes explaining why using ChatGPT for essays constitutes plagiarism. "AI can't think for you," she emphasized. "Your writing must be your own voice."
At 10:30 AM, Maya walks into Mr. Chen's science class, where he demonstrates how to use ChatGPT to generate hypotheses for their climate change research project. "This is a powerful tool for brainstorming," he explains. "Let's see what it suggests about ocean acidification."
By 11:15 AM, Maya is in social studies with Mrs. Washington, who has loaded an AI tutoring bot onto every student's Chromebook. "This will help you practice argumentative writing," she announces. "The AI will give you feedback and suggestions as you draft your essays about the Revolutionary War."
Maya isn't alone in her confusion. Across the country, students are receiving contradictory messages about AI's role in education—sometimes within the same school day. While educators scramble to develop AI policies, three distinct and conflicting approaches are emerging simultaneously:
The Three-Pathway Problem
Pathway 1: Core AI Literacy focuses on teaching foundational concepts about how AI works: algorithms, training data, bias, and limitations. These programs often emerge from computer science departments or forward-thinking library media specialists. Students learn that AI predicts patterns rather than understanding meaning, and they practice identifying hallucinations and bias in AI outputs.
Pathway 2: Disciplinary-Specific Integration embeds AI literacy within subject areas. English teachers focus on authorship and voice, math teachers on verification of solutions, science teachers on evidence and reproducibility. These efforts often develop organically as individual teachers experiment with AI tools in their classrooms.
Pathway 3: Conversational AI as Instructional Authority positions AI chatbots as teaching assistants or tutoring partners. Students receive direct instruction from AI systems on content, skills, and problem-solving strategies. These tools are often adopted at the district level for their promise of personalized learning and teacher workload reduction.
The problem isn't that any of these approaches is inherently wrong. The problem is that they're happening simultaneously without coordination, creating what I call "profound asymmetries" in student experience and understanding.
The Consequences of Collision
This uncoordinated rollout creates several critical problems:
Mixed Messages About Authority: Students like Maya receive fundamentally different guidance about whether AI is forbidden, required, or somewhere in between. In one class, AI assistance signals academic dishonesty. In another, it's a required research tool. In a third, it's positioned as an authoritative instructor.
Inconsistent Skill Development: A student who learns to fact-check AI outputs in their core literacy class may never apply that skill when using an AI tutor for math homework. The critical thinking habits developed in one context don't transfer to others.
Teacher Confusion and Burden: Individual educators are left to navigate AI integration without institutional guidance. Some embrace experimentation while others avoid AI entirely. The result is wildly inconsistent student preparation and teacher stress.
Undermined Learning Objectives: When students use conversational AI positioned as authoritative while simultaneously learning to critique AI outputs as unreliable, the cognitive dissonance can undermine both sets of learning objectives.
Why This Matters Now
This isn't just a temporary growing pain. Districts are making infrastructure investments and policy decisions that will shape AI education for years. Without systematic coordination, we risk entrenching these contradictions rather than resolving them.
Consider what happens when Maya reaches high school. If her middle school experience taught her that AI is simultaneously forbidden, required, and authoritative, how will she navigate college applications that may require disclosure of AI assistance? How will she approach workplace environments where AI collaboration is expected but critical evaluation is essential?
The stakes extend beyond individual student confusion. Districts investing in separate AI literacy curricula, disciplinary integration efforts, and conversational AI platforms may find these investments working at cross-purposes. Teachers receiving conflicting professional development messages may retreat from AI integration altogether rather than risk mixed messages or policy violations.
The Need for a Systematic Approach
Other educational innovations have faced similar coordination challenges. The most instructive parallel comes from Writing Across the Curriculum (WAC), which emerged in the 1970s to address the problem that writing instruction was isolated in English departments while writing demands existed across all subjects.
WAC succeeded where it developed systematic, institution-wide approaches that coordinated efforts across disciplines while respecting subject-area expertise. It failed where it remained ad hoc or relied solely on individual teacher initiative.
The AI literacy challenge is remarkably similar: a cross-cutting competency that doesn't naturally fit within existing disciplinary boundaries, requiring coordination across departments and grade levels, with high stakes for both student learning and institutional coherence.
But unlike WAC's gradual emergence over decades, AI integration is happening at compressed speed with massive commercial and policy pressures. Districts don't have the luxury of slow experimentation. They need frameworks that can coordinate multiple pathways without sacrificing the legitimate strengths of each approach.
What Comes Next
The solution isn't to eliminate any of the three pathways. Core AI literacy provides essential foundational knowledge. Disciplinary integration ensures skills transfer to authentic contexts. Conversational AI offers genuine pedagogical benefits when used appropriately.
The challenge is coordination: How do we ensure these approaches reinforce rather than undermine each other? How do we maintain the benefits of each while eliminating the contradictions that confuse students and fragment learning?
The answer requires looking beyond curriculum to institutional design. We need frameworks that can orchestrate systematic AI literacy development across pathways, grade levels, and subject areas—while building on proven approaches to educational innovation rather than starting from scratch.
In my next article, I'll explore what Writing Across the Curriculum teaches us about implementing such systematic approaches, and why understanding WAC's successes and failures is essential for anyone serious about coherent AI education.
Nick Potkalitsky, Ph.D.
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy
But who is actually teaching the core AI literacy skills students need? Or are students being asked to do that work for themselves inside a regulation-first integration framework?
You have to start somewhere. Letting another year pass without student-facing AI literacy is absurd at this point, and dangerous, as many of the student misuse cases indicate. Teachers need thoughtful use cases inside their disciplines to integrate into their curriculum, at least as a possible point of complication. Many districts are asking for this assistance right now, and teachers don't have time to build it themselves. The naysayers will continue to resist, but in my training work, that subset is gradually shrinking. It is time to start building and leading. I get that your school may still have plenty of resisters. But times are changing elsewhere.