The Framework: What Systematic AI Literacy Actually Looks Like
DSAIL is built on a single repeatable move that works across all disciplines and grade levels: students encounter AI-generated content, then interrogate it through comparison with evidence.
Reaching 10,000 subscribers feels like the perfect moment to release something I’ve been building with this community in mind. Over this four-part series, we’ve explored the why, the what, and the how of student AI literacy—and you’ve shaped every insight along the way. Today, as we close this series, I’m sharing the complete framework as my thank you to each of you who’ve engaged, questioned, and pushed this work forward. This is for all of us committed to preparing students for an AI-integrated world.
The coordination challenge facing AI literacy isn’t unprecedented, and neither is the solution. Fifty years of Writing Across the Curriculum implementation teaches us that sustainable educational change happens through systematic coordination that respects teacher expertise, embeds new skills in existing disciplinary work, and provides clear institutional infrastructure.
The question isn’t whether these lessons apply to AI literacy. The question is what a framework looks like that actually operationalizes them.
After working with Dayton Public Schools and analyzing successful WAC implementations, I’ve developed an approach that addresses the coordination problems I outlined in previous articles. It’s called DSAIL (Discipline-Specific AI Literacy), and it’s designed for the constraints real schools actually face: no curriculum space for new courses, no developmental standards to build on, teachers without time to create AI curricula from scratch, and students who are using AI right now regardless of what policies say.
The Core Pedagogical Move
DSAIL is built on a single repeatable move that works across all disciplines and grade levels: students encounter AI-generated content, then interrogate it through comparison with evidence.
This isn’t about teaching students to use AI tools. It’s about teaching them to critically examine AI outputs (explanations, images, models, data analyses, procedures) within their regular coursework. The interrogation happens through disciplinary lenses:
Science: AI explanation vs. experimental data; AI-generated diagram vs. accurate scientific image; AI model prediction vs. observed results
ELA: AI summary vs. source text
Math: AI solution vs. verification method
Social Studies: AI overview vs. primary documents
World Languages: AI translation vs. authentic text
The breakthrough insight that makes this work: AI output is perspectival. AI doesn’t make random errors. Its outputs reflect the patterns in its training data (which explanations and frameworks dominate, whose voices appear most frequently) and how users frame their prompts. Students learn to ask: What perspective is AI taking here? What’s emphasized? What’s overlooked? When would this be useful? When would it mislead?
This matters because it transforms AI literacy from a technical skill into a thinking habit. Students don’t need to understand transformer architectures or neural networks. They need to recognize that AI outputs represent particular perspectives shaped by training data and user inputs, just as historical sources represent particular perspectives shaped by their authors and contexts.
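The move itself is simple enough to capture as a shared artifact. Below is a minimal sketch in Python of the interrogation protocol as a reusable checklist, for teams that want a common discussion guide; the questions come straight from the framework, while the function and variable names are illustrative choices, not official DSAIL materials.

```python
# A minimal sketch of the core DSAIL move as a reusable discussion protocol.
# The questions come from the framework description above; the function and
# variable names are illustrative, not part of any published DSAIL materials.

PERSPECTIVE_QUESTIONS = [
    "What perspective is AI taking here?",
    "What's emphasized?",
    "What's overlooked?",
    "When would this be useful?",
    "When would it mislead?",
]

def interrogate(ai_output: str, evidence: str) -> list[str]:
    """Pair an AI output with its evidence source and return discussion prompts."""
    return [f"Compare: {ai_output} vs. {evidence}", *PERSPECTIVE_QUESTIONS]

# Example pairing, drawn from the science lens above.
for line in interrogate("AI-generated diagram", "accurate scientific image"):
    print(line)
```

The point of the sketch is the invariant: one AI artifact, one evidence source, the same five questions, regardless of discipline.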
How It Works in Practice
A third-grade science class is observing plants. Students record what they notice about a plant in their classroom (leaf shape, stem characteristics, where flowers appear). Then they read an AI-generated description of that plant type and interrogate: What did AI get right? What did it miss? What can only be known through observation?
The reflection matters: “Computers guess based on patterns, but scientists check with evidence.” Single lesson, forty minutes, embedded in an existing unit. It strengthens the science standard about observation while introducing the foundational AI literacy concept that AI predicts patterns rather than observes reality.
By sixth grade, that foundation supports more complex interrogation. Students run an experiment on how quickly sugar dissolves at different water temperatures. AI claims sugar dissolves faster in cold water, citing molecular density. Students compare that claim with their experimental data showing sugar dissolves faster in hot water. The interrogation deepens: Why did AI get this wrong? What perspective was it taking?
They discover AI gave a “textbook chemistry” explanation based on training data patterns but didn’t account for kinetic energy dominating in their specific conditions. The reflection shifts: “When is AI’s general explanation useful? When does it mislead?” They’re practicing both scientific reasoning and perspectival thinking simultaneously.
By tenth grade, students studying local ecosystems ask AI to describe the food web for their region. They compare what AI emphasized (charismatic predator-prey relationships) with what they observed (decomposers, mutualism, less visible interactions). The interrogation reveals how AI trained on popular nature writing privileges certain relationships while minimizing others. “What gets counted depends on who’s doing the counting.” They’re deepening their understanding of ecology while developing representation literacy.
The Standards Architecture
This approach works because it creates systematic skill development through three interlocking standards levels:
Writing & Research AI Literacy Standards provide the primary practice students engage in (the interrogation moves that work across disciplines):
K-2: Ask where information comes from (with adult prompting)
3-5: Verify AI-generated content, maintain personal voice, compare AI with sources
6-8: Cross-reference AI outputs, brainstorm critically, identify errors and bias, cite properly
9-12: Synthesize AI analysis with independent research, spot logical fallacies, verify data analysis, navigate ethical implications
Core AI Literacy Standards capture the knowledge that emerges from repeated disciplinary practice:
K-2: Recognize AI tools, practice safety, distinguish human vs. AI content
3-5: Use AI critically, understand pattern recognition, evaluate accuracy
6-8: Design ethically with AI, communicate about capabilities and limitations, analyze bias
9-12: Advanced technical literacy, ethical decision-making, strategic use
Disciplinary Standards remain the anchor. Existing state standards in science, ELA, math, and social studies provide the foundation. AI literacy strengthens what’s already being taught rather than competing with it.
Students don’t memorize AI concepts in isolated lessons. They encounter them repeatedly through disciplinary interrogation until understanding becomes durable. A science standard requiring students to “think critically to connect evidence and explanations” becomes the site where students also learn that AI explanations reflect training data patterns and must be verified against experimental evidence.
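For curriculum teams that want to operationalize this architecture, the grade-band progression can be stored as simple data for lesson tagging and coverage audits. Here is a sketch, under the assumption that a district keeps standards in a lightweight format; the skill descriptions are quoted from the lists above, while the structure, field names, and helper function are hypothetical.

```python
# A sketch of the DSAIL grade-band progression as lightweight data a district
# might use for lesson tagging and coverage audits. The skill descriptions are
# quoted from the framework above; the structure itself is a hypothetical choice.

DSAIL_STANDARDS = {
    "K-2": {
        "writing_research": ["Ask where information comes from (with adult prompting)"],
        "core_ai": ["Recognize AI tools", "Practice safety",
                    "Distinguish human vs. AI content"],
    },
    "3-5": {
        "writing_research": ["Verify AI-generated content", "Maintain personal voice",
                             "Compare AI with sources"],
        "core_ai": ["Use AI critically", "Understand pattern recognition",
                    "Evaluate accuracy"],
    },
    "6-8": {
        "writing_research": ["Cross-reference AI outputs", "Brainstorm critically",
                             "Identify errors and bias", "Cite properly"],
        "core_ai": ["Design ethically with AI",
                    "Communicate about capabilities and limitations", "Analyze bias"],
    },
    "9-12": {
        "writing_research": ["Synthesize AI analysis with independent research",
                             "Spot logical fallacies", "Verify data analysis",
                             "Navigate ethical implications"],
        "core_ai": ["Advanced technical literacy", "Ethical decision-making",
                    "Strategic use"],
    },
}

def skills_for(band: str, level: str) -> list[str]:
    """Look up the skills for one grade band and one standards level."""
    return DSAIL_STANDARDS[band][level]

# Example: what a sixth-grade lesson should be reinforcing.
for skill in skills_for("6-8", "writing_research"):
    print(skill)
```

Disciplinary standards stay out of this table deliberately: they live in existing state documents, and DSAIL lessons reference them rather than duplicating them.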
Why This Solves the Coordination Problem
Remember Maya from my first article, receiving contradictory messages about AI across her school day? DSAIL addresses that collision directly.
In Ms. Rodriguez’s English class, Maya learns that writing requires authentic voice and that AI-generated text lacks genuine perspective. In Mr. Chen’s science class, she learns to interrogate AI explanations by comparing them with experimental evidence. In Mrs. Washington’s social studies class, she practices identifying what perspectives AI emphasizes and overlooks when describing historical events.
These aren’t contradictory messages. They’re complementary applications of the same core insight: AI output is perspectival and must be evaluated against evidence. Maya learns different manifestations of the same critical thinking habit across contexts where AI actually appears.
The coordination happens through shared understanding of the core pedagogical move, consistent emphasis on verification and perspective-recognition, and disciplinary applications that reinforce rather than contradict each other. Teachers don’t need to teach identical lessons. They need to facilitate interrogation experiences that build students’ capacity to recognize when AI outputs are useful and when they mislead.
What Teachers Actually Need
DSAIL incorporates WAC’s crucial lesson about teacher expertise. Successful implementation doesn’t ask teachers to become AI technical experts or abandon their disciplinary knowledge. It validates what they already do well.
Teachers need to:
Facilitate comparison activities between AI outputs and evidence
Ask follow-up questions exploring perspective and omission
Use instructional tools they already rely on (discussion, evidence evaluation, chart-making)
Connect AI literacy work to academic standards for specific disciplines
Learn AI concepts alongside their students by teaching the first lesson
Teachers don’t need to:
Master design thinking frameworks or coding
Understand neural networks or transformer architectures
Develop new curriculum from scratch
Complete extensive coursework on AI concepts
Transition to “facilitator” roles that abandon direct instruction
The key insight: You don’t need to understand how AI works technically to teach students that AI output is perspectival and must be verified against evidence. Scientists already check claims against data. Readers already evaluate sources for bias and completeness. Mathematicians already verify solutions. Historians already recognize that accounts reflect particular perspectives.
The math teacher facilitating students’ interrogation of AI-generated solutions is using mathematical reasoning they already possess. The history teacher asking what perspectives AI’s summary emphasizes or overlooks is practicing historical thinking they already teach. DSAIL positions AI literacy as an extension of disciplinary expertise rather than a replacement for it.
Implementation That Respects Institutional Realities
Districts can start small based on their actual resources and capacity:
Subject-Specific Pilot: Start with one discipline across K-12 grade bands. Build exemplar lessons. Train curriculum leaders as lesson developers. Use results to expand to other subjects.
Grade-Band Focus: Perfect one grade band across all subjects. Build vertical coherence once the model is established. Expand up and down from there.
Minimum Viable: Provide lesson templates and let interested teachers pilot. Build momentum through early adopters. Formalize once practice is established.
What districts must provide:
Lesson templates teachers can adapt rather than build from scratch
Planning time for grade and subject cohorts to customize lessons
Assessment tools that work across contexts
Policy frameworks clarifying approved tools and expectations
Just-in-time professional learning through lesson facilitation rather than separate workshops
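What might one of those adaptable lesson templates look like in machine-readable form? Here is a sketch, populated with the third-grade plant lesson described earlier; the schema and every field name are illustrative assumptions, not an official DSAIL format.

```python
# A hypothetical schema for a DSAIL lesson template, filled in with the
# third-grade plant observation lesson described earlier in this article.
# Field names are illustrative; districts would adapt them to their systems.

from dataclasses import dataclass, field

@dataclass
class DSAILLesson:
    grade_band: str
    discipline: str
    anchor_standard: str       # the existing state standard the lesson strengthens
    ai_artifact: str           # the AI-generated content students encounter
    evidence_source: str       # what students compare the AI output against
    interrogation_prompts: list[str] = field(default_factory=list)
    reflection: str = ""
    duration_minutes: int = 40

plant_lesson = DSAILLesson(
    grade_band="3-5",
    discipline="Science",
    anchor_standard="Science standard on observation (insert your state's code)",
    ai_artifact="AI-generated description of the classroom plant type",
    evidence_source="Students' own observation notes: leaf shape, stem, flowers",
    interrogation_prompts=[
        "What did AI get right?",
        "What did it miss?",
        "What can only be known through observation?",
    ],
    reflection="Computers guess based on patterns, but scientists check with evidence.",
)

print(plant_lesson.reflection)
```

A template like this keeps teachers’ adaptation work where it belongs: in the prompts and the evidence source, not in rebuilding lesson structure from scratch.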
This list reflects what WAC taught us about sustainable change. Programs fail when they rely on brief training sessions or mandate curriculum changes without teacher input. They succeed when they provide systematic infrastructure while respecting teacher expertise in implementation.
Beyond the Three Pathways
DSAIL doesn’t eliminate the three pathways I described in my first article. It coordinates them.
Core AI literacy concepts emerge through disciplinary practice rather than requiring standalone courses. Teachers embed AI interrogation in subject-area work where it strengthens existing standards. Conversational AI tools can be used appropriately when students have developed the critical thinking habits to recognize that AI outputs require verification.
The difference is systematic coordination. Students develop consistent interrogation habits across the contexts where they actually encounter AI. Teachers reinforce complementary messages about verification and perspective-recognition rather than sending contradictory signals. Districts provide coherent infrastructure rather than leaving individual teachers to navigate AI integration alone.
This is what systematic AI literacy looks like: not a single program or curriculum, but an institutional framework that ensures students develop durable critical thinking habits through repeated disciplinary practice. Students learn that AI output is perspectival. They discover when AI is useful and when it misleads. They develop verification habits that transfer across contexts.
What Success Looks Like
Students who experience coordinated DSAIL implementation don’t just know facts about AI. They’ve developed habits:
They habitually interrogate AI outputs before accepting them
They ask what perspective AI is taking, what’s emphasized, what’s overlooked, and whether outputs match available evidence
They use AI as a brainstorming tool while maintaining ownership of their thinking
They recognize when AI reflects dominant patterns versus complete pictures
They make ethical decisions about when and how to use AI
They understand that AI output is perspectival, not neutral
Most importantly, they can transfer these habits across contexts. The critical thinking they practice comparing AI explanations with experimental data in science class applies when they’re evaluating AI-generated historical summaries, verifying AI solutions in math, or recognizing bias in AI-generated social media content they encounter outside school.
Teachers who facilitate DSAIL lessons don’t feel burdened by “one more thing.” They recognize AI interrogation as strengthening the critical thinking they already teach. They’re confident facilitating comparison activities in their disciplines. They identify natural alignment points in existing curriculum. They’re part of a collaborative culture where AI literacy is normalized rather than feared.
Districts implementing DSAIL systematically demonstrate coherent, equitable approaches across schools. Students graduate with demonstrated AI literacy rather than fragmented experiences depending on which teachers they happened to have. Clear policies balance innovation with protection. The district positions itself as a proactive leader rather than reactive follower.
The Choice Districts Face
The alternative to systematic coordination isn’t preserving the status quo. Students are already using AI for homework, consuming AI-generated content across social media, and developing AI interaction habits without institutional guidance. The question isn’t whether districts will address AI literacy. It’s whether they’ll do so systematically or leave it to chance.
Banning tools students use anyway creates enforcement problems and inequity: students with resources access AI regardless while others fall behind. Ignoring AI integration until “better guidance emerges” means another year of contradictory messages and missed opportunities to develop critical thinking habits. Adopting whatever AI tools vendors promote without systematic literacy instruction positions students as consumers rather than critical evaluators.
DSAIL offers a third path: embedded, systematic, sustainable. It doesn’t require impossible conditions: new courses, wholesale teacher retraining, expensive infrastructure. It requires strategic thinking about where AI literacy already strengthens existing instruction, scaffolding to help teachers facilitate that work, and institutional commitment to coordination rather than fragmentation.
The framework exists. The lesson templates can be built. The assessment protocols work across contexts. The professional learning model respects teacher expertise. The implementation pathways accommodate different district capacities.
What’s needed now is institutional will: the recognition that systematic AI literacy isn’t optional, the commitment to coordination rather than collision, and the investment in infrastructure that enables teachers to do this work well.
The students in our classrooms right now are developing their AI interaction habits with or without our guidance. The question is whether we’ll provide systematic instruction that helps them develop critical thinking for an AI-integrated world, or whether we’ll leave them to figure it out alone while we debate policies and wait for perfect conditions that will never arrive.
DSAIL provides the framework. Implementation is a choice districts can make today.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business.
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet.
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersections of computer science, neuroscience, and philosophy.
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications.
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee.
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts.
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques.
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.