What Writing Across the Curriculum Teaches Us About Student AI Literacy
Where is the institutional will to supplement increasing AI tool access with thoughtful, systemic approaches to student AI literacy?
Thank you for the thoughtful feedback on my recent posts about student AI literacy. Your responses have been invaluable as I work through these ideas. This is the third post in a four-part series, and my thinking continues to evolve significantly between posts as new perspectives emerge.
For those following along, I have a promise: by the end of this series, I’ll provide concrete, grade-banded K-12 student AI literacy standards that go beyond typical algorithmic thinking approaches. These standards will focus on source interrogation and evaluation skills, designed as a foundation for disciplinary-specific AI literacy applications.
The coordination challenge facing AI literacy isn’t unprecedented. Fifty years ago, educators confronted a remarkably similar problem: students were learning about writing in English classes but struggling to apply those skills across other subjects. Writing was treated as a discrete skill rather than a tool for learning and thinking across disciplines. The Writing Across the Curriculum (WAC) movement that emerged offers both a roadmap and a warning for AI literacy implementation.
The parallels to today’s AI literacy challenge are striking. Just as students once learned “writing” in English class but couldn’t transfer those skills to lab reports in chemistry or argument construction in history, today’s students are receiving fragmented AI literacy instruction that doesn’t connect across contexts. They learn to “be critical of AI” in library orientation but use AI tutoring systems without question in math class. They’re told AI assistance is plagiarism in English but required to use AI brainstorming tools in science.
This disciplinary integration approach isn’t just theoretical. The recent handbook Teaching AI Literacy Across the Curriculum by Lyublinskaya and Du explicitly rejects confining AI literacy to computer science courses, instead embedding core AI concepts into mathematics, science, language arts, and social studies through discipline-specific applications. Rather than teaching generic “AI skills,” they show teachers how to integrate AI-generated data into math problems, use AI ethics debates in genetic engineering discussions, and employ AI-powered chatbots for immigration storytelling.
WAC’s evolution offers both promise and peril for AI literacy implementers. When it worked, WAC transformed entire institutions, creating coherent skill development across disciplines while respecting subject-area expertise. When it failed, it left behind expensive pilot programs, frustrated teachers, and students more confused than before. The difference wasn’t in the theory but in the implementation approach.
WAC succeeded when it created systematic coordination across disciplines without sacrificing subject-area expertise. It failed when it relied on isolated initiatives, temporary funding, or superficial professional development. For AI literacy, these lessons aren’t just historically interesting. They’re predictive. Districts making the same institutional mistakes WAC programs made in the 1980s are likely to get the same disappointing results.
The most successful WAC programs discovered something crucial about educational change: sustainable reform happens when it validates and builds on existing teacher expertise rather than asking teachers to become something they’re not. Instead of expecting every teacher to become a writing expert, effective WAC programs helped teachers recognize how writing already functioned in their disciplines and make it more intentional.
WAC’s breakthrough came from recognizing that effective professional development must begin with teacher expertise, not teacher deficits. The workshop model that transformed institutions didn’t train teachers to become writing specialists. Instead, it invited teachers to use their existing disciplinary knowledge to build student-facing experiences. Math teachers weren’t told how to teach writing. They were given time and facilitation to discover how mathematical communication already functioned in their classrooms and how to make it more intentional.
This co-creation principle proved essential because it transformed teachers from passive recipients of professional development into active designers of curriculum. When teachers built their own discipline-specific approaches, they understood not just the procedures but the principles behind each strategy. They could adapt techniques to their specific students, modify approaches based on what worked, and troubleshoot problems because they grasped the underlying reasoning. Most importantly, they became advocates for the approach rather than resisters of imposed change.
For AI literacy, this means inviting teachers to use their disciplinary expertise to design student-facing experiences rather than training them to implement predetermined curricula. History teachers are perfectly positioned to create activities where students analyze AI-generated historical summaries for bias, accuracy, and missing perspectives, but only when they’re given time to explore how AI intersects with historical thinking. Math teachers can design powerful experiences where students verify and critique AI-generated solutions, but they need space to experiment with how AI reasoning differs from mathematical reasoning.
This approach validates rather than threatens teacher expertise. Instead of suggesting that teachers need to become AI specialists, it positions them as disciplinary experts who are uniquely qualified to help students navigate AI in their specific contexts. The English teacher’s deep understanding of voice and authorship becomes essential for helping students recognize when AI writing lacks authentic perspective. The science teacher’s commitment to evidence-based reasoning becomes crucial for teaching students to verify AI claims against experimental data.
WAC programs failed catastrophically when they ignored this principle. Programs that relied on brief workshops, mandated curriculum changes, or top-down implementation consistently collapsed within a few years. The fatal flaw was treating implementation as a training problem rather than an institutional design challenge. Teachers who were told what to do without understanding why abandoned the practices quickly or implemented them so poorly that they undermined student learning.
The failure pattern was predictable: initial enthusiasm from early adopters, resistance from the majority of faculty, inconsistent implementation across classrooms, and eventual abandonment when administrative attention moved elsewhere. Programs survived only when they had permanent institutional infrastructure, sustained investment in faculty development, and genuine respect for teacher expertise in the design process.
AI literacy faces identical institutional challenges but at compressed speed. Like WAC, it requires coordination across disciplines and sustained investment rather than pilot project funding. Unlike WAC, it doesn’t have decades to evolve gradually. The commercial and policy pressures surrounding AI implementation are forcing districts to make systematic decisions immediately, often without the benefit of long-term institutional design thinking.
The stakes extend beyond individual student confusion. Districts are currently making infrastructure investments, policy decisions, and professional development choices that will shape AI literacy for years. Without systematic coordination that builds on proven approaches to educational change, we risk entrenching the same contradictions and territorial battles that undermined many WAC programs.
The solution requires frameworks that can orchestrate the coordination WAC achieved while addressing AI literacy’s unique challenges. This means moving beyond WAC’s gradual evolution toward approaches designed specifically for AI’s compressed timeline, commercial pressures, and technical complexity. The goal isn’t to replicate WAC but to learn from both its successes and failures in designing systematic approaches that respect disciplinary knowledge while ensuring coherent student skill development.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most insightful, creative, and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.