Thank you to our vibrant community of nearly 10,000 educators who continue to engage in these essential conversations about AI and learning. Your insights, questions, and classroom experiences drive the depth of analysis we strive for each week. We're actively seeking guest writers who want to share their perspectives on AI in education—reach out if you have a story to tell or an argument to make.
If you find this content valuable and shareable, please consider becoming a paid subscriber to support the deep research and nuanced analysis these complex issues deserve.
Almost three years after ChatGPT's public debut, American schools find themselves caught in a familiar pattern: endless preparation cycles that defer the very learning they claim to enable. While districts invest in AI platforms like SchoolAI, Flint, and Brisk, most are still positioning this as a "teacher capacity building" year, keeping student AI spaces locked away until educators achieve some mythical state of readiness.
But consider this: a fourth-grader who first encountered ChatGPT 3.5 is now in seventh grade. How many AI experiences have they accumulated while waiting for their teachers to become "sufficiently literate"? How many interactions with entertainment-focused AI tools that optimize for engagement over critical thinking? How many encounters with efficiency amplifiers that promise to make homework faster rather than learning deeper?
The Readiness Trap
The dominant narrative treats AI literacy as a discrete competency that teachers must master before students can meaningfully engage with AI. This logic creates what we might call the "readiness trap," a perpetual deferral where students are always waiting for the next professional development cycle, the next policy framework, the next pilot program to conclude.
This sequencing assumes teachers can achieve some stable state of preparedness that then transfers cleanly to student instruction. But what does AI mastery actually look like when the technology shifts monthly? When new models emerge with different capabilities, limitations, and rhetorical strategies? When the very nature of text, argument, and authority is being transformed in real time?
The readiness trap reveals a deeper misunderstanding: AI literacy isn't a knowledge domain to be conquered but an ongoing interpretive practice. It's not about achieving comprehensive understanding of AI training processes or settling ethical paradoxes. It's about developing capacities for critical engagement with evolving textual and ideological systems.
Students as Co-Investigators
While teachers attend workshops on prompt engineering, students are already developing intuitions about AI's linguistic patterns, its modes of authority, its intersections with their own meaning-making practices. They're encountering AI-mediated textual environments with a fluidity that often exceeds their instructors'.
This isn't to suggest students are naturally AI literate. Many of their encounters lack critical frameworks for understanding how these systems perform authority or construct arguments. But it does suggest that meaningful AI literacy emerges through shared investigation rather than top-down competency transfer.
Students bring different intuitions about AI's textual performances; teachers bring different analytical frameworks. The generative space lies not in teacher mastery preceding student engagement, but in collaborative inquiry into how AI systems are reshaping discourse, knowledge production, and textual agency.
Pedagogical Principles Over Platform Mastery
Instead of deferring student engagement until teachers achieve impossible readiness standards, schools need to position classrooms as safe action spaces where teachers and students engage critically with AI within disciplinary environments. This requires organizing access points around pedagogical principles rather than platform features.
Critical engagement doesn't hinge on knowing exhaustive details about transformer architectures or mastering the latest prompting techniques. It emerges from a fundamentally human relationship: a desire to know more, an inquiry spirit fueled by disciplinary knowledge and intellectual curiosity.
What would it look like if teachers approached AI literacy as collaborative research? If they positioned themselves not as experts who must achieve mastery before teaching, but as co-investigators exploring how AI is transforming the very nature of text, argument, and authority?
The Cost of Waiting
Every month we defer meaningful student AI literacy initiatives, we cede more ground to platforms that optimize for engagement over critical thinking. Students continue accumulating AI experiences in spaces designed for entertainment and efficiency rather than learning and growth.
Meanwhile, the "don't use it to cheat" message rings increasingly hollow as AI becomes integrated into professional workflows across every knowledge domain. Students need frameworks for understanding when and how AI enhances thinking rather than replaces it. These frameworks can only emerge through guided practice and collaborative inquiry.
A Collaborative Path Forward
AI literacy as perpetual beta requires abandoning the fantasy of sufficient preparation and embracing shared investigation of evolving human-AI discursive entanglements. Students don't need teachers who have "figured out" AI. They need teachers willing to investigate alongside them.
This shift demands courage: the courage to move beyond comfort zones, to embrace uncertainty as pedagogically productive, to model intellectual humility in the face of transformative technologies. It requires seeing AI literacy not as another content area to master but as a lens for examining how knowledge itself is being reconfigured.
The question isn't whether teachers are ready enough. It's whether we're willing to make AI literacy a genuinely collaborative practice, one that honors both student intuitions and teacher expertise while preparing everyone for a future where human-AI collaboration is the norm rather than the exception.
The time for perpetual preparation is over. Our students and our schools can't afford to wait any longer.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative, eloquent, and incisive AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.



I don’t know. It is pretty scary, going into a classroom not entirely knowing what will happen. Perhaps this is the case for all classroom engagements: fundamental uncertainty, which our disciplinary knowledge helps us cover up to some extent. I personally think the notion of a discipline is going through a geological transformation and reformation right now. What will the university and K-12 look like in 10 years? Our future depends on showing up for this hard work. Otherwise we might inadvertently give the powers that be excuses to outmode or automate us.
Students don’t have the luxury of waiting for adults and teachers to be ready to use AI.
At the same time, I think entire curricula will need to be rebuilt and revamped to meet these needs.
In the meantime, basic AI literacy courses can help bridge that gap.