Meeting Teachers in the Trenches: Infrastructure-Informed AI Literacy
Moving beyond de-contextualized approaches to AI literacy
Join our nearly 9,000 subscribers by clicking below. Consider becoming a paid subscriber to support the work of Educating AI.
Let me start with a story from my work cycle this week—because it perfectly illustrates the gap between AI companies' educational rhetoric and the actual experience of educators trying to implement these tools.
At my new workplace, I'm currently setting up my AI stack, which I hope will include a good AI slide generator and another frontier model besides Gemini. My organization has tax-exempt status, so I need to secure accounts that don't charge tax. As these slide applications and frontier models increasingly position themselves as partners in educational and non-profit work, one would imagine that an easy tax-exemption process would be in place. This is very much not the case.
The slide generator company offers the easiest process: send a copy of the tax certificate for review and receive a response within 24 hours. I'm still waiting. The frontier model companies—Anthropic and OpenAI—offer less clear pathways. For both, you cannot reach anyone in person. Only chatbots. Anthropic's site says it does not offer tax exemption, while simultaneously stating that Teams account holders can appeal for exemption status. OpenAI claims you can apply through their Help Center chatbot, but when I tried, the chatbot repeatedly insisted it wasn't the right place to upload certificates—despite explicit instructions saying otherwise. I had to train their Help Center chatbot on their own website's directions before it would accept my documentation.
After several work hours navigating these labyrinthine processes, I still have no tax exemption for the tools I need. If I—someone whose job involves navigating AI systems—struggle with this, imagine the barriers facing a classroom teacher or district administrator trying to responsibly integrate these tools while managing budgets, compliance requirements, and pedagogical goals.
The Infrastructure Reality
As I begin my new role as an AI Specialist, I find myself thinking increasingly about infrastructure and how it shapes AI literacy initiatives. I've developed a growing unease about the AI literacy space: its approach to tool agnosticism has unwittingly fostered a decontextualized approach to instruction that may not serve teachers well in the coming school year.
By tool agnosticism, I don't mean the reasonable practice of avoiding vendor endorsements. Rather, I'm referring to the creation of AI literacy content that operates as if the specific tools teachers actually have access to—with their particular affordances, limitations, policy constraints, and infrastructural complexities—don't matter for how we approach instruction. This abstracted approach, while theoretically elegant, ignores the material realities that fundamentally shape how AI gets implemented in educational settings.
Here I'm talking specifically about how AI infrastructure shapes particular kinds of interactions with AI in school settings and, on another level, how it opens up and forecloses particular educational experiences and opportunities.
This has been a summer of major infrastructure announcements. OpenAI, Anthropic, and Google are actively courting college students, positioning their platforms as the preferred free tools for academic work. Each offers expansive access to their ecosystems. These developments—operating beyond the curated technological environments established by institutions—are reshaping higher education's instructional and pedagogical landscapes. Every professor now must assume students approach assignments with sophisticated AI readily available.
Meanwhile, Google will embed Gemini into Workspace by default. Unless administrators disable this feature, we'll see widespread AI distribution to K-12 students, creating additional shifts across instructional and pedagogical landscapes. I also learned Google is adding substantial upcharges—enterprise licenses increasing 30-40% on average. This will strain school budgets and complicate decisions about AI add-ons like SchoolAI or Magic School, especially if Google rapidly enhances Gemini for Education with comparable features.
Moving Beyond Decontextualized Approaches
Although many teachers still lack formal AI training, recent evidence shows that a growing number have experimented with AI personally and, increasingly, in their professional work. Continuing to offer the same tool-agnostic, decontextualized modules on "prompting best practices" and "AI as thought partner" risks further disconnect from teachers' lived experiences.
While decontextualized AI literacy retains value—pushing us toward deeper questions about AI's educational purpose, the tensions between conversational and algorithmic uses—instruction must increasingly account for infrastructural realities to meet teachers where they are: attempting to craft coherent responses to AI disruption within often unstable, loosely connected networks of technologically constrained tools.
We should, for instance, ground our approaches to AI ethics and safety in the actual tools teachers access, the specific policies governing information flows across school networks, the material constraints shaping their daily practice.
Addressing the Technocratic Critique
When I shared these ideas on LinkedIn, several respondents characterized this perspective as technocratic. I respect this critique. Infrastructure represents just one dimension of a complex conversation. By advocating for infrastructure's inclusion in AI literacy discussions, I aim to address a significant gap while preventing technology from becoming the primary determinant of educational AI use.
Some worry that once we begin thinking about pedagogical responses in dialogue with infrastructural realities, we've capitulated to corporations and their problematic tools. I held similar views during graduate school. But my current work with teachers has pushed me toward more networked, pragmatic, and less deterministic thinking about tools and pedagogy.
OpenAI's ChatGPT launch, largely unwelcome among educators, established a future in which AI inevitably becomes part of our pedagogical responses. Yet three years of teachers' and administrators' empirical, ground-level work, in all its radical multiplicity, demonstrates humans engaging these tools synergistically in countless ways. Infrastructure disparities and differences partially explain this diversity.
The Path Forward
This bureaucratic nightmare I described earlier exemplifies why infrastructure-informed AI literacy matters. It's not about surrendering to corporate interests or adopting technocratic approaches. It's about meeting teachers where they work, acknowledging the material constraints and affordances within which they operate, and developing literacy frameworks that account for the messy, networked, often frustrating realities of educational AI implementation.
The question isn't whether to engage with AI infrastructure—that decision has been made for us. The question is whether our literacy initiatives will help teachers navigate these realities thoughtfully and purposefully, or whether we'll continue offering decontextualized frameworks that sound compelling in theory but fragment in practice.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.

This is a very important point and I would push even deeper. In schools where some students and their families can afford paid or even enterprise models, they sit at a different place at the table from not only their peers, but their teachers as well. Many teachers are rightly nervous about their students using, or even using themselves, tools that have not been explicitly authorized by their schools. Directors of Educational Technology, a role that sounds similar to the position you are in, are now responsible for many of the downstream consequences of their decisions, whether made with or without teacher input, buy-in, or even knowledge of what's happening. Some schools just want a paid AI wrapper (School.AI, MagicSchool, Flint, etc...) in order to point everyone towards a school "approved" tool so as not to deal with budget requests for individual accounts or departmental ones. Add to that the point you make about Google and others upgrading existing platforms with powerful AI, and you have the nightmare scenario you're describing. It's a mess.
All of this resonates with me so much—and I think that resonance is a bit of an indictment of much of the other AI writing I have read this summer, as too often it feels divorced from the day-to-day realities and constraints of educators.
Thanks for naming this and please keep pushing for a more pragmatic conversation that meets teachers where they need to be met and, in doing so, empowers them to be a more substantive part of the conversation. (Which is needed!)