The Manifesto Moment: Examining Education's Response to AI
Are AI education manifestos transforming from declarations into something entirely new: living documents that map our uncertainties?
Curious about institutional responses to AI in education? I explore these themes further in my upcoming AI and You podcast conversation with Peter Scott (Episode 242, Feb 3).
We discuss the challenges of writing about AI while using it, and why education's "manifesto moment" reveals deeper anxieties about technological change. Listen at https://aiandyou.net/e/242-guest-nick-potkalitsky-ai-integration-expert #aiandyou
The Manifesto Impulse
Higher education has a curious response to AI anxiety: writing manifestos. Lots of them.
"Generative artificial intelligence (AI) has stormed higher education," declares Ella McPherson and Matei Candea's recent manifesto, "at a time when we are all still recovering from the tragedies and demands of living and working in a pandemic." The sense of institutional overwhelm is palpable.
These documents follow a telling pattern. They begin with bold declarations - "establishing foundational principles" and "ensuring ethical deployment" - but quickly dissolve into qualification and complexity. Aras Bozkurt and colleagues' Open Praxis manifesto admits it "may not lead to generalizable findings, provide an exhaustive understanding, or reach a fixed conclusion." So much for manifesto certainty.
As Trump's return signals AI acceleration and deregulation, these institutional pronouncements will most likely only multiply. Yet the manifesto form itself - traditionally a vehicle for radical clarity - seems to crumble upon contact with AI's complexities. What does this rush to declare positions tell us about our moment? And what gets lost when we pretend to certainty about technology that outpaces our ability to understand it?
The Projects and Their Patterns
The Open Praxis manifesto, led by Aras Bozkurt with forty-six co-authors, reveals this tension most clearly. It begins with traditional manifesto ambition, seeking to "critically examine the unfolding integration of Generative AI." But it quickly turns to metaphor, cataloging how we describe AI: "copilot," "sorcerer's apprentice," "demon," "bullshit generator," "colonizing loudspeaker," "stochastic parrot."
When direct description fails, we reach for comparison. More telling still is how the document acknowledges its own limitations. The authors note that "some of the concepts are intertwined and difficult to separate with sharp boundaries." They admit that "due to the nature of the methodology, positive and negative aspects may inherently contradict each other." This isn't failure - it's honesty about AI's rapidly evolving nature.
McPherson and Candea's manifesto proves especially revealing in its contradictions. While lamenting that GenAI arrived "without significant guidance," it struggles to provide that guidance. Instead, it offers something potentially more valuable: a framework for thinking about what we might lose. The authors worry about the "eureka moments" of scholarship - "the satisfaction of working out an argument through writing it out, the thrill of a sentence that describes the empirical world just so, the nerdy pride of wordplay." Even more striking is their admission that "ethical frameworks are racing to catch up with research practices on new terrains." They advise following "internet researchers: follow your instinct (if it feels wrong, it possibly is) and discuss, deliberate and debate." This retreat to gut feeling and collective discussion speaks volumes.
The Safe AI manifesto, authored by Marc Alier Forment, Francisco Garcia Peñalvo, and colleagues, takes perhaps the most practical approach, offering seven principles for AI deployment.
Yet its most notable feature is structural: it's designed as a living document, openly acknowledging that any guidance offered today might need revision tomorrow. "This manifesto will be updated," they write, "as the community and the technology mature." This admission - that today's certainties might not be tomorrow's - feels remarkably clear-eyed.
Margarida Romero and colleagues' "Human-Centered Education" manifesto attempts to split the difference between principle and practice. It introduces the concept of "hybrid intelligence" and proposes a six-level model for AI engagement in education, from passive consumption to "expansive learning."
Yet even here, complexity dominates. The authors acknowledge how AI simultaneously "broadens access to information" while "exacerbating digital divides," and might "streamline tasks" while generating "additional work through thorough fact-checking." Rather than resolve these tensions, they present them as inherent to our moment. They note that "interventions in one part of the AI ecosystem (e.g. need for learners' privacy) can have consequences in other parts (e.g. uses of facial recognition to identify the learners' engagements)."
The Value of Uncertainty
What emerges isn't a path forward but a map of our uncertainties. Each manifesto reveals the challenge of writing about technology that's actively reshaping how we write. Their value lies not in their declarations but in their documentation of this struggle. Perhaps that's exactly what we need - not confident pronouncements about AI's place in education, but honest wrestling with its complexities.
The manifesto impulse is understandable. As AI regulation retreats and institutional pressures mount, we want certainty. We want clear principles, firm guidelines, solid ground. But these documents suggest a different approach. Instead of racing to certainty, perhaps we should embrace the productive discomfort of this moment. The Safe AI manifesto's "living document" approach points the way: our frameworks must evolve as rapidly as the technology they address.
This might mean reimagining what a manifesto can be. Not a declaration of unchanging principles, but a document that grows with our understanding. Not a solution to uncertainty, but a framework for engaging with it. Most importantly, we need to acknowledge that our relationship with AI - in education and beyond - will be marked by constant evolution and necessary revision.
The manifesto moment reveals something crucial: our institutional responses to AI often say more about our anxieties than our understanding. In trying to write our way to clarity, we've documented our confusion. That's not a failure. It might be the most honest starting point for whatever comes next.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
Education is at a crazy inflection point right now. It's fascinating to watch through your writing.
You write in Fig 1.1 that the learner creates new content using AI tools. However, in practice, we observe the opposite: hundreds of companies offer AI-created personalized lessons, ignoring that such lessons impose the obsolete pedagogy of explicit learning and memorization.
When the learner creates new content, it is called Self-Personalized lessons, and they allow the learner to select not only the content but also the pedagogy—for example, implicit subconscious training of all language skills simultaneously.
This is a critical difference between the two types of personalized lessons, which is overlooked in the Manifesto.