The AI Reality Check: What the 2025 PDK Poll Reveals About Parent Attitudes
How do we build AI solutions that respond to parent and student perceptions and preferences?
Want more analysis like this? Paid subscribers help sustain the research and writing that cuts through AI hype to examine what's actually happening in schools.
When the Longest-Running Education Poll Delivers Bad News
Since 1969, the PDK Poll of the Public's Attitudes Toward the Public Schools has been asking Americans what they really think about education. Through decades of reform movements, technology promises, and policy shifts, it's provided a steady baseline of public opinion.
This year's results on AI in education should make every ed-tech executive pause: Support is declining across every single AI use case measured.
The Numbers Tell a Clear Story (tallied in the quick sketch after this list):
AI lesson planning: 62% → 49% support (-13 points)
AI tutoring: 65% → 60% support (-5 points)
AI test prep: 64% → 54% support (-10 points)
AI homework help: 43% → 38% support (-5 points; 57% now oppose)
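For anyone who wants to check the arithmetic, here is a minimal sketch in Python that tallies the shifts from the figures above. The percentages are the poll's as reported; the script and the `support` dict are just an illustrative check, not part of PDK's methodology:

```python
# PDK Poll support for AI use cases, in percent (2024 vs. 2025),
# as reported above.
support = {
    "AI lesson planning": (62, 49),
    "AI tutoring": (65, 60),
    "AI test prep": (64, 54),
    "AI homework help": (43, 38),
}

for use_case, (prior, current) in support.items():
    change = current - prior
    print(f"{use_case}: {prior}% -> {current}% ({change:+d} points)")

# Every measured use case declines; the pattern, not any single
# number, is the story.
```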
This isn't a measurement error or a single bad year. The decline happened during 2024-2025, precisely when AI education tools became more sophisticated, more widely available, and more heavily marketed to schools. As the technology improved, public support moved in the opposite direction.
The Platform Paradox That Nobody's Talking About
Here's what makes these findings particularly puzzling: the major AI education platforms have specifically designed around privacy concerns.
SchoolAI and MagicSchool—the tools being adopted by thousands of schools—are FERPA-compliant and don't require student logins. They work entirely through teacher accounts, keeping student data out of the equation. These aren't the dystopian AI surveillance systems that privacy advocates warn about.
Yet even these carefully designed, teacher-mediated tools are losing public support. Meanwhile, platforms like Khanmigo that directly interact with students face the full weight of the 68% of parents who oppose AI accessing student data.
The disconnect suggests this isn't really about privacy mechanics—it's about something deeper.
The Hidden Pedagogical Problem
I think the declining support reveals an intuitive understanding that many educators are just beginning to articulate: AI tools aren't neutral time-savers. They're making implicit pedagogical decisions.
When SchoolAI generates a lesson plan, it's not just automating administrative work. It's making choices about learning objectives, pacing, assessment strategies, and student engagement. When MagicSchool creates an assignment, it's embedding assumptions about what constitutes meaningful work.
Parents might not be able to articulate this, but they sense that we're letting tools drive educational decisions rather than starting with clear pedagogical purposes.
What We Actually Need: Intentional AI Literacy
Instead of asking "Do parents want AI in schools?" we should be asking: "How do we thoughtfully sequence different types of AI interactions to serve our educational goals?"
The key insight is that not all AI interactions are the same. Students need to learn to distinguish among several AI relationship types:
AI as Research Assistant: Teaching source evaluation and critical thinking—but with heavy scaffolding in elementary years
AI as Writing Mentor: Developing metacognitive awareness of writing processes—introduced in middle school when students can handle process feedback
AI as Practice Partner: Building skills through low-stakes repetition—useful across all ages but with age-appropriate complexity
AI as Thinking Partner: Advanced intellectual dialogue and assumption-challenging—requires cognitive maturity, primarily high school and beyond
AI as Creation Collaborator: Human-AI collaborative projects—the most sophisticated relationship, requiring students to maintain agency while leveraging AI capabilities
The critical work is teaching students to recognize which AI space they're entering and why. Using AI as a writing mentor requires completely different skills and awareness than using it as a research assistant.
The Developmental Sequence Nobody's Teaching
Right now, most schools are adopting AI tools based on what's available and what teachers request. But we need intentional developmental progressions (one possible encoding is sketched after this list):
Elementary: Heavily scaffolded AI as research assistant—focus on understanding AI limitations and the need for human judgment
Middle School: Add AI practice partner and writing mentor—focus on using AI for skill building while maintaining ownership of learning
High School: Add AI thinking partner and creation collaborator—focus on strategic AI use for intellectual growth while preserving authentic student voice
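For readers building curriculum maps or tool-adoption policies, here is one hypothetical way to encode that progression as data. The grade bands, interaction types, and focus notes come straight from the framework above; the structure and names (`GRADE_BAND_PLAN`, `is_role_appropriate`) are my own illustrative assumptions, not an existing standard or any platform's API:

```python
# A hypothetical encoding of the developmental AI-literacy progression
# described above. The bands, roles, and focus notes mirror this piece;
# the data structure itself is illustrative, not a published standard.
GRADE_BAND_PLAN = {
    "elementary": {
        "allowed_roles": ["research_assistant"],
        "scaffolding": "heavy",
        "focus": "AI limitations and the need for human judgment",
    },
    "middle_school": {
        "allowed_roles": ["research_assistant", "practice_partner",
                          "writing_mentor"],
        "scaffolding": "moderate",
        "focus": "skill building while maintaining ownership of learning",
    },
    "high_school": {
        "allowed_roles": ["research_assistant", "practice_partner",
                          "writing_mentor", "thinking_partner",
                          "creation_collaborator"],
        "scaffolding": "light",
        "focus": "strategic AI use that preserves authentic student voice",
    },
}

def is_role_appropriate(band: str, role: str) -> bool:
    """Check whether an AI interaction type fits a grade band's plan."""
    return role in GRADE_BAND_PLAN.get(band, {}).get("allowed_roles", [])

# Example: a thinking-partner tool would be flagged for middle schoolers.
print(is_role_appropriate("middle_school", "thinking_partner"))  # False
```

Note the design choice: each band adds interaction types rather than swapping them out, mirroring the cumulative, scaffolded sequence described above.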
The Bottom Line
The PDK Poll data suggests that families intuitively understand something that the ed-tech industry is missing: they want a coherent educational vision that thoughtfully incorporates AI, not ad hoc tool adoption driven by what's technically possible.
The declining support isn't a rejection of technology—it's a call for intentionality.
The questions we should be asking:
Which AI interaction space serves this specific learning objective?
How do we ensure AI enhances rather than replaces the cognitive work we want students to do?
What does developmentally appropriate AI use actually look like in practice?
How do we teach students to be empowered AI users rather than passive consumers?
The future of AI in education won't be determined by venture capital funding or technical breakthroughs. It'll be determined by whether we can align these powerful tools with our deepest pedagogical purposes and our understanding of how children actually learn and grow.
The PDK Poll has given us a wake-up call. The question is whether we're ready to listen.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.



This analysis perfectly captures a fundamental tension I've been thinking about: the gap between EdTech promise and measurable learning impact. While the PDK Poll data showing declining AI support is concerning, it mirrors a broader pattern where educational technology adoption often outpaces evidence of effectiveness.
Your point about AI tools making "implicit pedagogical decisions" resonates deeply with this piece I read recently (https://1000software.substack.com/p/technology-wont-save-schools), which argues that we consistently overestimate technology's transformative power in education. The author notes how we keep expecting different outcomes from similar patterns of tech adoption without fundamentally changing how we measure learning.
What strikes me about your developmental AI literacy framework is that it addresses the "intentionality" issue you mention. But here's my challenge: How do we move beyond adoption metrics ("X schools use AI tools") to actual evidence of learning improvement? Not just engagement or time-on-task, but genuine cognitive gains?
I'd love to see more discussion about designing AI interventions with built-in learning outcome measurement from day one. Too often we implement first, then scramble to prove impact later. What would it look like to start with the learning science and work backward to the AI application?
Real debate needed: Are we repeating the same mistakes of previous EdTech waves, just with more sophisticated tools?
"The declining support isn't a rejection of technology—it's a call for intentionality."
This line got me thinking about the recent MIT study showing 95% of enterprise AI pilots are failing. Both education and business are discovering the same hard truth: jumping into AI adoption without strategic clarity, defined success metrics, or proper stakeholder education is a recipe for failure. Just as enterprises are learning that throwing AI at problems without understanding capabilities and implementation requirements leads to zero ROI, schools are facing declining public support because they're deploying tools without clear pedagogical objectives or training for educators, parents, and students. The 5% of successful implementations, whether in boardrooms or classrooms, aren't the ones with the fanciest technology; they're the ones that started with intentional strategy, comprehensive education, and clear metrics for meaningful impact. When you skip that foundational work, you're essentially asking for the restrictive, fearful response we're seeing across sectors.