This analysis perfectly captures a fundamental tension I've been thinking about: the gap between EdTech promise and measurable learning impact. While the PDK Poll data showing declining AI support is concerning, it mirrors a broader pattern where educational technology adoption often outpaces evidence of effectiveness.
Your point about AI tools making "implicit pedagogical decisions" resonates deeply with a piece I read recently (https://1000software.substack.com/p/technology-wont-save-schools), which argues that we consistently overestimate technology's transformative power in education. The author notes that we keep expecting different outcomes from similar patterns of tech adoption without fundamentally changing how we measure learning.
What strikes me about your developmental AI literacy framework is that it addresses the "intentionality" issue you mention. But here's my challenge: How do we move beyond adoption metrics ("X schools use AI tools") to actual evidence of learning improvement? Not just engagement or time-on-task, but genuine cognitive gains?
I'd love to see more discussion about designing AI interventions with built-in learning outcome measurement from day one. Too often we implement first, then scramble to prove impact later. What would it look like to start with the learning science and work backward to the AI application?
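One hedged sketch of what "built-in measurement from day one" could look like: instrument the intervention with pre/post assessments and report Hake's normalized gain, a standard learning-science statistic, instead of adoption or engagement counts. Everything below (function names, scores) is hypothetical, not from the post:

```python
# Minimal sketch (hypothetical names and data): reporting Hake's
# normalized gain so learning measurement is built into an
# intervention from the start, rather than bolted on afterward.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    if max_score == pre:  # already at ceiling; no headroom to measure
        return 0.0
    return (post - pre) / (max_score - pre)

def cohort_gain(scores: list[tuple[float, float]]) -> float:
    """Mean normalized gain over (pre, post) score pairs for a cohort."""
    gains = [normalized_gain(pre, post) for pre, post in scores]
    return sum(gains) / len(gains)

# Hypothetical pre/post scores for three students
cohort = [(40, 70), (60, 80), (20, 50)]
print(round(cohort_gain(cohort), 3))  # prints 0.458
```

Normalized gain controls for where students started, so a cohort beginning at 60/100 isn't penalized relative to one beginning at 20/100 — which is part of why it's preferred over raw score differences.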
Real debate needed: Are we repeating the same mistakes of previous EdTech waves, just with more sophisticated tools?
"The declining support isn't a rejection of technology—it's a call for intentionality."
This line got me thinking about the recent MIT study finding that 95% of enterprise AI pilots are failing. Both education and business are discovering the same hard truth: jumping into AI adoption without strategic clarity, defined success metrics, or proper stakeholder education is a recipe for failure. Just as enterprises are learning that throwing AI at problems without understanding capabilities and implementation requirements leads to zero ROI, schools are facing declining public support because they're deploying tools without clear pedagogical objectives or training for educators, parents, and students. The 5% of successful implementations, whether in boardrooms or classrooms, aren't the ones with the fanciest technology; they're the ones that started with intentional strategy, comprehensive education, and clear metrics for meaningful impact. When you skip that foundational work, you're essentially asking for the restrictive, fearful response we're seeing across sectors.
"Instead of asking 'Do parents want AI in schools?' we should be asking: 'How do we thoughtfully sequence different types of AI interactions to serve our educational goals?'" -- That was a strange turn in a post that purported to be about parents' attitudes toward AI. The practical guidance is thoughtful enough, though I'm curious what you think we should be doing in higher ed, given your proposed developmental sequence.
This!!! “Just as enterprises are learning that throwing AI at problems without understanding capabilities and implementation requirements leads to zero ROI, schools are facing declining public support because they're deploying tools without clear pedagogical objectives or training for educators, parents, and students.”
I really resonate with the claim that students need to understand the different relationships they have with AI in different contexts. Really admire your work, Nick!
I’ve been thinking about process-oriented learning a lot recently, after reading your piece arguing that assessments should focus on process, not just the product.
I’m a software engineer who builds AI products, and I have a background in learning design. I would like to start a series of posts where I build concrete examples of redesigned assessments, learning experiences, and explorables for the age of AI.
If you have examples you would like built, please let me know; I will build them and open-source the code as a repository of intentional learning experiences co-designed with real educators and grounded in learning science.
I think the Developmental Sequence is a vital piece and a valuable takeaway. There may be another reason AI support is declining: perhaps parents and students already sense that much of what students are doing has very little connection to their lives outside the classroom. Augmenting that with AI simply exacerbates the frustration by adding another layer of expectations.

AI can help students go deeper in their thinking, and one of the most distinctive things it introduces is the ability to connect learning objectives to the learner, not just the lesson. I am looking forward to posts (besides some of mine) about how AI can eliminate holdovers from an industrial-age mentality of school -- semester- or year-long curricula that all kids are expected to absorb, whole-group instruction, differentiation of teachers' lessons, and so on. AI gives educators the tool we've been missing to take a totally different approach: helping each individual learner set appropriate learning goals and meet them, in ways that are meaningful and relevant to them individually, and on a time schedule that suits them, not the calendar. "The future's ours . . . if we can free it!"
And more importantly, we need ways to assess that using AI for these purposes is, in fact, supporting student learning, if not enhancing it.
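One conventional way to assess that, sketched here with made-up numbers: compare pre-to-post score gains between an AI-supported cohort and a comparison cohort using a standardized effect size (Cohen's d). The data and function names below are illustrative only, not from the post:

```python
# Illustrative sketch: is an AI-supported group learning more than a
# comparison group? Cohen's d on gain scores gives a standardized
# answer. All data here is hypothetical.
import statistics

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Standardized mean difference between two groups' gain scores."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

ai_supported = [12.0, 15.0, 9.0, 14.0, 11.0]  # hypothetical pre-to-post gains
comparison = [8.0, 10.0, 7.0, 9.0, 6.0]
print(round(cohens_d(ai_supported, comparison), 2))  # prints 2.07
```

An effect size like this (with a real comparison group and far larger samples) is the kind of evidence that would answer "is AI supporting learning?" rather than "are schools using AI?"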