I hear you. What I might add is a bit of my personal pedagogical stance: the working nature of any tool, transformation, strategy, or [insert any noun here] lies somewhere between two extremes. What I mean is that an integration of generative AI that works for one institutional context may not work well for another. An example: Anna Mills, a teacher and scholar whose AI work I read and study, was an early advocate for generative AI integration in writing courses. Through systematic action research, she found that her students at her institution needed more formalized support with AI literacy, so she pivoted toward using AI detection tools in an effort to help her students gain that literacy. My students, matriculating at a large (49K) public R2 institution, don't always need that same degree or specificity of structured support. What works for them, and for adult learners outside of traditional academia, might be something like the prompt-first model. Or it could be something else. The adaptability and mobility of generative AI seem to support many options for integration. Does that make sense? Thanks for chatting!
I appreciate this response. I too follow Anna Mills closely.
This is a great contribution to the conversation in AI and learning. My minor contribution aligns: https://substack.com/@drmountain/note/c-89624068?r=1n866u&utm_medium=ios&utm_source=notes-share-action
The AI debates strike me as fundamentally going about this the wrong way. A century ago, if people asked "will the automobile revolutionize transportation," the answer would have been a strong affirmative. But to conclude that "walking and running are now rendered superfluous" or "cars are already in your gym class" would have been nonsensical. Much the same can be said about AI in education. If we want people to learn how to read and write and think, the way we do that in 2025 isn't that different, not only from how we did it in 2020, but even from 1920 or the year 20.* Not much has changed in the substance of literacy pedagogy,** and whether we use wax tablets, blackboards, or digital boards is really a negligible question. How we learn algebra or geometry has likewise changed very little in the past few centuries (things like graphing calculators, or whatever is used nowadays, are at best a "nice to have," while even most university-level math remains something you are perfectly capable of teaching with nothing more than a chalkboard). In the same way, AI will make certain tasks more convenient, will have some unintended consequences, and will leave some people out of a job (while hopefully providing some new jobs as well). There are many interesting issues to discuss, but its effects on education are probably the least of them.
* How much literacy matters for society and one's economic prospects *has* changed dramatically over the past centuries and millennia, by contrast! In the same way, it's possible AI will make certain skills more or less valuable than they used to be, but it does *not* follow that it would change how we teach those skills, to the extent we still care about teaching them!
** To the extent that genuine pedagogical innovations were introduced, they had little to do with technology, and their contribution was, at best, debatable...
Excellent perspective. Strategy and augmenting thinking are ways forward. What is the point of effort when it is not strategised?
Thanks, Joseph. Dr. Law has done a really nice job here. The feedback on this method is very promising.
Great article! What stuck with me is "AI won't reduce cognitive thinking but will redirect it." When we moved from typewriters to computer keyboards to now voice commands, we didn't lose dexterity!
Second, prompting techniques are key! Using our cognitive abilities to create a good prompt, one that is thoughtful, loaded with information, and follows frameworks such as CARE, BAB, etc., is so important! That's why I use gudprompt.com
P.S.
The whole debate is also eerily similar to the silly chatter a couple of decades ago (still heard sometimes to this day) about how the internet would revolutionize pedagogy because memorization is rendered superfluous in the age of Google. Something that may sound reasonable, provided you don't bother thinking about it for more than a few seconds or know/remember anything about the pre-internet days. If you do, you'd realize that before Google we still had encyclopedias, which, while not quite as convenient, ubiquitous, or comprehensive, would still provide a sufficiently accessible and adequate reference for most topics and purposes. The point is that, contrary to the implicit or explicit premise of the debate, we never forced our kids to memorize information because it wasn't easily available. We forced them to do so because memorization is a crucial component of learning, and you can't actually have "skills" without "knowledge." In other words, the whole debate rested on one giant strawman regarding why education is the way it is. We see much the same now with AI, alongside the misguided hype and equally hyperbolic optimism and pessimism about the effects of the new technology. I do wish we could get a more sober-minded response, but maybe that wouldn't generate enough clicks and subscriptions?
Sounds like you are working on your own treatise here. Sounds like you are also working through a lot of frustrations here. Change is definitely hard. I don't think Dr. Law is asking for wholesale erasure or redefinition of existing pedagogy. (And I know I am not doing so either.) Her project focuses on the very real situation that students are using these tools to assist with their writing process, and on how we can steer prompting cycles toward long-standing literacy outcomes and objectives. If that isn't a sober-minded approach, I don't know what is.
Absolutely, I’m venting frustrations here. My treatises appear elsewhere and under my real name ;) I am, however, trying to make a serious point. It’s worth stopping to think about how to find the middle course between futile Luddism and an overindulgence that harms students. Students in every age of history have naturally sought the easiest path. Learning, however, is hard, and part of what school is about is compelling students to make an effort (though spelling this out isn’t popular nowadays). In the case at hand, this means having students read and write themselves before they take shortcuts. This is extremely easy to achieve: just hold tech-free classes. I’m doing that with my college students every day. This isn’t because the tech is bad, but because we need them to know how to read, write, and think for themselves, so that in the future they take shortcuts by choice, knowing what they’re doing, rather than relying on someone or something else to do the thinking for them.
TL;DR: I don’t buy the idea that you can teach students how to use AI effectively for “critical thinking” without teaching them to think independently first. Ditto for reading, writing, etc.
Very interesting post. Thank you for writing this!
I like the RPM method, but AI tools are made to be as frictionless as possible, so I am less inclined to believe that people will stick with RPM when they are bottlenecked by time, energy, and attention.
Regarding the following, I am curious why you say these new higher-order tasks are not inherently bad. As a knowledge worker, I find these new tasks less meaningful and more alienating from the work that I enjoy and that puts me into flow states.
"How AI Shifts Cognitive Effort:
🧠 From gathering information → To verifying information
🧠 From problem-solving → To integrating AI responses effectively
🧠 From task execution → To overseeing and refining AI-assisted outputs
None of these shifts are inherently bad. They just require a different approach to thinking—one that many traditional models of education haven’t caught up with yet."
This is superb.