6 Comments
Iain M Coggins

I really appreciate your addressing the issue of mistrust of students. This is not an aspect of AI integration that I run into very often. How can we base the education of children and youth on our distrust of them? Further, your focus on shutting out students with learning differences cogently ties up this piece. AI presents the opportunity for genuine multimodal learning. Why would we instead opt for monomodality?

Mrs Dan

Absolutely spot-on analysis. I teach my neurodivergent students to use AI to help them navigate and survive in this world.

They use it to make a resume, to extract meaning from extra-long texts they read online, and even to create a personalized work schedule that follows their chronotype.

I am very grateful to see that this tool is helping them gain more autonomy in a fast-changing world that can be "cruel" to kids with learning disabilities.

Thank you so much for saying out loud what I am thinking right now. It's important to help educators see how valuable AI can be when used in a mindful and relevant way.

Linda Harasim

Your emphasis on the medium (paper vs. AI) doesn't get into the implications, the "so what," of either. The impact of cursive vs. typing is being addressed by some researchers, with interesting results. For me, what most needs to be addressed is how humans learn to think and how we learn. Pedagogies that build on these process skills are key to educating the next generation, but they are little considered. These are the deeper issues we need to explore and facilitate. Thx for your thoughts.

derdide

Thanks for these insights. I agree with you that "removing tech" is not an answer. Where I diverge from (rather than disagree with) your thoughts is that I am more and more convinced that the future lies in a form of deeply personalized AI. I haven't deeply thought about its implications for education and kids (or at least teens), hence the opening sentence.

Terry Underwood, PhD

Interesting. I see Nick's point a bit differently. This post seems to be speaking to a bigger problem: it's almost impossible to deeply personalize AI across millions of learners, for one thing. For another, deeply personalized machines without very strong assurances of privacy could be really dangerous. derdide, you make a good point in admitting you haven't thought deeply about the issue. Welcome to the conversation. Nick's Substack is a great place to be.

derdide

It is impossible now (I totally agree). But assuming it will remain impossible seems a stretch. And the privacy implications of it happening are massive (you're totally right to point that out), but it is already a work in progress; just look at the "memory" feature of ChatGPT, or the increasing integration of Gemini in Copilot, to name a few. And once we're there, it will be too late to implement "privacy." I put that in quotation marks because it goes way beyond privacy: it touches on thought processes and persuasion, orders of magnitude beyond what we see today.