12 Comments
Guy Wilson:

I've been thinking about this post a lot the last couple of days. During part of that time, I was working on a presentation on AI and also trying to write out some of the confused muddle of feelings I have on the subject. (The latter were triggered by, but only partly about, David Runciman's 2023 book, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, which I highly recommend.) I don't know that I disagree with you fundamentally, but there are some ways that we are approaching AI in education that bother me these days. None of this is meant as an attack. I really enjoyed the essay.

One is that we are over-emphasizing writing. I realize that being a writer is at the core of the identity of most people who are writing about this, so they are going to focus on that. It is a feedback loop, and I worry that it is trapping us in too narrow a consideration of AI. Of course we are talking about Large Language Models, but they do much more than just write. I know the programming people also write a lot about AI in education, but I don't see nearly as much about disciplines other than programming, data science, and writing in higher education. Maybe I am not looking hard enough, but I am concerned.

A second thing is around a point that many are making, but that you put very well: "Students will be required to maintain a detailed log of research insights and experiences, providing instructors with valuable information about the choices they made regarding the technology they collaborated with throughout the journey towards a particular outcome." There is a part of me that agrees with this, but a part that is perplexed and a little cynical. A good chunk of my job for the past several years has been to administer and support our university system's Turnitin instance and our online proctoring software. For years I have seen and heard the opposition of parts of the higher education community to both kinds of products. One of the criticisms is that they constitute some kind of unwarranted form of surveillance of students (especially the proctoring software). I would actually love to get rid of them. That is not happening anytime soon.

What strikes me about the proposal that students log everything they do is that we are asking them to surveil and report on themselves, internalizing the need for surveillance ever more deeply. I know this is not what is intended, but it seems an inevitable byproduct. Of course, those who stray outside the lines will still log things the way they think they need to be reported, so we will not really be getting out of the cheating mentality. Between this and the fact that reviewing all of this material is going to take a lot of time while overworked adjuncts multiply, we will almost inevitably see software solutions where students are required to log all they do, but which also log their actions on the computer, then analyze that and report back. We are already starting to see a little of that. Under the current economic conditions of universities, I think this is likely to spread and create a whole new surveillance regime.

I have a couple of broader concerns about AI that I think affect education in ways that we are not anticipating.

One is that AIs currently, and for the foreseeable future, need us to behave in consistent ways. They need humans who are statistically predictable. As AI is applied to education, will we teach students to be too statistically predictable? That isn't very well put; I'm still struggling with the concept, but it concerns me.

The other is that we take the inevitability of AIs for granted. It may be that their rise is inevitable, but I've been spending some time looking at different factors that might cause their spread to slow, to stagnate, or to face widespread rejection and hostility from society. When it comes to history, everything is contingent, but most of us do not remember that. At the very least we need to start considering what might happen under various contingencies. Suppose there are mass student protests against AI as a surveillance technology, as a technology that needs to be decolonized, over environmental and climate issues, or because of its negative impact on jobs and careers. Suppose the 2024 election is so contaminated by deepfakes that there is a mass revulsion against AI. Suppose that AI becomes so weaponized by the various world powers that it must be tightly regulated and surveilled. Those are just a few. What would those scenarios do to education? What would they do to politics? To other aspects of society?

We are restructuring education to benefit AI. What happens to education if any of those scenarios (or many others) come to pass? What would it look like if we restructured AI to benefit education?

Nick Potkalitsky:

As always, I greatly enjoy and value your thoughtful comments and insights on this topic.

Please consider my piece more as descriptive futurology than as advocacy for this particular future. I believe the merits and implications are worth debating further in the comments, especially as we see these AI practices already being realized in many classrooms. As you've likely observed, one common approach these days is requiring students to keep a log of their AI use. Practitioners like Alan Knowles, whom I interviewed previously, view this methodology as fundamentally misguided. Personally, I think usage logs could potentially have a place in grades 7-12, but by college, such logs would need a very strong, well-structured justification and rationale to be worthwhile.

The more I engage with AI in its current state, the less convinced I am of its genuine usefulness for meaningful learning tasks in the short term. While AI excels at automating certain low-level writing tasks, it also implicitly promotes a narrow conception of what writing is (the AI ideal), which can be difficult to de-program. Hopefully, beneath my optimistic veneer, you've sensed a healthy dose of skepticism. The driving force behind my project is the reality that AI is now accessible to students, so the question becomes, "What do we do about it?"

As such, I am currently focusing on identifying limitations and formulating a concrete action plan for the upcoming school year. How can I continue to cultivate critical thinking, creativity, and strong compositional skills in my students when so much is being automated? How can I help my students recognize and appreciate the amazing beauty and potential of their own minds?

To me, AI serves as a mirror that illuminates which aspects of the educational system need amplifying and which need altering. In that sense, I find it useful. I am hopeful that other beneficial uses will emerge over time, but in the meanwhile, I continue to teach my students to write in much the same way I have for a long time. I trust my students to follow the expectations of my classroom, and when I hear other voices in their writing, I have conversations with individual students. I always frame such experiences as learning opportunities, never crushing my students' spirits or futures over a single misstep. At the same time, I show my students how to use AI for brainstorming, developing essential questions, selecting sources, and engaging in very structured editing. I generate sample essays using AI to demonstrate its limitations in voice, explanatory power, logic, and referencing. I believe I am doing a good job staying ahead of the curve right now.

That said, I know I am well ahead of most of my colleagues. Those who lack AI literacy are getting left behind, turning to hyper-restrictive and fear-driven approaches, and making students' lives much more difficult. Ultimately, my mission is focused on the students: how can I help other teachers so that I can help students have a less restrictive space in which to grow, learn, and explore? Thank you for giving me the chance to reflect on these important issues. As always, you are a great thought partner.

Guy Wilson:

Nick, I did not take it that you were advocating those positions, and I apologize if I gave that impression. Like you, I am playing with possible futures right now, though I am not as far along in my thinking. I am a messy thinker, and that probably came through in my comment. I don't see any clear path forward, but rather a forest with many possibilities. None of this reflects the opinions of my university or my co-workers.

I am honestly less worried about AI than the companies that produce it and the companies that fold it into educational technologies. I am not saying the companies are evil. In fact there are some educational technology companies, and many of their employees, that I respect a great deal. I am a lot less certain about the big tech companies though. For both, there are structural issues at play that are part of being a corporation, of being subject to markets, and in their interactions with governments and AI. Their behavior and objectives are at cross purposes with the institutions devoted to public education, which are subject to their own structural issues and interact with government and AI differently from for-profit corporations.

Maybe what I am looking for is an approach to the future of AI and education that is broader and takes into consideration many of the structural challenges that arise from these institutional interactions, as well as challenges across disciplines. I also want it to take into account the larger picture around AI in contemporary civilization. (That is why I like Runciman's book, mentioned in my original comment, so much.) We also need to think in terms of many different contingencies. I realize that what I am asking is a tall order. Doing it even in the context of one country's politics and economy may be beyond us, let alone for the world. That is a perspective I am trying to work towards, not very successfully, so, as I often say, much of what I write is muddled. When I see posts like yours, I think maybe I am panning for gold, looking for the nuggets that will help me move forward. Often I find my way forward by reacting to what you, or Suzi, or someone else has written. Thank you both for continuing to think and write about AI.

Suzi Travis:

What a great comment. You've covered a lot of the thoughts that crossed my mind while I read Nick's thought-provoking article. There was one point, in particular, that got my attention: "As AI is applied to education, will we teach students to be too statistically predictable?" This concern, I think, is one of the sneaky consequences that might be easily missed. In the controlled environment of the school classroom, predictable work might be prized. It's safe. And, according to their grades, a student might do very well being predictable. But kids (at least the ones I've had the pleasure of getting to know) don't want to find the predictable and stay safe there. They want to play and explore in their unpredictable world. To be surprised and discover the unexpected. It would be a shame if students learned that schoolwork was for the predictable, and the real fun happens elsewhere. This is why comments like yours and the work that Nick is doing are so important.

Guy Wilson:

Suzi, thank you. I rarely deal with children under 18, so I cannot speak to that, nor to the conditions in the Australian educational system, about which I have only very small and patchy knowledge. One situation that I can see as possible, though not inevitable, in American universities is the interaction between the need to be predictable for AI software and the high cost of an American college education, which is driving many students towards STEM, health care, and business degrees that are seen as safe pathways into higher-paying or more secure jobs, but that might not be what individual students really want to do. The latter is a driver of the need to get high grades (and of grade inflation) and, though this is more a suspicion, of the student mental health crisis.

This is aggravated by increased teaching loads and the growing use of part-time adjuncts who have to scramble to make a living, often teaching at multiple institutions at once. Both instructors and students may be driven to rely too much on software that cannot fully deliver on its promise. Many of these applications already employ older forms of AI, such as expert systems, that require working in certain ways and learning certain patterns. Now we are adding GenAI to that, which should be more flexible, but which seems to rely too heavily on stereotypes and sometimes shows bias or difficulty in communicating with those who speak non-standard dialects or, and I have not seen much work on this, who may be neurodivergent. (I realize that it can also help both of those groups communicate more effectively in writing, though it again does so through some degree of homogenization.)

This is not necessarily the path that we will see. There are a lot of creative faculty working with AI. I know some of them and think their efforts probably pay off, but what they are doing, and what it sounds like Nick is doing, is also extraordinarily time consuming. I worry about that scaling.

Maybe we will have a breakthrough, not AGI, but maybe we will get better GenAI, or some new form of AI, that is less prone to these problems. I hope we do. I also hope we can begin to approach education (in the USA) in ways that mitigate the effects of our economy or our very divisive political environment.

Suzi Travis:

Thanks for adding some clarity to the issues, Guy. These are indeed difficult issues to solve. Yes, I can imagine the work required for progress is time consuming and may not scale, which adds to the complexity of the issues. Like you, I am hopeful that the conversations will continue despite the costs, and solutions will be found.

Alicia Bankhofer:

Thanks Nick!

As always I find myself nodding all the way through. :)

I think about these questions a lot, as we foreign language teachers are primarily concerned with language acquisition, communicative literacy AND cultural competence in the target language. Writing is a single aspect of learning a language, but an important one, which is why many colleagues are panicking or reverting to traditional means of instruction, not wanting to allow students to push any buttons - thus not grasping the implications of the evolution of writing habits in society in general. To your points:

On 1: This is possible, but not very likely, at least not for a very long time. In terms of real-world performance, in middle schools we see students lacking basic proficiency in their own first languages, unable to write adequately for their maturity. Also, given the age limits and data privacy considerations of the frontier models, many are advocating for introducing AI tools in schools at a later age - not before 14 years. And not all schools or students' families can afford to provide devices (which are necessary for AI exploration). Where I am, in Austria, we are "lucky" in that we have 1:1 in all middle schools. All 10- to 14-year-olds get either tablets or laptops and have a compulsory media training subject for one hour a week. So we have good circumstances, but this is not a universal picture.

On 2: I SO hope that the tertiary level will be more focused on process-oriented procedures. It makes more sense, but this is probably very difficult to implement, and many will resist.

On 3: This is already happening, so I see it as a given. Already my colleagues in academia describe their changed workflows - with AI as a partner, working side by side with them to research and publish papers. As you mention, and as Ethan Mollick notes in his book Co-Intelligence, many knowledge workers are already using many AI tools BUT they are not admitting it (taboo) for fear of criticism OR the fear that their texts will be considered of lesser value or inferior quality because an AI was involved.

On 4: I see this as well. AI will be everywhere. It will be difficult to resist.

Your writing is spot on. :)

P.S. I have actually now introduced the concept of centaurs and cyborgs to my 14-year-old students, and will mention HILP in a subsequent lesson. I think we will increasingly need to explain these concepts so that they gain competency.

Nick Potkalitsky:

Thanks, Alicia, for all this kind and detailed feedback.

I really appreciate your thoughts on Point 1. You think it will take that long? I very much agree that students, particularly now after the pandemic, are struggling with basic skills, competencies, and literacies. That said, I think AI in the marketplace and in student work cycles significantly undermines our attempts to assess students accurately, equitably, and engagingly. I feel like in K-12 we have a choice: keep operating as if we are actually grading student work, or change the way we assess, factoring in the reality of AI. I sense that what you are saying is that K-12 will just stick to its ways and the charade, regardless of mounting evidence to the contrary. Here, I am nodding back at you. That sounds a lot like the K-12 that I know and have worked in for the past 20 years. But I will keep pushing for change nonetheless.

As to Part 2: this is the big work we need to do to respond to AI at the level of instruction and assessment. But so far, not a lot of specific practices are emerging. I am working on a post with a friend of mine. I am hoping to make some headway.

As to Part 3: I am enjoying Mollick's book. I like his advice to use AI in as many settings as possible. That is definitely what I am doing.

As to Part 4: Yes, resistance seems futile, but I am cheering on those who are trying!!!

Be well!!! Nick

Suzi Travis:

What a great article! There are so many great topics to unpack here. One common thread seems to be the challenge in teaching critical thinking in AI-augmented education. I don't think this concern is restricted to K-12 education. It most likely extends through to the workforce. We've heard concerns about a 'digital divide'. The argument is often seen as a problem of not everyone having access to AI. But I wonder if the concern is a little more complex than that. I wonder if the ultimate 'digital divide' is between those who can use AI as a tool in their critical thinking process, and those who can't.

Michael Woudenberg:

Your point #4, that avoiding AI will become more difficult, is already true. Even simple Google searches have used AI for two decades. You game AI when you work on the SEO for your Amazon book page, and you spell-check using AI.

I still think there's a space where AI won't replace the human, but it is fast closing. However, most users of AI don't use it well, so the results become blatantly obvious, meaning real humans can still stand out.

Bottom line, it's a mess.

Nick Potkalitsky:

Yes, Michael, it really makes my head hurt some days.

Riccardo Vocca:

This issue is super interesting! Especially the parts about the choices professional writers will have to make in the future and the different perspectives for different institutions. Thanks for including The Intelligent Friend at the end; I'm looking forward to receiving opinions from your readers!
