Discussion about this post

Guy Wilson:

I've been thinking about this post a lot over the last couple of days. During part of that time, I was working on a presentation on AI and also trying to write out some of the confused muddle of feelings I have on the subject. (The latter were triggered by, but are only partly about, David Runciman's 2023 book, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs, which I highly recommend.) I don't know that I disagree with you fundamentally, but there are some ways that we are approaching AI in education that bother me these days. None of this is meant as an attack. I really enjoyed the essay.

One is that we are over-emphasizing writing. I realize that being a writer is at the core of the identity of most people who are writing about this, so they are going to focus on that. It is a feedback loop, and I worry that it is trapping us in too narrow a consideration of AI. Of course we are talking about Large Language Models, but they do much more than just write. I know the programming people also write a lot about AI in education, but I don't see nearly as much about disciplines other than programming, data science, and writing in higher education. Maybe I am not looking hard enough, but I am concerned.

A second thing concerns a point that many are making, but that you put very well: "Students will be required to maintain a detailed log of research insights and experiences, providing instructors with valuable information about the choices they made regarding the technology they collaborated with throughout the journey towards a particular outcome." There is a part of me that agrees with this, but a part that is perplexed and a little cynical. A good chunk of my job for the past several years has been to administer and support our university system's Turnitin instance and our online proctoring software. For years I have seen and heard the opposition of parts of the higher education community to both kinds of products. One of the criticisms is that they constitute some kind of unwarranted surveillance of students (especially the proctoring software). I would actually love to get rid of them. That is not happening anytime soon.

What strikes me about the proposal that students log everything they do is that we are asking them to surveil and report on themselves, internalizing the need for surveillance ever more deeply. I know this is not what is intended, but it seems an inevitable byproduct. Of course, those who stray outside the lines will still log things the way they think they need to be reported, so we will not really be getting out of the cheating mentality. Between this and the fact that reviewing all of this material is going to take a lot of time while overworked adjuncts multiply, we will almost inevitably see software solutions that require students to log all they do, but that also log their actions on the computer, then analyze it all and report back. We are already starting to see a little of that. Under the current economic conditions of universities, I think this is likely to spread and create a whole new surveillance regime.

I have a couple of broader concerns about AI that I think affect education in ways that we are not anticipating.

One is that AIs currently, and for the foreseeable future, need us to behave in consistent ways. They need humans who are statistically predictable. As AI is applied to education, will we teach students to be too statistically predictable? That isn't very well put; I'm still struggling with the concept, but it concerns me.

The other is that we take the inevitability of AIs for granted. It may be that their rise is inevitable, but I've been spending some time looking at different factors that might slow their spread, cause them to stagnate, or provoke widespread rejection and hostility from society. When it comes to history, everything is contingent, but most of us do not remember that. At the very least we need to start considering what might happen under various contingencies. Suppose there are mass student protests against AI as a surveillance technology, as a technology that needs to be decolonized, over environmental and climate issues, or because of its negative impact on jobs and careers. Suppose the 2024 election is so contaminated by deepfakes that there is a mass revulsion against AI. Suppose AI becomes so weaponized by the various world powers that it must be tightly regulated and surveilled. Those are just a few. What do those scenarios do to education? What do they do to politics? To other aspects of society?

We are restructuring education to benefit AI. What happens to education if any of those scenarios (or many others) come to pass? What would it look like if we restructured AI to benefit education?

Alicia Bankhofer:

Thanks Nick!

As always I find myself nodding all the way through. :)

I think about these questions a lot, as we foreign language teachers are primarily concerned with language acquisition, communicative literacy AND cultural competence in the target language. Writing is a single aspect of learning a language, but an important one, which is why many colleagues are panicking or reverting to traditional means of instruction, not wanting to allow students to push any buttons - and thus not grasping the implications of the evolution of writing habits in society in general. To your points:

On 1: This is possible, but not very likely, at least not for a very long time. In terms of real-world performance, in middle schools we see students lacking basic proficiency in their own first languages, unable to write adequately for their age. Also, given the age limits and data privacy considerations of the frontier models, many are advocating for introducing AI tools in schools at a later age - not before 14. And not all schools or students' families can afford to provide devices (which are necessary for AI exploration). Where I am, in Austria, we are "lucky" in that we have 1:1 in all middle schools. All 10- to 14-year-olds get either tablets or laptops and have a compulsory subject of media training for one hour a week. So we have good circumstances, but this is not a universal picture.

On 2: I SO hope that the tertiary level will be more focused on process-oriented procedures. It makes more sense, but this is probably very difficult to implement, and many will resist.

On 3: This is already happening, so I see it as a given. My colleagues in academia already describe their changed workflows - with AI as a partner, working side by side with them to research and publish papers. As you mention, and as Ethan Mollick notes in his book Co-Intelligence, many knowledge workers are already using AI tools BUT are not admitting it (it is taboo) for fear of criticism OR fear that their texts will be considered of lesser value or inferior quality because an AI was involved.

On 4: I see this as well. AI will be everywhere. It will be difficult to resist.

Your writing is spot on. :)

P.S. I have actually now introduced the concept of centaurs and cyborgs to my 14-year-old students, and will mention HILP in a subsequent lesson. I think we will increasingly need to explain these concepts so that students gain competency with them.

