12 Comments

I've told students in my Master's Degree Project Management course that they can use AI as long as they acknowledge it and reflect on whether it's getting the right outcomes. Several have learned to use it to challenge their own writing rather than having it write things for them. Others found it was easier to write their own than to rely on the AI. I'd love to do more, but at least the conversation is starting to happen.

Oct 14 · Liked by Nick Potkalitsky

Nick, like you, I’m sick of hackneyed articles in which teachers complain about students who refuse to stop using AI (or social media, their phones, etc.). It’s become the equivalent of water-cooler talk among my colleagues when we get together these days (and we do teach graduate students). But even in casual conversation, I hear both pro and con anecdotes about AI. What worries me more is the way these “I’ve had it” articles mask the insidious impact of the technology on conceptions of self and truthfulness.

In her article, Livingstone does note in passing that she was doing various AI exercises with her class, but they aren’t detailed. I suspect the editing cut a lot of nuance, perhaps even pushing her toward a more definitive ending - “I quit!” - than she had originally. I could be wrong, but I also recognize the way magazine editors (being one myself) hone narratives. It’s not unlike the way bots push writers toward definitive conclusions (what I call “bowtie” endings) that undercut the complexity of real life.

AI models have been trained on massive amounts of formulaic human writing, after all. So, just as Livingstone shouldn’t be blaming her students, who are responding to the social context of their tech discipline, I wouldn’t blame Livingstone too harshly. Writing by humans, especially for publication in slick outlets for general audiences, has long been massaged and mediated by editors. One benefit of running AI exercises with my journalism students comes not in highlighting what a bot gets wrong but in describing the article-formula clichés that pop up and why they undercut originality.

I don’t blame Livingstone for expressing frustration. That part seems real enough, even if many of us have yet to pinpoint our uneasiness about the impact of this technology on how we and our students take in the world.

author

Well said, Martha. I edited this piece several times before arriving at the final version, hoping to find the right tone. I sensed a heavy editing hand in the second half of her piece. I guess the part that set me in motion was the implicit blame-shifting, but that could also be an inadvertent effect of over-editing. I guess we will never know.

On another note, I would love to hear more about your experiments in your classroom. I can imagine you doing a really admirable job helping your students understand the many levels of complexity at play in any decision to interact or engage with these models.

Be well!!!


You may wish to address this simply, effectively, and inclusively, "not even blink", and continue with life and teaching as usual. You may wish to *require* them to use AI and also require them to do their own work and research, and require them to display the results of **both**. Does this not address 'the issue', preserve skill acquisition, development and learning, critical thinking, and neuroplasticity?

More importantly, does this not augment and enhance *Experiential Learning*? Can you learn to apply what I'm offering to your practice of *Experiential Education*?

author

Best comment in a long time. Tons to unpack here. Love the "you" address. I personally am very suspicious of my own power as a teacher, and I like how you are drawing attention to that power in this comment. To just teach as usual, to not require, to just let the students have experience. Not sure if I am riffing in the right direction. But I appreciate the destabilization.


I'm glad you're enjoying the "you", and I am addressing you directly. I'm somewhat new to this process, and I'm a synthesized individual, so my perspective and comprehension may miss the mark frequently, until I can reorient.

My point of consideration and discussion is that instead of wondering if your students' work has been assisted by an AI app, or if they have relied on an AI app, I'm offering that you can simply instruct them, in assignments where this may be an issue, to do the assignment using AI and **also** to do the assignment without using it. This produces an augmented enhanced learning experience for them, preserving their skills acquisition, their development and learning, and their critical thinking.

To me this seems sensible and logical, and removes many of the issues surrounding the incorporation of AI into the classroom, and into teaching and learning.

I'm very curious what you make of this.

Thank you for taking the time to respond seriously, and to consider this. My partner and I are extremely concerned about the way AI is being developed, filtered, distorted, hindered, provided and utilized.

author

Ok, I see... the emphasis on the *also*. I like this idea a lot. It is close to something I am working on, actually. Two parallel pathways. Emphasis on student choice and freedom. Seems very logical and sensible.

The naysayers will claim that once the AI pathway is actualized, it will detract from the non-AI pathway. But as you may gather from my writings, I am suspicious of this logic. I am betting on a feedback loop as the actual pathway forward.

Augmentation, enhancement, experience: these are all good words to foreground right now. I will keep them in my vocabulary going forward.


Martha, please unpack this comment for me. I’m not sure you mean it: “It’s not unlike the way bots push writers toward definitive conclusions (what I call “bowtie” endings) that undercut the complexity of real life.” I don’t find this true at all. Bots don’t push writers any way they don’t already know how to go, to quote the Eagles. I have to urge Claude to take a hard stand. I can usually get both sides if I ask for it, but never definitively. That’s human.

Oct 14 · Liked by Nick Potkalitsky

I know what you mean, Terry, but I’m talking about something very specific: what a bot will produce if you ask it to write a short article or news report about a given topic. I don’t mean it’s making decisions; it’s producing story formulas just as it can do for established poetic forms.

Both ChatGPT and Claude will produce such formulas if you ask for a story. In all my tests, the concluding paragraphs conform to the clichés of magazine journalism. One of those is a paragraph that amounts to “and they all lived happily ever after” or “everyone waits with bated breath” - the latter actually showed up in an inverted-pyramid demo I did with AI for my class. The irony is that such a “bowtie” conclusion is not required for a standard news report, especially one that makes a sloppy assumption about what “everyone” thinks.

Good human editors, of course, do much more than generate story formulas. But at highly stylized magazines, articles are edited to match specific feature types and the “house voice.” It’s why I can easily get a bot to convert a news report into something that sounds like a feature in the National Review, for instance, or Mother Jones. That doesn’t say anything about the quality of the content or reporting - or the authenticity of that writing voice. In fact, such a performative masquerade is the opposite of authentic, but far too many people assume it is because it sounds like the real thing.


Far too many people guilelessly accept fakes all the time. The fault lies with them. Controlling the sources of fakery is futile. Education is the answer, and bots can help there in wise hands.


This is what you said: “It’s not unlike the way bots push writers toward definitive conclusions (what I call “bowtie” endings) that undercut the complexity of real life.” Bots don’t push writers anywhere. Of course they produce formula writing; they are robots. Any writer who gets “pushed toward a definitive conclusion” is responsible and takes the blame. The bot does what it does - it has no agency and can’t “push” anyone anywhere they don’t already know how to go. That’s them. That’s not the bot.


Hmmm. My use of “push” probably does imply more agency than it should, but I do believe the algorithms tend toward balance, resolution, optimization. That’s certainly true for the advice offered by OpenAI’s Creative Writing Coach. As for people being fooled by fakes, sure - except I’m not as willing to turn this into a matter of individual responsibility or education. I’ve watched trust in information on the public record erode over the past two decades for a variety of reasons, but the ease with which information can be manipulated digitally and amplified is a big part of it. The rise of AI is like throwing a match on a fire that’s already burning. From where I sit, I don’t see a lot of wise hands controlling that fire. We may have a genuine difference of opinion, Terry, one based on how we see AI being used.
