Gen-AI x Gen-Z/Alpha:
This Title would be Excellent had I used Gen-AI, but I didn’t, so It’s Better, as am I
By Marta Napiorkowska, Ph.D.
St. Luke’s School, New Canaan, CT
When AI arrived on the educational scene, I wasn’t entirely surprised. I had been teaching about the ramifications of developing AI for several years through a course called “Science in Literature,” which begins with neuroscience-based understandings of consciousness, then turns to tackling millennia-old questions such as “What does it mean to be a human being?” and “What makes human life valuable and good?” through literary texts. Increasingly in recent years, our discussions in class had turned to machine-human hybrids (Are we, as Elon Musk proposed years ago, already cyborgs, given our dependency on our cellphones and Google for memory and recall?) and synthetic life forms. As a result, ChatGPT’s powerful ability to imitate human language from behind the veil of a screen was something we had already discussed through the lens of “the anthropomorphization of technologies,” not to mention the Turing Test.
However, I say “wasn’t entirely” surprised because it is my and my students’ own experience of ChatGPT’s abilities that continues to surprise me: it feels so uncanny and, at times, even eerie. In philosophies of consciousness, the word for how experiences feel is “qualia,” a term coined by philosopher C. I. Lewis and central to David Chalmers’s work on the subjective quality of experience. What it is like, or feels like, to experience the redness of red, or the banananess of a banana – this is what “qualia” refers to. Raise the question of whether an artificial intelligence will ever develop consciousness, and we enter the world of deciding how certain we can be that something is having “qualia,” or experience.
To test some of our hypotheses about the line between human and artificial consciousness, my students and I analyze Philip K. Dick’s Do Androids Dream of Electric Sheep?, a 1968 science-fiction novel that we all agreed is beginning to feel more like science-just-around-the-corner. Famously, the novel blurs the line between humans and androids, using our uncertainty about genuine versus reported emotional and cognitive abilities, especially empathy, as its litmus test. If a person, or an android, reports experiencing care, or behaves as if it cares, how can we be sure they aren’t just using words, or mimicking caring behavior, to fool us? How should we proceed if we can never be sure? We tested ChatGPT in various ways, and true to form, in various ways it told us it wasn’t conscious. But its descriptions of itself invite cognitive creep: “At this point, I must pause and address a meta-philosophical question that arises while writing this paper: am I, the writer, an android?... I do not act based on pre-determined programming designed to mimic autonomous decision-making. My responses, such as this essay, emerge from collaborative interactions, drawing on vast amounts of knowledge…” (emphasis mine).
Knowing that our next reading would be Aldous Huxley’s Brave New World, I sandwiched a particularly challenging paper prompt on Dick’s novel between the two. This year, I added a layer: I wanted to compare my students’ “qualia” of facing challenge and difficulty while writing papers on their own to their “qualia” of using ChatGPT to do so. Part of the assignment asked them to reflect on their experience and put it into words so that, when they considered whether a world in which humans are biologically and socially engineered to “love what they have to do” and be “happy” is preferable to the one in which they live, they would have a recent lived experience of freedom combined with difficulty to compare it to. In the back of my mind was recent research on the neurochemistry of happiness, such as the release of dopamine upon achieving a goal, and on positive psychology, which argues that a meaningful life is happier than a pleasure-filled one.
The prompt gave them six topics from which to choose, some of which included several related questions. One example: “Should androids have moral value? And if so, on what grounds does their value rest? If not, why not? Should they be as valuable as, or more valuable than, animals?” The prompt asked them to write a 3-5 page essay on their own first and submit it, then to introduce the same prompt to ChatGPT and write the same essay using it. They could prompt GPT further as much as they wanted, but doing so was not an explicit requirement: I wanted to see what sort of engagement GPT inspired on its own and didn’t want to force students into any patterns, hoping that their experiences would be more authentic as a result. Then they were to submit their AI-generated essay. Third, they were to reflect and report on their “qualia” while preparing both papers. Finally, the prompt asked them to prove they weren’t androids if they wanted to be in the running for an “A.” Doing so would show me they had a deep understanding of our conversations and the novel. Here is a link to the entire Prompt.
It was this last twist that proved the most challenging and induced the most negative “qualia” because the novel had so convincingly deconstructed the difference between androids and humans. None of the students believed they were androids, of course, but they couldn’t immediately figure out how to prove it, and failing to come up with an approach right away threw a few into an emotional frenzy. They barraged me with questions and accused me of unfairness, insisting the answer was purely subjective. They claimed the task was impossible. Two approached their class Dean to complain and wrote a scathing email about me to the Assistant Head of School. Their trust, according to the AHS, was “low.” Some who externalized their negative emotions also sought validation from their peers in group chats. One student later wrote, “I could sense others’ frustrations over texts and in conversations… many of us felt this was the hardest essay prompt we had been assigned during our time in high school.” Through social contagion, a parallel to the novel’s empathy box, some negative reactions even brought calmer peers down, as I later learned when a couple of students reported being so “stressed out” by these group texts that they had to disengage to get their work done.
However, not everyone reacted so negatively, and it didn’t matter whether they were academic high flyers or not. One student later wrote that he “at first experienced feelings of confusion” but then was “overwhelmed with determination and even excitement to write on such an interesting and open-ended prompt.” Another wrote, “I felt excited about tackling the challenge.” A third: “I became interested in the idea of how one could weave a clever argument into a traditional essay to prove they are human.” One young man, a successful athlete whose rigorous training has inculcated resilience and confidence, calmly figured out an approach to proving he is a human within the first ten minutes. Unfortunately, he started asking me if his approach was right, and so as not to give the whole game away, I had to stop answering questions in class so that others could come up with their own approaches.
And they all did. Even those who had at first reacted the most negatively to difficulty wound up testifying to the meaningful, positive learning they experienced upon completing the project. One wrote, “My initial feelings of confusion and upset turned into inspiration… my essay turned into something I am proud of.” Another concluded, “I felt much more accomplished when I finished my own personal one…” Still another: “I felt a sense of pride...”
By contrast, their “qualia” of GPT’s essay was vastly different: “I didn’t find using ChatGPT to mean much of anything at all,” one wrote. Another found reading GPT’s essay uninteresting, and none felt any emotional attachment to its results. Only a couple tried to prompt GPT further to get it to generate a better essay, but they did not experience the subsequent versions as any more important to them. Those who didn’t re-prompt did not explain why, but perhaps their disinterest shows, once again, a lack of care for GPT’s responses.
Some students’ final reflections also testified to the power and importance not only of authentic effort and overcoming difficulty but also of learning to trust oneself in the face of it: “After completing the entirety of the essay, my perception of self was a high-level thinker and intellect along with a determined student – a big difference from my perception of self after finishing the ChatGPT version.” Another wrote, “The trial and error, the challenge… this essay showed me what it is like to fail and work through my failures… Thank you for creating such an interesting prompt.” A third: “I experienced intense self-awareness and emotional involvement while writing the paper, which required all my focus. I found meaning in the original piece because I could sense the stress and fulfillment when wrestling with the prompt…” Another young woman wrote, “I proved to myself that I could work through a challenging assignment during a stressful, busy week.” Another compared his initial insecurity about the prompt to his final state by claiming straightforwardly, “I became more confident.”
These self-reported descriptors – pride, satisfaction, meaningfulness, confidence – are all associated with increases in neurotransmitters such as dopamine, oxytocin, and serotonin. Or, in the terms of positive psychology, with the different sorts of happiness states that contribute more to life satisfaction than pleasures or feeling good ever can. Moreover, when working together on the projects, students also developed the single thing that contributes most to human happiness, according to the Harvard Study of Adult Development: positive, authentic relationships with other people, the very thing a person cannot have with an AI that lacks “qualia,” no matter how well it mimics internal states. By working together to overcome a challenge, students deepened their bonds.
It is already well known that difficulty and effort improve learning and, further, that generating answers in one’s own words helps one apply what one learns, such as theoretical ideas, to real-world situations. As the many studies reported in Brown et al.’s Make It Stick make clear, struggle during learning isn’t detrimental but rather foundational. It is a sign that learning is in fact happening, even if – and this is key – the learners themselves often experience less confidence while making the effort. As it turns out, effortful work and challenging tasks stress the brain just enough to make the information a student is trying to learn more “sticky,” improving recall and learning. So taking the difficulty away and making things easier effectively reduces students’ learning.
While it may be possible to design a task that makes students put more effort into using GPT, such as prompting it to create increasingly better versions of its initial essay, doing so still puts students in the position of editors, working with material ready-made for them, rather than of generative creators, doing the hard mental work of reasoning and imagining what isn’t yet there on the page and then putting it into their own words in a way others can understand. In fact, my students’ reported experiences also suggest that taking away this sort of effortful difficulty – the kind of work that generative AI in particular takes away – reduces students’ experience of absorption, satisfaction, pride, and meaning. In other words, generative AI takes away the experiences associated with neurotransmitters such as dopamine, oxytocin, and serotonin, which also generate feelings that lead to well-being, positive self-regard, and confidence. Is it possible, therefore, that despite our best intentions in encouraging them to use generative AI as a tool or tutor, we may be inadvertently contributing to the mental health crisis our students are currently experiencing? Is it possible that, in trying to prepare our students for an AI-infused future, we are actually taking away their ability to reinforce neural pathways of intellectual resilience, creativity, and grit? That we are interfering with their brains’ pathways enough to inhibit positive “qualia” of their lives, and thus with their life satisfaction?
Like all of us, students are natural pleasure seekers. Unlike adults, however, middle and high school students often lack the executive function skills to think long-term about the impacts of their choices. Add the pressures of grades and college admissions (consider Jennifer Breheny Wallace’s excellent book Never Enough for a sobering picture of achievement culture), and the temptation to make things faster and easier becomes very strong. Given how quickly students reach for tools that will ease their effort, both in and outside of class, and given how much earlier students will begin using these tools in their educational journeys, I’m confident that generative AI use will increase.
With an academic’s critical distance, I can nevertheless get behind using generative AI and promoting AI literacy. Difficulty is part of human life, and I can argue that, since society is filled with unpleasant surprises and human foibles, young people will always find challenges, exert effort, and overcome them, thereby feeling all the positive feelings of accomplishment. So, will subsequent generations seek and find meaning elsewhere than in academics? In climbing a mountain, or beating a cancer diagnosis? Probably.
But if we don’t make learning and the pursuit of knowledge feel meaningful and rewarding, then we risk losing the value of both, and I don’t know of any successful nation, or perhaps even civilization, that doesn’t value knowledge, intellectual life, and inner virtues. The civilization in Brave New World has dispensed with all three in favor of entertainment, ease, and pleasure, effectively erasing the inhabitants’ desire for anything that requires patience or holds inherent value, such as truth and beauty. Its citizens not only don’t want to overcome difficulty; they are actually incapable of doing so. Their very ability to make an effort has been nurtured out of them by their society.
Anecdotally, without an academic’s critical distance but with my past years as comparison, I see students becoming less intellectually resilient every year in my neck of the private school world, even among those who elect Honors courses. Fewer of them Google words they don’t recognize, and fewer can concentrate long enough to closely interpret a passage. More turn to online study “aids” to summarize their readings and to simplify the language, as well as the ideas, in their assignments. More of them want to take breaks or play games during class. More play games on their phones before school or during their free periods, rather than use that time to actually study. Meanwhile, teachers’ expertise is increasingly called into question by parents going “completely bonkers” to get their kids into college, as David Brooks recently reported in his article “How the Ivy League Broke America” for The Atlantic, and students respond to their parents’ pressures as well. So yes, I worry. I worry that we are inadvertently sowing the seeds not only of dependence on AI but also of generational existential crises. The lessons of Brave New World should remain with us so that we don’t accidentally create it on our own. My students are already noticing the connections between that novel and their recent writing experience, and I’m looking forward to reading what they have to say next.