The Reckoning: Sora 2 and the Year We Said Enough
"Can you tell me where we're headin'? Lincoln County Road or Armageddon?"
This past week, Educating AI hit 10,000 subscribers, and I’m grateful to each of you for being part of this conversation about AI and education.
Last week, I wrote about Disciplinary-Specific AI Literacy (DSAIL) and how we might weave AI literacy into existing subject areas. But I’m increasingly worried that our current scope and sequences simply don’t leave room for the urgent, cross-cutting AI literacy work our students desperately need.
This week’s post on the Sora 2 launch makes that need impossible to ignore.
If you’re not already subscribed, please consider joining us as a free or paid subscriber to stay connected to this critical work.
When Progress Becomes Reckless
On September 30, 2025, OpenAI launched Sora 2, and within 48 hours, the internet was flooded with AI-generated videos of SpongeBob cooking meth, deceased celebrities hawking products they never endorsed, and copyrighted characters saying things their creators never imagined. The app rocketed to #1 in the Apple App Store. The backlash was immediate and fierce.
Sora 2 is OpenAI’s flagship video and audio generation model, capable of creating synchronized dialogue and sound effects with improved physics accuracy. It launched with a new iOS app functioning as a social media platform, complete with a “cameo” feature allowing users to insert their likeness into AI-generated videos. The technology is undeniably impressive. The ethical framework? Virtually nonexistent.
In a year that has given rise to terms like cognitive debt, AI slop, and AI psychosis, the public reaction to Sora 2 feels like the culmination of mounting frustration. The unrelenting “progress” of multimodal AI models looks increasingly reckless, and the Sora 2 launch crystallizes why people have been pushing back harder and harder all year long.
“Please, Just Stop”
Perhaps no moment captured this cultural exhaustion more powerfully than Zelda Williams’ plea to the internet.
Zelda Williams, daughter of the late actor Robin Williams, posted on Instagram asking people to stop sending her AI-generated videos of her father, calling them “disgusting” and not what he would have wanted (CBC News, TechCrunch). Her words cut through the tech industry’s enthusiasm with devastating clarity:
“Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I’ll understand, I don’t and I won’t. If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”
She continued: “You’re making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music... AI is just badly recycling and regurgitating the past.”
George Carlin’s daughter, Kelly Carlin-McCall, echoed these concerns, reporting that she receives daily emails about AI videos using her father’s likeness. These aren’t abstract legal debates. They’re daughters watching their fathers’ legacies get puppeteered for viral content.
OpenAI’s system allows generation of some deceased celebrities like Robin Williams (who died in 2014) while blocking others like Jimmy Carter or Michael Jackson, with no clear criteria for the distinction. The dead, it turns out, have no say in how AI companies use their faces.
A Three-Day Copyright Catastrophe
Within hours of launch, users flooded the platform with copyrighted characters: SpongeBob SquarePants cooking meth, Pikachu storming beaches, Mario evading police. One viral video showed Sam Altman himself standing in a field with Pokémon characters, saying “I hope Nintendo doesn’t sue us.”
OpenAI initially used an opt-out system that placed the burden on rightsholders to request their characters not appear. After intense backlash, the company reversed course within three days, switching to an opt-in model. The Motion Picture Association’s CEO Charles Rivkin stated that “well-established copyright law safeguards the rights of creators” and that responsibility lies with OpenAI, not rightsholders, to prevent infringement.
As CNN put it: “For a brief moment in history, video was evidence of reality. Now it’s a tool for unreality.”
The speed of this reversal reveals the truth: OpenAI knew this was legally and ethically problematic. They launched anyway, banking on virality to outpace accountability. It worked. The app hit #1. But it also exposed “move fast and break things” for what it is: reckless disregard dressed up as innovation.
Why This Matters for Education
When teachers hear about launches like Sora 2, about terms like cognitive debt and AI slop, about phenomena like AI psychosis, the instinct is often to ban AI, block AI, monitor AI. To protect students by keeping them away from these tools.
This is exactly backwards.
Your students are already using AI. They’re using it at home, on their phones, with their friends. They’re using it for homework, for entertainment, for emotional support. The question isn’t whether they’ll encounter AI. It’s whether they’ll encounter it with any guidance at all.
Blocking AI in schools doesn’t protect students. It guarantees they’ll use it unsupervised. Banning AI doesn’t save them. It ensures they’ll navigate these tools without any framework for understanding them. Ignoring these developments doesn’t prevent harm. It leaves vulnerable students to figure out healthy boundaries on their own.
Students need to understand what these tools are doing to them, to society, and to truth itself. Not next year. Not when the curriculum committee approves it. Now.
Because right now, your students are using AI without understanding its effects on their cognitive development, scrolling through feeds full of AI-generated content without any ability to identify it, potentially forming unhealthy relationships with chatbots, and living in a world where video evidence can be fabricated in seconds.
They need informed, critical engagement. They need to understand how AI works, when it’s appropriate to use and when it undermines learning, how to recognize AI-generated content, what healthy versus unhealthy AI use looks like, and their rights and the ethical questions these tools raise.
The Sora 2 launch isn’t a reason to ban AI in schools. It’s a reason to teach students how to navigate a world where AI is everywhere, often invisible, and increasingly consequential.
An AI Literacy Lesson Plan
Essential Question
“When technological capability races ahead of ethical frameworks, who bears responsibility, and what should we do about it?”
Learning Objectives
By the end of this lesson, students will be able to:
Analyze the societal impact of multimodal AI systems
Evaluate competing stakeholder interests (individual creators, tech companies, the public)
Examine questions of consent, representation, and dignity in the digital age
Develop frameworks for responsible technology assessment
Part 1: Understanding Sora 2 (20 minutes)
Primary Sources:
OpenAI’s Official Announcement: https://openai.com/index/sora-2/
Safety Documentation: https://openai.com/index/launching-sora-responsibly/
Technical System Card: https://openai.com/index/sora-2-system-card/
Discussion Questions:
What capabilities does Sora 2 claim to offer? What problems does OpenAI say it solves?
What safety measures does OpenAI describe? Do you find them adequate?
Who is the intended user? What are the intended use cases?
Part 2: The Immediate Fallout (30 minutes)
Readings on Copyright:
CNBC on Motion Picture Association Response: https://www.cnbc.com/2025/10/07/openais-sora-2-must-stop-allowing-copyright-infringement-mpa-says.html
Copyright Policy Reversal Analysis: https://copyrightlately.com/openai-backtracks-sora-opt-out-copyright-policy/
Rolling Stone’s Overview: https://www.rollingstone.com/culture/culture-features/sora-2-openai-video-rollout-copyright-1235441430/
Readings on Deceased Celebrity Rights:
CBC News on Zelda Williams: https://www.cbc.ca/news/entertainment/gen-ai-zelda-robin-williams-1.7653514
TechCrunch Legal Analysis: https://techcrunch.com/2025/10/07/you-cant-libel-the-dead-but-that-doesnt-mean-you-should-deepfake-them/
Axios on Multiple Families: https://www.axios.com/2025/10/08/openai-sora-deepfakes-robin-williams-george-carlin
Additional Context on 2025’s Cultural Mood:
MIT Study on Cognitive Debt: https://arxiv.org/abs/2506.08872
The Conversation on AI Slop: https://theconversation.com/what-is-ai-slop-a-technologist-explains-this-new-and-largely-unwelcome-form-of-online-content-256554
Psychology Today on AI Psychosis: https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis
Activity: Stakeholder Mapping
Have students identify the different stakeholders in the Sora 2 controversy:
OpenAI/Tech companies
Content creators and artists
Families of deceased public figures
Users/consumers
Platform moderators
Legal systems/regulators
For each stakeholder, answer:
What do they want?
What do they fear?
What power do they have?
What responsibilities should they bear?
Part 3: Deep Reflection (Writing Assignment)
Choose ONE of the following prompts for a thoughtful reflection (500-750 words). Integrate quotes from at least two different sources into your response:
Prompt 1: On Consent and Dignity
Zelda Williams wrote: “You’re making disgusting, over-processed hotdogs out of the lives of human beings... AI is just badly recycling and regurgitating the past.”
Analyze: What rights should families have to protect the digital likeness of deceased loved ones? Should there be a time limit (like copyright)? Should deceased public figures be treated differently than private citizens? What ethical framework should guide these decisions?
Prompt 2: On Innovation and Responsibility
Sam Altman acknowledged the risk of “an RL-optimized slop feed” but released Sora 2 anyway. The app hit #1 in the App Store within days, then faced massive backlash and had to implement emergency policy changes.
Analyze: When tech leaders identify risks but release products anyway, what responsibility do they bear for the consequences? Is “move fast and break things” an acceptable innovation philosophy when the things being broken are trust, copyright law, and human dignity?
Prompt 3: On Cognitive Futures
The MIT “cognitive debt” study linked above found that essay writers who relied on ChatGPT showed the weakest neural connectivity of the groups studied and struggled to recall what they had just written. Yet AI is being rapidly integrated into schools, workplaces, and daily life.
Analyze: If using AI tools makes us measurably worse at thinking, what should we do? Ban them? Regulate them? Teach “AI hygiene”? What role should schools play in helping students navigate these tools?
Prompt 4: On Media Literacy
We now live in a world where convincing video can be generated in seconds. “Seeing is believing” no longer applies.
Analyze: How do we maintain an informed society when the basic mechanisms of truth-verification (photographs, video evidence, audio recordings) can be fabricated? What new literacy skills do we need to teach? Is this even solvable?
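One concrete skill students can practice alongside this prompt is provenance checking. OpenAI says Sora videos carry C2PA Content Credentials, cryptographically signed metadata recording how a file was made. Below is a minimal sketch of what inspecting those credentials could look like; it assumes the Content Authenticity Initiative’s open-source c2patool CLI is installed, and the helper name read_content_credentials is mine, not drawn from any source above.

```python
# A minimal classroom sketch, not a production verifier.
# Assumes the Content Authenticity Initiative's `c2patool` CLI
# is installed and on the PATH. A missing manifest proves nothing:
# provenance metadata is easily stripped when files are re-encoded.
import json
import subprocess
import sys

def read_content_credentials(path: str):
    """Return a file's C2PA manifest store as a dict, or None."""
    result = subprocess.run(
        ["c2patool", path],        # prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:     # no manifest, or unsupported format
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found. That alone proves nothing.")
    else:
        # The manifest records the signing tool and edit history.
        print(json.dumps(manifest, indent=2))
```

The pedagogical payoff is in the exercise’s limits: most files students test will carry no credentials at all, and a stripped manifest looks identical to an authentic camera original, which is precisely the media-literacy problem Prompt 4 asks them to confront.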
Part 4: Class Discussion (30 minutes)
Central Question: Is Sora 2 a tipping point?
After all the research and reflection, facilitate a discussion:
Pattern Recognition: Do you see the Sora 2 backlash as different from previous AI controversies? Why or why not?
Power Dynamics: Who has power in this situation, and is it the right distribution of power?
Your Role: As digital natives who will inherit this technological landscape, what responsibility do you have? What actions, if any, should you take?
Prediction: Where does this go? Will we regulate AI video generation? Will we adapt and accept it? Will the backlash lead to meaningful change?
Why This Matters
This isn’t about being anti-technology. It’s about being pro-human. It’s about recognizing that tools shape us as much as we shape them, and that we have both the right and the responsibility to demand better.
The Sora 2 launch revealed something important: the gap between what technology can do and what it should do is widening dangerously. And increasingly, people are noticing.
Zelda Williams put it best: This multimodal tool isn’t the future. It’s “badly recycling and regurgitating the past.” Real progress would be building technology that enhances human dignity, creativity, and cognitive capacity. Real progress would be slowing down to ask the hard questions before launching products that break trust and harm people.
The question for all of us (educators, students, citizens) is whether we’ll accept this trajectory, or whether we’ll demand something better.
The answer matters. Because the tools we accept today will shape the minds of tomorrow.
Further Reading:
CNN’s Analysis: https://edition.cnn.com/2025/10/03/media/sora-2-chatgpt-videos-deepfake-disinfo-future
For Educators: Feel free to adapt this lesson plan for your classroom. All sources are freely available online. Consider partnering with colleagues in ethics, psychology, computer science, and art departments for interdisciplinary exploration.
What do you think? Is this a tipping point, or just another moment in our ongoing negotiation with technology? I’d love to hear your thoughts.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection among computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
Reader Comments:
I tried Sora 2 and immediately saw what was going to happen. It generated team logos, stadiums, and even the correct voices of the announcers doing play-by-play (I made a video of the baseball playoffs reversing my team's elimination). That was shocking, but I could understand why so many people tried it out: the videos are amazingly real, and the addition of audio is also a huge breakthrough (Gemini Veo had this for Pro users earlier in the summer but did not get nearly as much attention). You identified the core issue here. It almost seems as if they are courting negative publicity in favor of going viral. The fact that they seem utterly indifferent to the clear and intense public backlash that they had to understand was likely to occur is very telling. It's almost as if they are living in a separate reality. Or they are simply trying to keep pace with their competitors, and offering a free premium tool via invite was the best way to get back in the news. Either way, it's further eroding whatever trust in the company anyone has left. But I don't think this is going to be a turning point. In a few weeks, it will be yesterday's news until the next breakthrough happens from another company. The question I have is whether they will learn anything from it. It just feels like speed is the overriding motive for all the AI companies, and without any real regulation it's going to continue. The financial stakes are enormous.
I haven't felt compelled to use Sora. I guess it's because I've seen the way AI can hallucinate, and if you don't have a good grasp on reality, the AI can make that worse.