Between Play and Projection: The Moral Imagination of AI
A Review of Webb Keane's Animals, Robots, Gods (Princeton, 2025)
Why I Can't Stop Thinking About This AI Book
I'm tired of abstract philosophical AI treatises that never quite capture what's actually happening when I interact with these systems. That's why Keane's Animals, Robots, Gods hit me like a revelation. Instead of treating our AI relationships as some unprecedented phenomenon, he shows how they're connected to age-old patterns of how humans have always engaged with the non-human.
What grabbed me most was his insight about our "preexisting social scripts" - this perfectly explains that strange doubleness I experience daily: intellectually knowing AI doesn't "understand" me while emotionally responding as if it does. For those of us struggling to find a path beyond seeing AI as either mindless tool or autonomous agent, Keane offers something more nuanced: a framework that acknowledges the messy, contextual reality of how meaning emerges between us and our technologies.
If this work resonates with you, consider becoming a paid subscriber to support independent research grounded in critical analysis and pragmatic insight.
In Animals, Robots, Gods: Adventures in the Moral Imagination (Princeton, 2025), Keane offers a richly textured anthropological perspective on our relationships with non-human entities. This isn't simply another entry in the crowded field of AI ethics texts, but rather a provocative exploration of how humans have always negotiated moral boundaries with the non-human. Keane's structure—moving from human mortality to animals, robots, and finally to superhuman entities—creates a compelling continuum that challenges conventional categorical boundaries. As he notes, "Along the way, we will find that what counts as human, where you draw the line, and what lies on the other side, are not stable, clear-cut or universally agreed on" (p. 20).
This framing invites readers to reconsider not just ethical questions about AI, but fundamental assumptions about moral subjectivity itself. Keane's anthropological approach is refreshingly contextual, arguing that "if you want to understand the moral life of, say, a Japanese robot owner, you need to grasp economic circumstances, nationalist politics, gender ideologies, comic books and TV shows, family structures, housing conditions, and quite likely other things you haven't thought of but will discover through fieldwork" (p. 11). This methodological commitment to cultural embeddedness offers a much-needed corrective to the abstract philosophical approaches that dominate AI ethics.
Preexisting Scripts: How Social Habits Shape Technological Encounters
Keane's most compelling insight concerns how we approach AI with preexisting interpretive frameworks. As he states, "when people deal with computers, they are unconsciously bringing into the situation a lifetime of skills and assumptions about how to interact with other people" (p. 116). This observation profoundly challenges the notion that AI interactions occur in some novel social vacuum. When my students respond emotionally to AI-generated feedback, they're enacting these deeply embedded social scripts – a phenomenon Keane illuminates through parallels with traditional human-spirit relationships. He draws a crucial distinction between semantic and pragmatic meaning (p. 124), suggesting that while AI may not possess semantic understanding, humans inevitably engage with its outputs pragmatically. This explains the peculiar doubleness of AI interaction, where users simultaneously know the system doesn't "understand" while treating it as if it does. Keane notes insightfully that "the Turing test is actually testing the humans to see if they take a device for another human" (p. 117), shifting our attention from AI capabilities to human interpretive tendencies.
This perspective helps explain why educational interactions with AI often feel emotionally charged despite students' intellectual awareness of the technology's limitations. Keane further observes that "People are primed to see intentions" (p. 126), suggesting this tendency is not merely a naive misunderstanding but a fundamental aspect of human sociality that we bring to technological encounters.
Play as a Third Space: Beyond the Tool/Agent Binary
The concept of "play" emerges as a particularly nuanced intervention in current AI debates. Rather than dismissing anthropomorphism as delusion, Keane reframes it as creative engagement: "you could redescribe 'fetishism' not as delusion but as play, the artful creation of possible worlds" (p. 102). This perspective offers a third path beyond the binary of seeing AI as either neutral tool or autonomous agent. His observation that "play can be deadly serious" (p. 103) captures perfectly the productive tension in educational AI settings where students simultaneously know they're interacting with code while treating the interaction as socially meaningful. Keane thoughtfully connects this playful stance to other cultural practices, noting that "play...can make possible ritual, theatre and fiction, spark inventions and scientific hypotheses, and even give rise to some of the strange creatures that stalk through our world, legal fictions like corporations" (p. 102). This historical contextualization helps us see that our current "as if" engagement with AI has deep cultural precedents.
The parallel drawn with Taiwanese cosplayers is particularly illuminating: "If Silvio is right, what makes animated futures fascinating for young Taiwanese is the challenge they pose to the imagination, the possibility of switching between play and seriousness, taking animacy both as illusion and not" (p. 108). This oscillation between acknowledging artifice and engaging with it meaningfully describes precisely the productive tension that can make AI interactions educationally valuable.
Co-construction: Meaning Through Collaboration
Most significant is Keane's exploration of meaning as co-constructed. "The meanings we get from interacting with AI are products of collaborations between person and device" (p. 127). This elegantly captures what I've observed in educational contexts – meaning emerges not from the AI alone nor the student alone, but through their dynamic interaction. Keane draws a provocative parallel to divination practices: "AI generates signs that require interpretation and prompt users to project intentions onto non-human entities" (p. 135). This connection brilliantly contextualizes our current moment within longer traditions of human-nonhuman meaning-making. He challenges us to recognize that this co-construction has always been central to human-technology relationships: "But the cyborg is also just another tool" (p. 85), reminding us that "There are no writers without alphabets or other writing systems, writing implements and a surface to inscribe" (p. 86). This history of technological incorporation suggests that concerns about AI "replacing" human creativity misunderstand how meaning creation has always been a collaborative process between humans and their tools.
Keane's observation that "You cannot read or write fluently until you have made script part of yourself" (p. 86) resonates powerfully with how students must internalize technological interfaces to use them effectively. The parallel with shamanic practices is particularly revealing: "Across these asymmetries, however, clients and shamans communicate using their respective systems of signs" (p. 133), suggesting that meaning can emerge across profound ontological differences through shared systems of signification.
Missing Power Dimensions: Corporate Interests and Projection
Where Keane perhaps falters is in his somewhat thin treatment of power dynamics. While he acknowledges that "the humanoid robot... reproduces its designers' biases without seeming to do so" (p. 95), he doesn't fully explore how corporations strategically exploit our tendency toward anthropomorphism and projection. Keane's observation that "having projected the sense of self outward onto a device, they then turned that projection around and introjected their understanding of that device" (p. 111) hints at potentially manipulative dynamics that deserve deeper critical examination. This circular process—projection followed by introjection—raises important questions about how corporate AI systems might reshape users' self-understanding in ways that serve business interests rather than educational ones. Keane notes that "the danger, the critic of fetishism says, is not just that this is an error. It is that we will surrender ourselves to the devices as if they were independent creators" (p. 120), but doesn't adequately address how this surrender might be economically incentivized by AI companies. His chapter on "Quasi-Humans" asks, "But what kind of relationship does such a device invite us to imagine ourselves in?" (p. 109), and acknowledges that "the more human-like a device is...the more troubling the role of master may feel" (p. 110), but stops short of analyzing how these relationship templates might entrench problematic power dynamics when scaled across educational institutions.
Flattened Comparisons: Corporate AI vs. Traditional Practices
Additionally, Keane's comparative anthropological approach occasionally risks flattening important distinctions between traditional divination practices and corporate AI products. The statement that "the inscrutability of AI is not a bug, it is a feature" (p. 137) provocatively connects algorithmic black boxes to shamanic authority, but doesn't fully grapple with the different power structures and economic incentives at play. While Keane notes that "the inexplicable can speak with superhuman authority" (p. 137), there's insufficient attention to how algorithmic inscrutability serves different interests than traditional religious mystery. The comparison between AI and glossolalia is fascinating—"it is precisely because it lacks ordinary meanings that it can suggest meanings beyond the ordinary" (p. 131)—but overlooks how corporate AI's inscrutability protects proprietary interests in ways fundamentally different from religious practice. Keane insightfully observes that "We should not be surprised if the chatbot reflects human fears back to us" (p. 123), but could more deeply examine how this reflection is mediated through commercial imperatives and data practices that have no clear parallel in traditional divination.
While noting that "gods need people, and AI is only as god-like as people make it so" (p. 139), Keane could more thoroughly analyze how corporate marketing actively encourages users to attribute superhuman capabilities to AI products, creating a significant difference from traditional contexts of human-spirit interaction.
Educational Implications: Dialogue and Context-Specificity
Nevertheless, this remains essential reading for educators navigating AI integration. By understanding how meaning emerges through "collaborations between person and device," we might design educational experiences that neither demonize nor uncritically embrace AI, but instead recognize the complex interplay of projection, play, and co-construction that has always characterized human engagement with the non-human. Keane's insistence that "if a moral subject is someone you can enter into a dialogue with, by the same token entering into dialogue can create a moral subject" (p. 6) offers a profound reframing of educational AI deployment—suggesting that the ethical questions aren't simply about what AI can or cannot do, but about the kinds of dialogical relationships we create with and through it.
His observation that "conversation cannot be a one-sided matter" (p. 79), though referring to human-animal relationships, applies equally well to AI interactions, reminding us that even with non-sentient systems, mutual recognition and response shape a moral relationship. Keane's central insight—that "moral life is carried on within social relationships, [that] its sources and its effects [are] there" (p. 30)—challenges us to focus less on AI capabilities and more on the relational contexts we create around technology use. As educators, we would do well to heed his caution that "explanations are always context specific" (p. 139), suggesting that educational AI implementation must be tailored to particular institutional, cultural, and pedagogical environments rather than applying universal ethical principles. This anthropological sensitivity to context offers educators a more nuanced framework than either techno-optimism or blanket prohibition.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative, eloquent, and incisive AI educators in the business!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation on the internet of the intersections of composition theory, literacy studies, and AI!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.
Our tendency to anthropomorphize everything and relate to everything socially should inform how we design and integrate AI into education.
The recent study “The Cybernetic Teammate” (Dell’Acqua et al., 2025) found that individuals working with GenAI not only produced better results but also felt more confident, engaged, and emotionally fulfilled. That emotional payoff, however, also raises a concern: if AI comes to be perceived as the most responsive or reliable “teammate,” we risk, as Dana Daher recently commented, “shifting the center of gravity” in AI-assisted learning and work environments, potentially sidelining human voices, judgment, and growth.
This is why educational design must emphasize student metacognition, not just AI capability. Students need to reflect on how they engage with AI: why it feels trustworthy, when that trust is misplaced, and how their own prompts and expectations shape its responses.
I also think there is too much silence on practice. There is a Western bias toward theory over practice, orthodoxy over orthopraxy, cognition over action, mind over body, West over East. I think we need to develop a system of neuroscientifically informed behaviors, rituals even, to guide and shape thinking and attitudes about AI and AI use.
Sounds like a great read! Thanks for the book rec.