Why Do People Hate AI?
Intelligence vs. Agency: The Category Error at the Heart of Our AI Anxiety
If you find value in our work and want to help shape the future of AI education and awareness, consider becoming a paid subscriber today. To make it easy, we’re offering a forever 20% discount through this special link:
Your support directly helps us broaden our reach and deepen our impact. Let’s build this together.
I confess that I've become a reluctant participant in what might be called "AI discourse"—that endless, circular conversation about artificial intelligence that now dominates everything from dinner parties to academic conferences. My students ask about it constantly. My colleagues debate whether to ban it or embrace it. My email inbox fills with newsletters promising to help me "leverage AI" or "stay ahead of AI disruption." There's no escaping it.
This discourse tends to oscillate between techno-utopianism ("AI will solve climate change and cure cancer!") and apocalyptic handwringing ("AI will destroy humanity!"). I find both extremes exhausting. But beneath the hyperbole lies something genuinely interesting: a profound collective anxiety about what these technologies mean for human uniqueness.
It's an anxiety I share. When ChatGPT wrote a serviceable essay on Kant's categorical imperative in seconds—something that would take my students weeks and many would still get wrong—I felt a strange mix of admiration and dread. Not because I feared for my job security (though perhaps I should), but because something I considered quintessentially human—philosophical reasoning—was being approximated by a machine.
This anxiety isn't new. We've been wrestling with the meaning of artificial intelligence since the term was coined in 1956. But something has changed. The language models released in the past few years have breached a threshold. They've moved from obviously mechanical to uncannily human-like, and this shift has intensified our collective unease.
Why do we hate AI? Or perhaps more accurately, why are we so profoundly uncomfortable with it? I believe it's because we're making a category error that strikes at the heart of our self-conception. We're mistaking agency for intelligence. And understanding this distinction, articulated brilliantly by philosopher Luciano Floridi, might help us develop a healthier relationship with these technologies.
Our Surface Anxieties
Before diving into philosophical waters, let's acknowledge the legitimate surface-level concerns that fuel resistance to AI. These aren't merely distractions from deeper issues—they're real problems that deserve serious attention.
First comes the disruption of established orders. I see this in academia, where professors who've spent decades refining their pedagogical approaches now face students armed with AI writing assistants. I see it in publishing, where editors who pride themselves on nurturing writers now evaluate manuscripts alongside AI-generated alternatives. I see it in art, where craftspeople who've dedicated their lives to mastering techniques now compete with algorithms that can mimic those techniques instantly. This disruption isn't just economic—though the economic dimension matters enormously—it's about dignity and identity. The poet who has spent forty years finding her voice now watches an AI approximate that voice in seconds. The sense of violation isn't primarily about money; it's about meaning.
Then there are the technical shortcomings, which create a distinctive cognitive dissonance. AI systems hallucinate facts with breathtaking confidence. They make basic errors that no educated human would make. They miss context and nuance that seem obvious to us. Yet simultaneously, they generate outputs that possess coherence and plausibility—not necessarily human-quality work, but starting points that appear to demonstrate patterns of reasoning. This inconsistency creates a jarring effect, like encountering a gifted student who refuses to apply himself. As I've explored in my recent series on AI agency and reasoning, these systems aren't producing finished work so much as scaffolding that requires human collaboration to become valuable.
The economic anxieties are perhaps the most concrete and immediate. Job displacement isn't hypothetical—it's happening now across sectors from customer service to content creation. White-collar workers who thought automation was something that happened to other people are confronting a technology that can perform aspects of their jobs. When McKinsey suggests that 30% of hours worked globally could be automated by 2030, the anxiety isn't paranoia—it's a rational response to a transforming economy.
Privacy and surveillance concerns add another layer of unease. AI systems train on vast datasets harvested from human activity, often without meaningful consent. The monitoring capabilities they enable make previous surveillance technologies look primitive by comparison. These systems can identify patterns in behavior that even the individuals being monitored might not recognize in themselves.
Finally, there are profound questions about autonomy and control. As AI systems make more consequential decisions—from credit approvals to medical treatments—the accountability mechanisms haven't kept pace. Who bears responsibility when autonomous systems cause harm? How do we ensure these systems align with human values? These questions aren't merely theoretical; they're increasingly practical as AI applications expand.
All these concerns warrant serious attention. But I believe there's something deeper driving our collective anxiety, something that explains the peculiar intensity of feeling that AI provokes.
The Existential Anxiety
What does it mean to be human in the age of artificial intelligence? This question haunts our discourse about AI, though we rarely state it so directly.
Throughout history, we've defined human uniqueness in contrast to our technologies. When machines surpassed us in physical strength, we identified our distinctiveness in cognitive abilities. When calculators outperformed us in computation, we pointed to creativity and emotional intelligence as uniquely human domains.
Now, as AI systems generate poetry, paint portraits, compose music, write essays, and engage in sophisticated philosophical conversations, they encroach upon what we thought were the final bastions of human uniqueness. This encroachment creates profound cognitive dissonance. If machines can do what we thought only humans could do—not just calculate but create, not just process but seemingly understand—what makes us special?
I felt this dissonance acutely when I first encountered an AI-generated essay on a topic in my field. It wasn't perfect, but it was far better than it should have been. It referenced relevant sources, made nuanced arguments, and even adopted something like a personal voice. For a moment, I felt a strange vertigo—if a machine could approximate this uniquely human form of expression, what was left for me?
This reaction wasn't rational, exactly. I know that the AI didn't understand what it was writing in any meaningful sense. It doesn't have experiences or beliefs or a perspective on the world. But something about the simulation was unsettling nonetheless.
Part of our unease stems from what we might call the "simulation problem"—the difficulty in distinguishing between genuine understanding and its convincing facsimile. When an AI system generates text that appears thoughtful, empathetic, or creative, the line between authentic human intelligence and its mechanical approximation blurs.
This resembles philosopher John Searle's famous "Chinese Room" thought experiment, which questions whether syntactic manipulation (following rules to arrange symbols) without semantic understanding constitutes genuine intelligence. The disturbing implication is that if we can't reliably distinguish between human and artificial outputs, our assessment of human exceptionality becomes problematic.
But what if we're making a category error? What if our discomfort stems not from AI becoming too much like us, but from our misunderstanding of what it is?
Floridi's Insight: Agency Without Intelligence
This is where the work of Luciano Floridi offers valuable insight. Floridi, a philosopher who directs the Digital Ethics Center at Yale University, proposes a fundamental reconceptualization of artificial intelligence. In his 2024 paper, "Artificial Intelligence as a New Form of Agency (not Intelligence) and the Multiple Realisability of Agency Thesis," Floridi articulates a pivotal choice in how we interpret AI systems:
"When interpreting Artificial Intelligence (AI) systems, we face a clear choice: either to expand our current conception of intelligence to include artificial forms of it (the Artificial Realisability of Intelligence or ARI thesis), or to expand our understanding of agency to encompass multiple forms, including artificial ones that do not require cognition, intelligence, intention, or mental states (the Multiple Realisability of Agency or MRA thesis)."
Floridi argues persuasively for the MRA thesis—that what we call "AI" is better understood as a new form of agency rather than a new form of intelligence. He writes:
"AI is better understood as a new form of agency without intelligence. By employing the Method of Abstraction, this article provides a comparative analysis of various forms of agency—natural, biological, animal social, artefactual, human, and social—to identify the defining characteristics of AI as a novel kind of agency."
This distinction is crucial. By recognizing AI as agency without intelligence, we can better understand both its capabilities and its limitations.
What distinguishes agency from intelligence in Floridi's framework? Agency, at its core, involves three essential criteria:
Interactivity - the capacity to engage with an environment through mutual influence
Autonomy - the ability to initiate state changes independently of direct external causation
Adaptability - the capacity to modify behavior based on input
AI systems clearly demonstrate these properties. They interact with data and users, operate with some independence, and adapt their outputs based on inputs. But this doesn't make them intelligent in the human sense.
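To make the bar concrete, here is a minimal Python sketch of these three criteria. The interface and the thermostat example are my own illustrative choices, not Floridi's formalism:

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Floridi's three criteria for agency, rendered as a minimal
    interface. (Class and method names are illustrative choices,
    not Floridi's own notation.)"""

    @abstractmethod
    def interact(self, environment: float) -> None:
        """Interactivity: mutual influence with an environment."""

    @abstractmethod
    def act(self) -> None:
        """Autonomy: initiate a state change without a direct
        external cause at that moment."""

    @abstractmethod
    def adapt(self, feedback: float) -> None:
        """Adaptability: modify future behavior based on input."""

class Thermostat(Agent):
    """A deliberately humble agent that meets all three criteria
    while clearly possessing no intelligence at all."""

    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint
        self.room_temp = setpoint
        self.heater_on = False

    def interact(self, environment: float) -> None:
        # It senses the room, and (via the heater) changes the room.
        self.room_temp = environment

    def act(self) -> None:
        # It flips its own state; no one throws the switch for it.
        self.heater_on = self.room_temp < self.setpoint

    def adapt(self, feedback: float) -> None:
        # A nudge to the setpoint changes all future behavior.
        self.setpoint += feedback

t = Thermostat(setpoint=20.0)
t.interact(18.5)
t.act()
print(t.heater_on)  # True: interactive, autonomous, adaptive, unintelligent
```

The point of the toy is that the threshold for agency is low and says nothing about minds; what distinguishes modern AI systems is how far this mindless agency scales.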
As Floridi explains in "AI as agency without intelligence: on ChatGPT, large language models, and other generative models":
"I have previously argued that this focus on intelligence—if AI is not intelligent, why not; and if it is, what kind of intelligence it is or could become—misunderstands AI's nature, scope, and potential. Instead, a more meaningful approach is interpreting AI as a new form of Agency without Intelligence (hereafter Artificial Agency or AA)."
This reframing helps us understand why AI can seem so intelligent while simultaneously missing something essential about human intelligence.
The Essence of Artificial Agency
What makes artificial agency distinct from human intelligence? According to Floridi:
"Artificial Agency (AA) represents a novel form of agency emerging from the interplay of programmed objectives and learned behaviours. At its core, AA is a computational, goal-driven form of agency defined by human purposes."
Unlike human intelligence, which involves consciousness, intentionality, and moral reasoning, artificial agency operates through:
"The data-driven adaptability of artificial agents emerges through statistical learning and comprehensive pattern recognition across diverse domains."
This distinction explains why AI can appear intelligent while lacking fundamental aspects of human intelligence:
"Unlike biological agents, they lack genuine intentionality and evolutionary development but compensate with rapid, domain-specific, adaptation capabilities... their lack of consciousness, intelligence, mental states and their inability to transcend predefined objectives through self-determination distinguish them from human agency, even as they potentially exceed specific human capabilities."
Large language models like ChatGPT provide a perfect example of agency without intelligence. They demonstrate remarkable interactivity by engaging with textual inputs and producing relevant outputs. They show autonomy by generating novel text not explicitly programmed. And they adapt their responses based on conversational context.
Yet despite these capabilities, they lack the hallmarks of genuine intelligence. They have no understanding of the text they generate. They hold no beliefs about the world. They feel no emotions. They make no ethical judgments. They cannot formulate their own goals or purposes.
What they do possess is an extraordinary capacity for pattern recognition and statistical inference, trained on vast datasets of human-written text. Their seeming intelligence emerges not from consciousness or understanding but from sophisticated mathematical models that capture patterns in human language use.
This helps explain their peculiar failure modes. When a language model confidently fabricates a citation or invents historical facts, it's not lying or hallucinating in any human sense—it's generating text that statistically resembles true statements without any conception of truth. It has agency (it can act) without intelligence (it doesn't understand).
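A deliberately toy sketch makes this mechanism visible. The transition table and the "Freedonia" example below are invented for illustration (a real model learns billions of parameters rather than a lookup table), but the logic is the same: sample whatever is statistically likely next, with no truth check anywhere in the loop.

```python
import random

# A toy "language model": conditional next-word probabilities learned
# purely from co-occurrence statistics. (Hypothetical numbers; a real
# LLM estimates these with a neural network over vast corpora.)
NEXT_WORD_PROBS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Freedonia": 0.5},
    ("of", "France"): {"is": 1.0},
    ("of", "Freedonia"): {"is": 1.0},
    ("France", "is"): {"Paris.": 1.0},
    ("Freedonia", "is"): {"Paris.": 0.5, "Fredville.": 0.5},
}

def generate(prompt: list[str], max_words: int = 4) -> list[str]:
    """Extend the prompt one word at a time by sampling from the
    learned distribution. Nothing here asks whether the output is
    true; the only question is what word is statistically likely."""
    words = list(prompt)
    for _ in range(max_words):
        context = tuple(words[-2:])
        dist = NEXT_WORD_PROBS.get(context)
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate(["the", "capital"])))
# Possible output: "the capital of Freedonia is Fredville."
# Fluent, confident, and fabricated: agency without understanding.
```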
Figure 1. Floridi’s Levels of Agency (Potkalitsky via Claude).
From Reasoning to Intelligence
A crucial distinction in Floridi's framework is between reasoning and intelligence. AI systems can engage in reasoning—following statistical patterns to reach conclusions—without possessing intelligence.
As Terry Underwood explains in his commentary on Floridi's work:
"In this 2024 paper Floridi distinguishes between natural and artificial agents as follows. Unlike natural agents, artificial agents can modify their behavior under the direction of a specific goal. Unlike biological agents, they lack intentionality, evolutionary development, and are incapable of formulating their own goal, but they have rapid, domain-specific, adaptation capabilities."
The key limitation is that AI cannot transcend its programming to generate autonomous purposes or engage in ethical reasoning:
"While artefactual agents are designed to follow programmed objectives, their interactions often produce emergent behaviours not explicitly anticipated by their designers. These behaviours arise from the complexity of their interactions and the adaptability of their learning processes."
For Floridi, intelligence isn't merely about processing power or pattern recognition—it's about consciousness, intention, and ethics. AI, as currently constructed, lacks these dimensions.
Why This Taxonomy Matters
Floridi's taxonomy of agency types provides more than philosophical clarity—it offers a framework for resolving our existential anxiety about AI. By distinguishing between different forms of agency and separating agency from intelligence, his work helps us understand precisely what AI can and cannot do.
When we mistake AI's artificial agency for a limited form of human intelligence, we naturally feel threatened at an identity level. But when we recognize AI as a categorically different form of agency—with its own distinct capabilities and limitations—we can appreciate its remarkable abilities without fearing that it threatens our human uniqueness.
As Floridi explains:
"Reconceptualising AI as Artificial Agency avoids biological and anthropomorphic fallacies, improves our understanding of AI's distinct features, and provides a stronger foundation for addressing the challenges and opportunities posed by AI technologies, as well as their future development and societal impact."
This taxonomic approach allows us to see AI as complementary to human intelligence rather than an inferior version of it:
"The future of Artificial Agency lies not in attempting to transcend its fundamental nature but in optimising its unique characteristics for beneficial applications."
In my exploration of AI agency and Heidegger's tool-being, I argued that AI remains fundamentally a tool even as it becomes increasingly sophisticated. Floridi's taxonomy provides philosophical grounding for this perspective while offering a more nuanced understanding of how different forms of agency operate according to different principles.
The anxiety around AI isn't just about what it can do, but about what we fear it means about us. Floridi's framework offers conceptual clarity: AI's artificial agency, however sophisticated, belongs to a categorically different type than human agency. This distinction helps us develop a more productive relationship with AI technologies—neither overestimating their capabilities as "almost human" nor dismissing their remarkable achievements as "just algorithms."
Does this mean we can dismiss practical concerns about AI's impact on jobs, privacy, or autonomy? Absolutely not. These remain vital issues that demand serious attention. But by clarifying the categorical distinctions between different types of agency, we can address these concerns from a more grounded perspective—one that neither overestimates AI's capabilities nor undervalues human uniqueness.
In the end, perhaps what we fear isn't that AI will become too much like us, but that we'll lose sight of what distinguishes human agency from other forms. By understanding AI through Floridi's taxonomy of agency types, we can maintain that crucial distinction while still harnessing the remarkable capabilities these technologies offer.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.