Is Language AI's Ultimate Limit?
A response to Dr. Lance Cummings' question about AI's language limitations, pivoting to questions of embodied meaning
In a thought-provoking LinkedIn post, Dr. Lance Cummings recently raised an intriguing question: “Are AI systems fundamentally limited by language itself?” This query arrives at a crucial moment in AI discourse, offering a refreshing perspective beyond the scaling-versus-optimization debate that dominates tech media. To understand the depth of this question, we need to revisit a fundamental concept in AI and cognitive science: the symbol grounding problem.
The Grounding Problem in AI
The symbol grounding problem, first articulated by philosopher Stevan Harnad, asks how symbols used by AI systems acquire meaning. In human cognition, words and symbols are "grounded" in sensory experiences, emotions, and physical interactions with the world. A child learns what "hot" means not just by hearing the word, but by touching something warm or being warned away from a stove. This direct connection between symbols and real-world experiences provides the foundation for genuine understanding.
Frank O'Hara captures this uniquely human capacity for grounded meaning:
Oh! kangaroos, sequins, chocolate sodas!
You really are beautiful! Pearls,
harmonicas, jujubes, aspirins! all
the stuff they've always talked about

still makes a poem a surprise!
These things are with us every day
even on beachheads and biers. They
do have meaning. They're strong as rocks.
O'Hara's catalogue of objects demonstrates how human meaning transcends mere symbolic representation. Each item evokes not just a definition but a constellation of sensory memories and lived experiences. When he declares these things "strong as rocks," he argues for meaning anchored in the tangible world.
Modern AI systems, particularly large language models (LLMs), process language through statistical patterns and correlations in vast amounts of text. They can analyze O'Hara's poem for its patterns and themes, but they cannot access the embodied experiences that make kangaroos, sequins, and chocolate sodas "beautiful" in the way O'Hara means. They operate in what philosophers call an "ungrounded symbolic space."
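To see what "ungrounded" means in practice, consider a minimal sketch in Python. The four-dimensional vectors below are invented for illustration (real models learn thousands of dimensions from text statistics), but the point holds: similarity between words is computed purely from the geometry of numbers, with nothing connecting "hot" to the feeling of heat.

```python
import numpy as np

# Toy 4-dimensional word vectors, invented for illustration.
# In a real LLM these would be learned from co-occurrence
# statistics over billions of tokens.
vectors = {
    "hot":   np.array([0.9, 0.1, 0.3, 0.7]),
    "warm":  np.array([0.8, 0.2, 0.4, 0.6]),
    "stove": np.array([0.7, 0.1, 0.9, 0.5]),
    "pearl": np.array([0.1, 0.9, 0.2, 0.1]),
}

def cosine(a, b):
    """Similarity is just the angle between two lists of numbers."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vectors["hot"], vectors["warm"]))   # high: similar contexts
print(cosine(vectors["hot"], vectors["pearl"]))  # lower: dissimilar contexts

# Note what is absent: no temperature, no touch, no parent's warning.
# "hot" sits near "warm" only because the numbers say so.
```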
Symbolism vs. Connectionism: Two Approaches to Understanding
The symbolic approach, rooted in classical AI, views intelligence as symbol manipulation according to explicit rules. This perspective naturally emphasizes referentiality – the idea that symbols must refer to specific things in the world. While symbolists don't necessarily require direct grounding, their framework inherently raises questions about how symbols acquire meaning.
Connectionism, exemplified by modern neural networks, takes a different approach. It focuses on emergent patterns from interconnected networks of simple units, similar to biological neurons. Connectionists argue that meaning can emerge from the statistical relationships between patterns, potentially sidestepping the traditional grounding problem. However, this raises new questions about the nature of understanding without explicit reference.
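A toy distributional-semantics sketch (a standard textbook illustration, not any particular model's training procedure) shows how such "meaning from statistics" can emerge: characterize each word entirely by the company it keeps.

```python
import numpy as np
from itertools import combinations

# A four-sentence corpus; real systems use billions of tokens.
sentences = [
    "the soda is sweet".split(),
    "the chocolate is sweet".split(),
    "the stove is hot".split(),
    "the fire is hot".split(),
]

# Count how often each pair of words shares a sentence.
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(set(s), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

# Each row is now a crude "meaning" vector: a word is characterized
# entirely by the company it keeps, never by what it refers to.
def similarity(w1, w2):
    a, b = counts[index[w1]], counts[index[w2]]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity("soda", "chocolate"))  # high: shared contexts
print(similarity("soda", "stove"))      # lower: different contexts
```

Notice that "soda" ends up near "chocolate" without the system ever tasting either; whether this counts as understanding is exactly what the grounding debate contests.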
Modern Perspectives and Future Implications
Recent work by Professor Raphaël Millière on vector grounding offers an interesting bridge between these approaches, suggesting ways that distributed representations in neural networks might achieve a form of grounding through their geometric relationships in high-dimensional space. Meanwhile, veterans of classical AI continue to emphasize the importance of referentiality, arguing that true intelligence requires explicit connections between symbols and their real-world referents.
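The classic word-analogy demonstration from the word2vec era (sketched below with invented three-dimensional vectors, not Millière's own method or real embeddings) gives a feel for what "geometric relationships in high-dimensional space" means: consistent offsets between vectors encode consistent relations between words.

```python
import numpy as np

# Invented 3-dimensional vectors arranged so the "gender" offset
# is consistent; real embeddings learn such regularities across
# hundreds of dimensions.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.3, 0.8, 0.5]),
    "woman": np.array([0.3, 0.2, 0.5]),
}

# The classic parallelogram: king - man + woman lands near queen.
target = emb["king"] - emb["man"] + emb["woman"]

def nearest(vec):
    return min(emb, key=lambda w: np.linalg.norm(emb[w] - vec))

print(nearest(target))  # -> "queen"
```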
The emergence of sophisticated AI language models has prompted educators like Terry Underwood to advocate for somatic writing as a vital pedagogical response. In an era where AI can generate endless variations of text based on existing patterns, Underwood's approach asks students to anchor their writing in the irreplaceable substrate of bodily experience. When students write from somatic experience, they're not just producing text; they're documenting the very process of meaning creation that AI systems cannot replicate.
Looking Forward
Current AI systems, whether symbolic or connectionist, demonstrate remarkable capabilities in language processing yet face a fundamental challenge: the gap between processing and meaning, between syntax and semantics.
While some researchers envision a hybrid approach combining symbolic reasoning with connectionist pattern-recognition, others suggest we need an entirely new paradigm. But perhaps the most profound question isn't whether AI can understand like we do, but whether we humans can ever escape what philosophers since Kant have called correlationism – our fundamental inability to access reality except through the mediating structures of our own minds.
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
Nick, you know I'm quite skeptical of analogizing what LLMs do with words to how humans use words. The superficial resemblance between computational neural networks and human brains, along with the astonishing leap forward in transformer-based AI models' capacity to emulate conversation, has scrambled our understanding of how they actually work.
I'm with Terry Underwood, and others like John Warner, who follow the traditions going back to Emerson and Montaigne of focusing on the process of writing, not its outputs. There are a lot of interesting debates about how language processing works, but I am convinced that humans do not use word vectors to speak or write and that vectors are fundamental to how LLMs produce words. That distinction seems important.
Thanks for this discussion and happy to discover your newsletter as I struggle with how to approach LLMs in my teaching! Just a thought to throw in, in my field's jargon as a linguistic anthropologist: what you're calling referentiality in the sense of word-world links is a sub-species of semiosis: indexicality (a sign linked to its object through a relationship of temporal/spatial contiguity, like smoke-fire, or deictic words). I mention it because from this perspective, in asking what language is, the grounding problem is flipped, so the emergence of language in humans (and potentially AI) is more an “ungrounding” problem: How does a toddler, for example, go from using iconic/indexical signs (signs grounded in the here and now) to ungrounded signs (words, or ‘symbols’ in semiotic jargon)? The challenge is to have an agent with enough bandwidth to hold entire networks of formerly grounded sign-object relations in its ungrounded ‘mind’ (whether human or nonhuman). That is why very few animals (that we know of) can break into full-blown ungrounded symbolic communication like humans, although grounded semiosis is of course rampant across species. Sorry this was long-winded! Anyways, looking forward to following the newsletter!