Discussion about this post

Rob Nelson:

Nick, you know I'm quite skeptical of analogizing what LLMs do with words to how humans use words. The superficial resemblance between computational neural networks and human brains, along with the astonishing leap forward in transformer-based AI models' capacity to emulate conversation, has scrambled our understanding of how these models actually work.

I'm with Terry Underwood, and others like John Warner, who follow traditions going back to Emerson and Montaigne in focusing on the process of writing, not its outputs. There are plenty of interesting debates about how human language processing works, but I am convinced that humans do not use word vectors to speak or write, while vectors are fundamental to how LLMs produce words. That distinction seems important.
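[To make the contrast concrete: a "word vector" is just a list of numbers standing in for a word, and similarity between words is measured geometrically. The sketch below uses made-up four-dimensional vectors, not embeddings from any real model; it only illustrates the kind of arithmetic Rob is pointing at.]

import numpy as np

# Toy "word vectors": illustrative made-up numbers, not values
# taken from any actual LLM's embedding table.
vectors = {
    "king":  np.array([0.9, 0.1, 0.8, 0.2]),
    "queen": np.array([0.9, 0.9, 0.8, 0.2]),
    "apple": np.array([0.1, 0.2, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Geometric closeness of two vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# For a model, word "meaning" lives in relations like these:
print(cosine_similarity(vectors["king"], vectors["queen"]))  # higher
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower

[Nothing in this picture involves intentions, audiences, or drafting; that gap between vector arithmetic and the human writing process is the distinction at issue.]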

Gavin Lamb:

Thanks for this discussion and happy to discover your newsletter as I struggle with how to approach LLMs in my teaching! Just a thought to throw in, in my field’s jargon as a linguist anthropologist, what you’re calling referentiality in the sense of word-world links is a sub-species of semiosis/referentiality: indexicality (or a sign linked to its object through a relationship of temporal/spatial contiguity like smoke-fire, or deictic words. I mention it because from this perspective, in asking what language is, the grounding problem is flipped, so the emergence of language in humans (and potentially AI) is more an “ungrounding” problem: How does a toddler, for example, go from using iconic/indexical signs (or signs grounded in the here and now) to ungrounded signs (words or ‘symbols’ in semiotic jargon)? The challenge is to have an agent with enough bandwidth to hold entire networks of formerly grounded sign-object relations in its ungrounded ‘mind’ (whether human or nonhuman). That is why very few (that we know of) animals can break into full-blown ungrounded symbolic communication like humans although grounded semiosis is of course rampant across species. Sorry this was long-winded! Anyways, looking forward to following the newsletter!
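[A toy sketch of the grounded/ungrounded distinction, in my own framing rather than Gavin's: every name and data value below is invented for illustration. Indexical signs here resolve only against co-present objects; symbolic signs resolve against other signs, so the whole network can be carried around without anything present.]

# Indexical signs: meaningful only while the object is contiguous
# in space/time (smoke means fire because fire is actually nearby).
here_and_now = {
    "smoke": "fire nearby",
    "this":  "object being pointed at",
}

# Symbolic signs: defined by relations to other signs; no object
# needs to be co-present for the network to carry meaning.
symbol_network = {
    "fire":  ["hot", "smoke", "danger"],
    "smoke": ["fire", "grey", "smell"],
    "hot":   ["fire", "temperature"],
}

def interpret(sign: str) -> str:
    """Resolve a sign indexically if its object is co-present,
    otherwise through the ungrounded symbol network."""
    if sign in here_and_now:
        return f"{sign} -> {here_and_now[sign]} (grounded: object co-present)"
    related = symbol_network.get(sign, [])
    return f"{sign} -> {related} (ungrounded: meaning via other signs)"

print(interpret("smoke"))  # grounded reading wins while the object is present
print(interpret("fire"))   # resolved purely within the symbol network

[The "ungrounding" move is the shift from relying on the first dictionary to holding the second one entirely in memory.]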
