10 Comments

Nick, you know I'm quite skeptical of analogizing what LLMs do with words to how humans use words. The superficial resemblance between computational neural networks and human brains, along with the astonishing leap forward in transformer-based AI models' capacity to emulate conversation, has scrambled our understanding of how they actually work.

I'm with Terry Underwood, and others like John Warner, who follow the traditions going back to Emerson and Montaigne of focusing on the process of writing, not its outputs. There are a lot of interesting debates about how language processing works, but I am convinced that humans do not use word vectors to speak or write and that vectors are fundamental to how LLMs produce words. That distinction seems important.
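
For readers new to the term, here is a minimal, purely illustrative sketch of what "word vectors" are, using made-up numbers rather than any model's actual embeddings: words become points in a geometric space, and the model's machinery works on distances and directions in that space rather than on word-world links.

```python
# Toy illustration only (not any particular LLM's internals): words as vectors,
# with "relatedness" approximated by geometric closeness. The numbers are invented.
import numpy as np

toy_vectors = {
    "river":  np.array([0.9, 0.1, 0.0]),
    "stream": np.array([0.8, 0.2, 0.1]),
    "money":  np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "stream" sits much closer to "river" than to "money" in this toy space.
for word in ("river", "money"):
    print(word, round(cosine(toy_vectors["stream"], toy_vectors[word]), 3))
```

In a real model the vectors have hundreds or thousands of dimensions and are learned from text, but the basic move, treating closeness in vector space as a stand-in for relatedness, is the same, which is exactly the distinction being drawn here.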

Hear, hear. Brains can change their weighting mid-processing. Brains process bidirectionally and simultaneously. Brains are connected to complex sensory apparatuses in an ever-changing, evolving feedback loop. That is the real benchmark!

Thanks for this discussion, and happy to discover your newsletter as I struggle with how to approach LLMs in my teaching! Just a thought to throw in, in my field’s jargon as a linguistic anthropologist: what you’re calling referentiality in the sense of word-world links is a sub-species of semiosis/referentiality, namely indexicality (a sign linked to its object through a relationship of temporal/spatial contiguity, like smoke and fire, or deictic words). I mention it because, from this perspective, in asking what language is, the grounding problem is flipped, so the emergence of language in humans (and potentially AI) is more an “ungrounding” problem: how does a toddler, for example, go from using iconic/indexical signs (signs grounded in the here and now) to ungrounded signs (words, or ‘symbols’ in semiotic jargon)? The challenge is to have an agent with enough bandwidth to hold entire networks of formerly grounded sign-object relations in its ungrounded ‘mind’ (whether human or nonhuman). That is why very few animals (that we know of) can break into full-blown ungrounded symbolic communication like humans, although grounded semiosis is of course rampant across species. Sorry this was long-winded! Anyway, looking forward to following the newsletter!

I like this a lot, Gavin. This comment felt very good inside my brain. Always happy to get further refinement. I will have to give this some further thought. I like thinking about the "un-earthing" of language, from thing to sign to symbol. More soon!

Thank you - this arrived just in time for me to study what you've written and then make these arguments to administrators and colleagues in mid-January about how to pivot writing instruction in 2025 across our school district. Thank you for your work, Dr. Cumming's work, and the references within this piece. Thoroughly helpful!

I am glad you find this work helpful. I have been thinking about grounding since long before the arrival of generative AI. It is one of the most complicated and interesting problems in the study of language and epistemology.

Learning a lot here, so thankful I tripped across your material. Language is inherently an intermediary, a filter, but it sure would be fascinating if AI could develop symbolic reasoning along Professor Milliere’s path. The way that AI will ‘see’ and ‘sense’ language’s meanings may be very different from our own somatic context.

Milliere's approach is fascinating. If a system does not have true reference, how does referentiality arise? How does language ground itself? It sounds perfectly plausible, although somewhat circular to the realists out there. The talk is well worth your time. Check it out.

Comment deleted (Dec 19)

Love Ted Hughes. Sounds like we have a lot to talk about. I am glad someone like you is taking up this research direction. This is really where it is at in my opinion. The frontier beyond the present frontier.

Comment deleted (Dec 19)

Let's do that. Send me a DM here or on LinkedIn: https://www.linkedin.com/in/nick-potkalitsky-phd-0313ba126/
