Discussion about this post

Jordi Linares

The Apple paper is a complete shame. Its title is deliberately biased, a consequence of Apple's failure at WWDC to present anything interesting on AI. It does not actually prove anything interesting about 'thinking' in LLMs or LRMs. And what is worse, even among those of you discussing the paper: LLMs are not AI, they are part of AI. This simplification, where we 'prove' something about LLMs (or pretend to) and everybody makes the incredible extrapolation to all of AI, its models, techniques, and so on. Shame.

Adam

Hmmmm... "the ability to experience the AI's agency without being seduced by the illusion of its intelligence" - I actually tend to think of it the other way round! Treating AI as intelligent is not as much of a risky thing to do as treating AI as if it has true agency. For me, current AI clearly shows "intelligence" - just not the same as human intelligence.

Agency for me has to do with intrinsic motivations, perspectives, beliefs, goals, experiences - which current LLM-powered AI doesn't truly have. The fact that we currently have to prompt LLMs to give them their "identity" (e.g. "you are an experienced strategic consultant") shows how they lack true agency. They can act as "agents" in the same sense that we can set up a traditional computer program to act as an agent. In fact, current AI agents are called agents because of their ability to use tools and interface with other digital services. The fact that they can take actions does not necessarily give them true "agency" (any computer program can take actions - even a basic thermostat can take actions).

"Intelligence" for me is more about the capacity to reason, interpret, act, synthesise, create, take decisions, etc. LLMs can do all those things, even if they sometimes fail (as do humans).

Even the tricky word "understanding" I don't really have a problem with in the context of current AI. If I give a chatbot an instruction and it acts as if it has understood it, then I don't have an issue with saying that it has "understood" my instruction. I certainly don't feel I have to ascribe any sort of consciousness, intentionality or sentience to use words like "understood" with AI. When I say "understood", I mean that it has recognised and interpreted the information (instruction) I have given it and used that information appropriately to influence its outputs. It has extracted meaning from my prompt. I feel that "understood" is a reasonable shorthand for this process, without speculating too much about what sort of "world models" or other internal representations the AI may or may not have constructed.

If we are really scared about conflating AI with human thought, then we could invent entirely new words. We could resolve to never say AI has "understood" - only that it has "grokked" the information, or something like that. We could avoid saying the model is "thinking" and instead stick to "processing".

As for how we should interact with chatbots - since they are trained to work best with natural human conversation styles, users will usually get the best results by interacting in natural human language. Just as long as we remember that they are not human, and they don't "think" in the same way we do. It's important for us to learn where AI "thinking" is flawed, just as we need to learn where our fellow humans' thinking is flawed.

Thanks for your thought-provoking piece, Nick, as always!

