19 Comments
May 6 · edited May 15 · Liked by Nick Potkalitsky

Thanks for the shoutout, Nick!

I feel that the current iteration of ChatGPT's "memory" feature isn't immediately useful. By default, it mainly retains dry "facts" about you (what you do, your name, your kids, etc.). You can of course force-feed it some information proactively and tell it to remember a bunch of bullet points or any other details you explicitly outline.

But I feel like what would make "memory" live up to the promise of a personal assistant is if ChatGPT could start picking up more subtle cues from interactions. So if, for example, I ask for 10 ideas in a brainstorming session and then tell ChatGPT to go ahead with one of them, I'd like ChatGPT to draw a soft conclusion from this (what made that idea different from the other nine, and what does that say about me and my preferences?) and commit that interpretation to "memory" (e.g. "Daniel prefers quick, actionable ideas instead of long-term projects.").
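A minimal sketch of what that inference-and-commit loop might look like. Everything here (the MemoryStore class, the infer_preference helper) is a hypothetical illustration, not OpenAI's actual memory API:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical long-term memory holding inferred preferences, not just dry facts."""
    inferences: list[str] = field(default_factory=list)

    def commit(self, interpretation: str) -> None:
        self.inferences.append(interpretation)

def infer_preference(offered: list[str], chosen: str) -> str:
    """Draw a soft conclusion from a choice: what set the chosen idea apart?"""
    rejected = [idea for idea in offered if idea != chosen]
    # A real assistant would use the model itself to characterize the difference;
    # here we simply record the raw signal.
    return (f"User chose '{chosen}' over {len(rejected)} alternatives; "
            f"weigh similar ideas higher in future brainstorming.")

memory = MemoryStore()
ideas = ["quick actionable tweak", "multi-year research project", "weekly newsletter"]
memory.commit(infer_preference(ideas, "quick actionable tweak"))
print(memory.inferences)
```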

That way, "memory" wouldn't just be a glorified remix of "Custom Instructions" but something that feels more organic. Maybe that's coming at some stage. We'll have to see!

author

I sense you are right, Daniel. "Memory" right now feels like a complicating, non-directed factor that can potentially cause problems when prompting. At least we know how GPT behaves without this function, but now we have to contend with this new memory/attention filter and learn to prompt around it. I have the same issue with building GPTs inside the ChatGPT interface: you always seem to pick up some added unpredictability. Just give me straight-up GPT and let me prompt for the functions I want. That is how I feel right now.


Yeah that's another relevant point for sure. And if you factor in that "Custom Instructions" and "memory" can coexist, it obscures things even further when prompting.

May 6 · Liked by Nick Potkalitsky

Good post and an important topic; thanks for raising these issues, people do need to be aware. When ChatGPT came out, hallucinations and general inaccuracy seemed to be its Achilles heal. Now it turns out there's one on the other foot as well, increasingly in discussions: privacy and security appear to be as big an issue, or even bigger. Unless they can really be solved, and I mean even in the more expensive versions, to the satisfaction of IT departments and the general satisfaction of users, ChatGPT and its rivals will simply be toys forever.

May 6 · Liked by Nick Potkalitsky

And apparently I didn't notice that my iPhone doesn't know the difference between heal and heel. Well, it gives me the opportunity to write, instead of dictate: OpenAI needs to heal the problems with its second Achilles' heel.

author

Funny!!! Language, math, and code all continue to resist systematization. In some sense AI will always be a toy in the face of language's untraceable complexities. Or so I tell myself! 😊

founding
May 6 · Liked by Nick Potkalitsky

Thanks as always, Nick. Yes, some interesting things to consider, and I'm interested to see what others think, as my (probably naive) default is to take the benefits of such developments without always thinking about the downsides!

author

Thanks, Nick. It is good to hear from you. I hadn't thought about the security stuff; I was sort of geeking out on the cognition stuff until someone sent me a link about the more serious concerns. Good to stay connected. But as you suggest, sometimes you have to draw a hard line around yourself. We should connect again sometime soon now that school is winding down in the States. Cheers!

founding

Sounds great, Nick. Let me know when might work via nickburnett@me.com

May 7 · Liked by Nick Potkalitsky

Nick, this is a great post and I’m honored that my podcast episode with Professor Kambhampati was useful for you. I agree with Daniel above that ChatGPT’s current memory function doesn’t feel obviously useful, and at the same time can see systems like Shortwave’s (which may change, given time and model improvements) being an interesting version of this: https://www.shortwave.com/blog/deep-dive-into-worlds-smartest-email-ai/

Depending on how far you’re willing to stretch the idea of memory as an organic process, external data storage could deliver something interesting. While it isn’t exactly the same, you might also find Blum’s Conscious Turing Machine of interest.
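To illustrate the external-storage idea, here is a toy sketch. The hand-made vectors and cosine function below stand in for what a real system like Shortwave's would get from a learned embedding model and a vector database:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy external memory: (embedding, text) pairs. The vectors are hand-made
# placeholders for real embeddings.
external_memory = [
    ([0.9, 0.1, 0.0], "Daniel prefers quick, actionable ideas."),
    ([0.1, 0.8, 0.2], "Daniel's newsletter publishes on Tuesdays."),
]

def recall(query_vec: list[float], k: int = 1) -> list[str]:
    """Retrieve the k stored memories most similar to the query."""
    ranked = sorted(external_memory,
                    key=lambda m: cosine(query_vec, m[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

print(recall([0.85, 0.2, 0.05]))  # -> the preference memory, not the schedule fact
```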

To your point on privacy, it is a familiar and worrying one. Ben Thompson would frequently say that the value we get out of something like Google is far greater than the nuggets of personal data we sacrifice, and I wonder if he'd think the same in response to your question. It feels subtly reminiscent of the lottery paradox in epistemology, though I don't think the analogy holds up *that* well.

author

Thanks, Daniel!!! I really appreciate you extending the conversation.

I personally am very interested in seeing this function, or something like it, converted into an asset with the right controls in place.

The more you work with these systems, the more you want them to be able to direct their attention to the things that really matter to you as a user.

To me, that is one of the really interesting meta-processes that accompany AI use.

The acceleration of the "fine-tuning," dare I say, of the user's zeroing in on the most pressing concerns in a process for closer analysis and interrogation.

In the long run, that would be one of my hopes for AI as an educational tool, but right now, our students (at least in the secondary environment) don't seem ready for these applications.

And part of that story is just the nature of the current systems.

It feels like we are bringing a hammer to a situation that requires tweezers.

Can't wait to follow up on your other references. Be well.

Nick

May 6 · edited May 6 · Liked by Nick Potkalitsky

I like these sorts of AI/human insights the most. I'm always amazed at how much more there is to learn about how we humans actually work in order to better understand AI.

author

You have probably noticed that this is one of my strongest interests in AI too. Reflective mirror!!!


AI makes a great literary foil to explore AI. That's the crux of Paradox, my first novel.


Great article as always, Nick!

I like Daniel's suggestion that, to live up to the promise of a personal assistant, ChatGPT needs to pick up on subtle cues. I think that sort of processing requires more than a memory store, though; I think it would require selective attention too. It needs the ability to attend to the important things while ignoring the irrelevant ones. This seems possible in theory. I assume this sort of model would need to be trained not just on general data, but also on specific responses and feedback from the users it aims to assist?
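As a rough sketch of that selective-attention idea: the relevance scores below are stand-ins for what a scoring model, ideally tuned on the individual user's feedback rather than general data alone, would produce:

```python
import math

def softmax(scores: list[float]) -> list[float]:
    """Turn raw relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative stored memories and hand-picked relevance scores for the
# current request; a trained system would compute these, not hard-code them.
memories = ["prefers actionable ideas", "has two kids", "works in education"]
relevance = [2.4, 0.1, 1.1]

weights = softmax(relevance)
attended = max(zip(weights, memories))  # attend to the most relevant memory
ignored = [m for w, m in zip(weights, memories) if w < 0.1]
print(f"attend to: {attended[1]!r}; ignore: {ignored}")
```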

author

Thanks, Suzi. It seems like we are still in the era of trade-offs. Certainly one could amp up these attention mechanisms, but would that compromise other functionalities? As I said in another comment, I'd rather have raw generativity at this point that I get to direct as a user. So much is already invisible... this memory stuff is just too cloudy right now. But who knows? ChatGPT 5 might already have a superior version of this functionality embedded.

May 6 · Liked by Nick Potkalitsky

Thanks for the mention, Nick!

May 6 · Liked by Nick Potkalitsky

Got to admit, this is one of my favorite pieces of yours!

author

Thanks, Michael. This one came out very smoothly. I love the mix of theory and practicality, a sort of archetype of a new method! Thanks for the support.
