Discussion about this post

Daniel Nest:

Thanks for the shoutout, Nick!

I feel that the current iteration of ChatGPT's "memory" feature isn't immediately useful. By default, it mainly retains dry "facts" about you (what you do, your name, your kids, etc.). You can of course force-feed it some information proactively and tell it to remember a bunch of bullet points or any other details you explicitly outline.

But I feel like what would make "memory" live up to the promise of a personal assistant is if ChatGPT could start picking up more subtle cues based on interactions. So if I e.g. ask for 10 ideas in a brainstorming session and then tell ChatGPT to go ahead with one of them, I'd like ChatGPT to draw a soft conclusion from this (what made the idea different from the other 9, and what does it say about me and my preferences) and commit that interpretation to "memory" (e.g. "Daniel prefers quick, actionable ideas instead of long-term projects.").

That way, "memory" wouldn't just be a glorified remix of "Custom Instructions" but something that feels more organic. Maybe that's coming at some stage. We'll have to see!
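[Ed.: A minimal sketch of the kind of inference Daniel describes, assuming a hypothetical `MemoryStore` and `infer_preference` helper — neither is a real ChatGPT API. A production system would presumably use the model itself to summarize why the chosen idea stood out, rather than this naive heuristic:]

```python
# Hypothetical sketch: infer a soft preference from which brainstormed
# idea the user picks, then persist that interpretation to "memory".
# MemoryStore and infer_preference are illustrative, not real APIs.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy stand-in for a persistent user-memory layer."""
    entries: list[str] = field(default_factory=list)

    def commit(self, interpretation: str) -> None:
        self.entries.append(interpretation)


def infer_preference(ideas: list[str], chosen: str) -> str:
    """Naive heuristic: note what the user picked versus what they rejected.

    A real assistant would compare the chosen idea against the rejected
    ones (likely via the model itself) and summarize the difference,
    e.g. "Daniel prefers quick, actionable ideas over long-term projects."
    """
    rejected = [idea for idea in ideas if idea != chosen]
    return (f"User picked '{chosen}' over {len(rejected)} alternatives; "
            f"tentatively prefers ideas like it.")


memory = MemoryStore()
ideas = ["Write a 12-month content plan", "Post one quick tip today"]
memory.commit(infer_preference(ideas, chosen="Post one quick tip today"))
print(memory.entries)
```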

Guy Wilson:

Good post and an important topic; thanks for raising these issues, people do need to be aware. When ChatGPT came out, hallucinations and general inaccuracy seemed to be its Achilles' heel. Now it turns out there's one on the other foot as well: increasingly, the discussion suggests that privacy and security are as big an issue, or even bigger. Unless those can truly be solved, even in the more expensive versions, to the satisfaction of IT departments and of users generally, ChatGPT and its rivals will simply be toys forever.
