Discussion about this post

Michael Spencer

It's not that they don't know; it's that they created this. Reasoning models have higher hallucination rates, and many things are geared toward artificially increasing the demand for compute to extend the bull market. Nothing is based on trust, alignment, or safety. Companies aren't even liable for the deaths or human harms they are causing, and ChatGPT is a pretty good example. The deskilling risk for academics, including teachers and professors, is also real.

Saty Chary

Hi Nick, nice article!

And, Amen. Here's what I posted on LinkedIn, as a comment:

****

Won't be easy, but it's good to identify the source of the issue.

The new problem would be this: deciding 'when to hold and when to fold'. That's inherently unsolvable as long as it all continues to be based on tokens and embeddings. In other words, lacking ground truth, the AI cannot know what it knows and what it doesn't, so it can't decide when not to hallucinate. Users adding a certainty threshold to their prompt [there's no good way to come up with a useful value other than 1.0!], the model outputting results along with an explicit certainty, or formulating a behavior-calibrated response - none of these address the core issue that it's ALL based on training data.

****
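To make the "certainty threshold" idea above concrete, here is a minimal sketch. It assumes a hypothetical `ask_model` helper that returns an answer together with a self-reported confidence; neither that helper nor the 0.9 threshold comes from the comment, they are illustrative only. The limitation the comment points to still applies: the confidence score is generated from the same training data as the answer, so filtering on it adds no ground truth.

```python
# Hypothetical sketch of filtering model answers by a self-reported
# confidence score. `ask_model` is a stand-in, not a real API.

def ask_model(question: str) -> tuple[str, float]:
    """Stand-in for an LLM call returning (answer, self_reported_confidence).
    Both values come from the same trained model, so the confidence is itself
    a prediction, not a measurement against ground truth."""
    return "Paris is the capital of France.", 0.92  # canned example output

def answer_with_threshold(question: str, threshold: float = 0.9) -> str:
    answer, confidence = ask_model(question)
    if confidence >= threshold:
        return answer
    return "I don't know."  # abstain when self-reported confidence is low

if __name__ == "__main__":
    print(answer_with_threshold("What is the capital of France?"))
```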

