5 Comments

Here is a great comment from Terry Underwood that he left in Substack Notes. I'm recording it here for posterity: I’d love to hear Mike and Alejandro discuss this issue. Great work on this piece, Nick; nice demonstration of phronesis in your decision not to go all prophet on the future of AI.

When I think “I know how this situation ends and this is what I’m going to do,” I’m reasoning not just from past training and experience but from immediate cognizing of what is likely to occur in the future based on the present. EVs, as AI, can accomplish this feat, but that’s it, and they are fifty grand a pop. LLMs cannot cognize the deadline of the present unless I tell them how I see things right now. And even then there is a time lag.

I know how this plays out and act on it. I’m going to eat this apple. Mike deals with this gap between AI and humans by pointing to this tacit knowledge of apples. AI cannot make these decisions and never will because AI simply is not human but a machine. It can’t eat apples. Weighing and describing the apple is AI. Alejandro uses tacit knowledge in his human existence, too, but that’s not his concern re: the AI gap. He’s looking explicitly at formal reasoning processes involving abstract symbols that follow unambiguous rules. Mike and Alejandro are apples and oranges. Mike takes a bite of the apple because he’s hungry. Alejandro carefully peels the orange, separates the slices, analyzes the chemical components—not to eat the orange, but to explain how the orange functions when it is eaten. Alejandro is into computational thinking.

You, and by extension Mike, are calling “tacit knowledge” what a philosopher might call “embedded and embodied understanding.” This kind of knowledge isn’t abstract or purely rational. It’s knowledge in action, of consequence in physical reality. A chef knows when dough feels right. A carpenter knows the behavior of different woods. A parent knows their child's cry.

AI judgments, in contrast, are essentially always abstract, even when AI discusses flies laying eggs in apples. AI can process patterns in training data and now process patterns in patterns in training data, but it can’t judge as humans can because it can’t suffer the consequences.

AI can’t write because it has no audience. It generates output that looks like text but isn’t the same.

AI has no skin in the game. Humans have had skin in the game for millions of years. Computer science hasn’t had a thing to do with designing the brain. In a non-trivial way our brains had a hand or two and some eyes, ears, etc. in their design.

This creates an inherent AI limitation. My judgments, however sophisticated or flawed they might appear, are always anchored in physical reality where actual consequences unfold. AI can analyze cooking temperatures and their effects on a potato, but it can never truly know what leather and ash taste like. AI can “discuss” emotional responses, but it can never feel fear or joy that might influence its, ahem, judgment.

This suggests that AI systems are better understood as tools for augmenting human judgment (reference materials, heuristics) rather than replacing it. AI can process information and identify patterns, but the final integration with reality, the real knowing that leads to action, that remains fundamentally human. And until AI grows a skin with neural loops linked to a natural pain and pleasure palace, for my money it will never do better than emulate human judgment. Just look at Mike and Alejandro. AI isn’t going to settle this. I’d need to listen in real time to these highly skilled and intuitive thinkers. Your post suggests that you would be an excellent moderator of the discussion, helping those of us lacking expertise in Toulmin analysis (a strategy for reasoning in situations where there is no absolute truth, no right or wrong) and in computational analysis (where there is a right answer).

Happy New Year, Nick!

2025 promises to be fascinating to observe, with o3 likely prompting most AI labs to play catch-up with test-time-compute "thinking" models of their own. We've already seen DeepSeek V3 (Deep Think mode) and Gemini Flash Thinking from Google, and I'm sure we'll see more in the months to come.

Fancy seeing you here.

Oh yeah, Nick and I go WAY back...to at least January 2024. That's practically decades! Happy 2025, Mark!

Such a great essay, Nick. I hope that 2025 is the year you start to get recognised even more outside the Education/AI crossover space for your thinking... it really is world class and deserves to be more widely read and acknowledged in the AI community at large.

Even amongst all of the deep technical and philosophical analysis within this essay, this simple sentence was my favourite...

"The goal isn't to replicate human judgment - which may be computationally impossible - but to create productive partnerships between computational thoroughness and human discernment."

To me, that's what almost every meaningful conversation about AI - be it technical, philosophical or ethical - should be based on. It should be our North Star.

We will see more mind-blowing advances in 2025 that are equally or more significant than o3, but at the end of the day, it all comes back to the question of how we as individuals and as a society can create the productive human/AI partnerships you're referencing.

Here's to an intriguing 2025 ahead, Nick!
