Thanks, Suzi! We pushed each other on this one. It was a real treat to think alongside Alejandro. What a brain!
Yes indeed!
Yup, the "brain neurons = neural networks" equivalence is exactly the mental model I always fall back on as a shortcut. (Unsurprisingly, since that's basically the most commonly used analogy.) I also never stopped to consider that there might not be a feedback loop in a neural network.
Love to see you dig deeper here. Thanks!
Looking forward to the next chapter.
Thanks, Daniel. The language choice on the part of programmers invited the conceptual collapse. Part of the work of this series is to reopen the space and beauty in between the two disciplines. But I couldn’t have done that without Alejandro’s assistance.
That's why I like to call them "differentiable tensor circuits" which is way less fancy but far more accurate.
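"Differentiable tensor circuit" can sound abstract, so here is a minimal sketch of what it means concretely. This is a toy illustration in NumPy with made-up sizes, not anyone's actual model: a "layer of neurons" reduces to an affine tensor operation plus an elementwise nonlinearity, and the forward pass is purely feedforward, with none of the recurrent feedback real cortex has.

```python
import numpy as np

# A "layer of neurons" is just a differentiable tensor operation:
# an affine transform followed by an elementwise nonlinearity.
# Toy sizes chosen for illustration; no spikes, no dynamics, no feedback loop.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights: 3 inputs -> 4 "neurons"
b = np.zeros(4)               # biases

def layer(x):
    # Forward pass: strictly feedforward computation.
    return np.tanh(W @ x + b)

x = np.array([1.0, -0.5, 2.0])
y = layer(x)
print(y.shape)  # (4,)
```

Everything here is differentiable, which is the whole point: the network is a composition of tensor operations that gradient-based training can flow through, rather than anything resembling biological neurons.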
Have you written anything about this? I'd love to read your arguments on this point.
Not on Substack, at least not yet. I posted some rants on Twitter about the many issues with stretching the biological analogy for artificial neural networks too far. I'll probably revisit the topic with more nuance in the near future.
There, I found it
https://x.com/alepiad/status/1343752866616537088?s=20
Another
https://x.com/alepiad/status/1342094375820668930?s=20
Suzi, great idea. I think Tse, in his interview, intentionally underplays the analogical connections between the backpropagation algorithm and cortical feedback loops. Maybe you can pick up this thread in your stack at some point. The neuroscience at that level extends well beyond my expertise.
goes to jot down some ideas...
If anyone can write that post, you can...
Yes, I agree. I'd love to read a deep-dive on the difference between backpropagation algorithms and cortical feedback loops. That could be a fascinating article.
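For concreteness about what that comparison would be comparing: here is a minimal sketch of backpropagation on a single toy tanh layer (assumed shapes and learning rate, nothing from any particular paper). The backward pass reuses the exact same weights as the forward pass, an engineered symmetry that cortical feedback connections are not known to share, which is one reason the analogy gets contested.

```python
import numpy as np

# Backprop on one tanh layer, squared-error loss (toy illustration).
rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))    # weights: 3 inputs -> 2 outputs
x = rng.normal(size=3)         # a single fixed input
target = np.array([0.5, -0.5]) # desired output

for step in range(2000):
    y = np.tanh(W @ x)             # forward pass
    err = y - target               # gradient of 0.5 * ||y - target||^2
    grad_h = err * (1 - y**2)      # backprop through the tanh nonlinearity
    grad_W = np.outer(grad_h, x)   # note: gradient uses the forward weights
    W -= 0.1 * grad_W              # gradient descent update

print(np.tanh(W @ x))  # approaches target
```

The error signal here is a global, externally supplied teaching signal pushed backward through the network, which is quite unlike the local, ongoing feedback traffic between cortical areas.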
Whether it's a boon or a challenge, the structure of AI's neurons is fundamentally different from that of human neurons. Either way, the future with AI holds tremendous promise and excitement.
When we recognize the foundational differences, we can start to discern the real similarities, no?
Finally catching up on my AI Substacks this afternoon. What a pleasure reading this crisp overview. Looking forward to the next two parts. Thanks, Nick and Alejandro!
Thank you!
Another fascinating article, Nick and Alejandro! You both have a knack for raising the most interesting of ideas.