9 Comments
Oct 31, 2023 · Liked by Nick Potkalitsky

Profound reflection on a sensitive and complex topic. Great job, Nick!

author

Thanks a lot, Nat! I tried to model the piece's depth and clarity on your writing style.

Oct 31, 2023 · Liked by Nick Potkalitsky

Thanks for your kind words. You’re a very talented writer who knows how to capture a reader’s attention 🎉🥂


A very detailed and well-delineated discussion of how humans view AI and why. Andrew Smith and I poked at this a bit earlier this summer in "Beware the Binary - Avoiding Absurd AI Assumptions," where we compared the Doomsayer with the Utopist (Andreessen falls into this category). Yet neither view is right, nor particularly helpful.

https://www.polymathicbeing.com/p/beware-the-binary

author

Wow! This is an amazing post! I knew others had to be tapped into these dynamics. I am glad you shared this information.


There are a few of us who have been poking at this over the past year, for certain.

In fact, I wrote a novel this summer that exploits the space in between the Doomsayer and the Utopist.

https://www.amazon.com/Paradox-Book-One-Singularity-Chronicles-ebook/dp/B0C7NBZX89/

Oct 31, 2023 · Liked by Nick Potkalitsky

Some interesting ideas here, Nick. One thing that stood out for me is that your three positions don't seem to include those who are highly skeptical that AGI is close, or even those who doubt it is possible at all. Your thesis stems from the idea that it's coming soon, yet people have been claiming it is just "around the corner" for quite some time. I could argue that any talk of hypothetical AGI dangers is a distraction from the real risks inherent in the current crop of Gen AI tools. What is really happening with these tools? There is a difference between what academics in the field say and what CEOs choose to amplify. Hype can be a weapon to influence and gain power over shareholders, investors, politicians, and the general public.

author

Great insights, Boodsy! You are right: my wording elides the framework/orientation you describe. I'd like to say that techno-realism includes this skepticism in its "caution" toward claims about AGI, but that phrasing doesn't go far enough.

It sounds like we stand fairly close together on the threat and danger of hype. I wrote much stronger versions of my AI Culture War sections, but tempered them through re-reading and editing as I practiced the skills of reflection and metacognition.

Where is this all heading? As a teacher, I think we need to educate students about these different perspectives -- their strengths and their weaknesses.

AI is not a static thing; it is always being interpreted. Students can choose to take different positions on AI. Those positions allow them to see different values in these technologies, while also making certain risks invisible or less important.

The overarching goal of an AI literacy program is to help students become better consumers of their own AI viewpoints.

I highly appreciate the skeptical posture on AGI because it emphasizes the value of human-centric planning and development. Keep carrying that torch! Mine is certainly lit!

author

Thanks for restacking, @Nat! You rock!
