12 Comments

I'm curious as to how Siri will maintain the ethical age limits on AI use.

author

It won’t. Those age limits went up in a puff of smoke. Unenforceable. All about optics: appearing to be ethical, etc.


You echo my thoughts and deep concerns. Maybe you'd like to explore this piece of ethical skulduggery in a future article. I'm thinking about where that leaves educators on the DoC landscape in terms of using AI tools in the classroom. OpenAI, Microsoft (and Claude, I think) all still have 18 as the age limit for holding accounts, with parental guardianship required for 13-18-year-olds' access. According to OpenAI, they won't allow their tech to be used outside of these rules, but I'm not convinced Snapchat or Khan Academy pay much heed. I'd be very interested to know where we might be heading in this regard. And, as you say, is age verification enforceable in any case?

author

Hey, Dr. Kevin,

I’d be excited to work on this piece.

We need to establish:

What the age limits are.

What their legal status is.

What processes exist for circumventing them (parental permissions).

How they apply in the case of embedded AI tech (at the level of the API: Khanmigo, PowerTools, Siri, etc.).

We need to contextualize in light of a long history of embedded AI in video games, social media, search engines.

We need to emphasize what has changed: a new level of data collection, empowered by GPT-level AI, that in previous models was shuttered behind age limits.

We are putting together the pieces for our audiences. Helping them make their own conclusions.

I would love for this piece to take a shot at answering the question: how do we onboard our K-12 students safely to AI systems in the fall?

Emerging picture: we need to buy into a product that offers protection at the level of the API (Khanmigo at $30 per student, PowerNotes at $99 per student). Is safe AI equitable, accessible?

Unless you find another way to get kids safe access. Personally, I just don’t trust kids inside the big commercial models.

Jun 13 · Liked by Nick Potkalitsky

Nick, I like this piece. Just a few quick notes for now. Last night I was reading Yanis Varoufakis' book, Technofeudalism. I am struck by the parallels between what I have read so far and points you make in this piece. I think they are complementary, but, as I'm only about a third of the way into the book, I will not say more for the moment.

Another thing that struck me is the range of different reactions from people who watched the WWDC announcements. While I recognize that the ChatGPT portion is important, watching it, it actually struck me as not that important. Given what was announced, the AI portions were almost anticlimactic. The thing I am actually most excited about was the earlier announcement that I could access everything on my phone from my Mac. Perhaps because of what I am doing at work this month, I was preoccupied by all of the security and privacy details, which include the ChatGPT connections, but did not particularly pick the part about OpenAI out from the rest.

I am struggling with the idea of agents and agency right now. That's why I haven't commented on your last two posts. I do agree that they are tools. I get what you are saying about Heidegger, though I dislike trying to work through his jargon and personally despise him, I think what you have written is important. At the same time, a lot of my views on agency are changing, largely as a result of what Amitav Ghosh has expounded in his last two non-fiction works (The Nutmeg's Curse a couple of years ago, and more recently, Smoke and Ashes), and reading about what science is learning about agency in plants (most recently, Zoë Schlanger's The Light Eaters). I believe I am starting to reconsider the agency of some of our tools historically, from fire to language to watches, but recognize that what I am talking about played out over much longer periods of time. Sorry if that does not make any sense for now; it is too inchoate in my mind.

Anyway, this is good. There is a lot to chew on in your last few posts. I look forward to seeing where you go with all this.

author

Thanks, Guy, for checking in. Yes, Heidegger is awful. I feel like I had to write my way through him as a foundational text, and now my schema has little to do with his approach. I have kind of inverted his approach and abandoned the messy terminology in the process. Win win.

Agency is a rich, heady topic. Through the 2010s, in tandem with a deep dive into object-oriented ontology and actor-network theory, I held onto a very expansive conception of agency. I love Ghosh, by the way (it is interesting to see literary authors picking up on these theoretical movements a decade later). There is still a part of me that holds onto that expansive view, but now, in my Substack role as an advisor particularly to teachers, I am simplifying some of my grad school notions in order to provide a pragmatic framework for moving forward in a context where the goalposts are constantly being moved and reoriented.

My framework is reductionistic, and if my "likes" are indicative of anything, it is perhaps not that popular, but I don't see anyone else out there doing this work. I figured I'd take a few posts and try to put something together before moving on to a more proactive, praxis-oriented series in tandem with a hybrid training I am building. Thanks for sticking with these articles. Really appreciate your feedback.

Jun 14 · Liked by Nick Potkalitsky

You mention the likes not being too high on these posts. In my conversations with university professors, instructional designers, developers, technologists, and other highly educated people, I do not find much of a thirst for understanding AI in this way. There are plenty of exceptions, but the concerns and the interest are mostly on a practical or operational level. That is as true of ethics as of other aspects of AI.

Two things that might play into it are the slipperiness of agency and the lack of depth of understanding of the history of technology. I realize that it is probably possible to give a philosophical definition of agency, but in practice it is very difficult. In my case, I really didn't give it much thought until I read William Gibson's novel of that name and realized how much of my reading over the years had touched on it. I mentioned Amitav Ghosh's recent works, where he asks if plants can have agency in some sense, perhaps at the species level. He really stretches the idea to the breaking point. Zoë Schlanger is asking the same question at the level of individual plants and is careful to frame it as different from the kinds of agency animals have. She is clearly struggling to not anthropomorphize the concept. I am not certain that I am reading you correctly. I think you are saying that AI may have a type of agency, but that it is important to understand that in mechanistic terms and not to surrender our own agency to the tool. That may just be a projection of my own thoughts onto what you are writing.

I also believe that everyone needs a good background in the history of technology. I did not go through a History of Technology program, but the history of technology was tied into my graduate research and became even more important to me after I switched careers in the mid-1990s. Technology has always affected our own agency, constraining or empowering it. I am trying to figure out when the first glimmerings of agency-like things began to appear in technology. I also think that if we could move beyond the obvious parallels to AI, like the introduction of calculators, we would have better perspectives. I have focused on its likeness to horology for a year or more, and am also exploring the possibilities of the extremely elaborate fire-control systems that began to be deployed on warships just before WWI.

Your work in regard to agency and AI is appreciated by at least some of your readers.

author

Horology, huh? I'd love to read something about that. Thanks for your kind words. I think it is one thing to read articles that criticize OpenAI; it is another thing to read articles that highlight how humans participate in the iterative processes that strip their own agency. That can be quite a bummer! Ha!

Jun 13 · edited Jun 13 · Liked by Nick Potkalitsky

I really appreciated this issue for the scientific and behavioral analysis of the impact of Apple Intelligence. I also want to take this opportunity to thank you for recommending The Intelligent Friend. And I'm really intrigued by the way you described it; I'll probably think about changing the newsletter's Substack bio 🤣.

author

No problem, Riccardo. I really like the energy you bring to the community, and your latest post with Alejandro is amazing. Keep leaning into your personality and your humor as you write and craft.

author

Thanks, Amrita, for the restack!!!


Now you’ve really got me thinking! Keeping humans in the driver's seat, versus giving up the controls. Still, there’s no putting the genie back in the bottle, it seems. Having toggle switches, and being very conscious and aware of what is being decided for us, seems like necessary authority to maintain. Much more to reflect on! Thank you for articulately sharing your thoughts on this matter.
