3 Comments

It’s a real problem. We talked about it in class last week. 13-year-olds (who use it!), full of bluster and confidence, claim they would never do anything a bot tells them to do. Yes, but not everyone is in the same place mentally. Education in the AI age means regular conversations with teens about mental health, privacy, and safe online habits.


Yes, we are going to see a lot of problems like this if we don’t act now.


I straddle the line, perspective-wise. On one side, I'm deep into generative AI and want to build chat-based characters that promote positive views of STEM for middle-school students and guide them in self-driven learning in those fields. On the other, I'm very deeply concerned about emotional safety for children, both personally and as a substitute teacher.

As a developer, my attitude is that distributors of AI products to end users bear full responsibility for the harm their products cause, and legal means must be available to hold them to that responsibility quickly and completely. Clauses in Terms of Use that disclaim this responsibility should not be allowed. Means of verifying age or certifying guardianship must be provided by the distributor. (That's actually a very broad problem.) As a developer and distributor of teen-oriented AI products, I hold that emotional safety is primary in design, implementation, and marketing. Certification for child safety by reputable organizations should be available, and publicity and public pressure on distributors should be strong and persistent.
