6 Comments
author

I know you always speak from wisdom here. I am shooting from the hip slightly. In all likelihood, a multipronged approach will be required to make progress. I am still a bit disgruntled about the recent failures to pass data privacy protections at the federal level. To me, that is the substrate of a potent solution, one that includes both technology regulation and use regulation. You are probably right that the trickle will not impact K-12 much, probably because it won't impact the industry very much, if we are being honest.

Aug 29 · Liked by Nick Potkalitsky

I wonder if the focus on size is truly warranted. I'm of the mind that AI technology will get smaller, more diverse, and more niche, certainly in education.

Also, people build small things that use large models. So where does that fit in?

I like the idea of focusing on use.

author

Shrinking AI is certainly in the best interests of big business, since smaller models are cheaper to run. But perhaps it serves users too. In all honesty, I think big compute gets in the way sometimes. I see a future where we have smaller strategic models built for specific purposes. The GPT builder tools seem like a good first step, as does training your own bot for specific purposes, but so far the results are only so-so on the user end.

Aug 28 · edited Aug 28 · Liked by Nick Potkalitsky

It is easy to fall into a regulation-equals-no-progress mindset. We need to avoid that; it serves the interests of the few, not the many. If AI companies are going to harp on the dangers they have chosen to highlight, rather than the more realistic ones we face, and if they continue to circumvent existing laws on privacy and copyright that everyone else has to play by, and that educational institutions have to follow, then they need a great deal more regulation. Maybe the first lesson students should learn about AI is that many of these companies are bad actors. Ethics means nothing if we don't teach students about the lack of ethics behind so many of the online tools they will be using now and in the future. Teach them to question everything about these companies, their claims, and their products. AI literacy, and online literacy generally, needs to start there.

author

I hear you, Guy. I feel like you are speaking to the point at large. I am all for regulation, but I want good regulation, and I am not sure this bill qualifies. I also worry about the aftershocks of poor regulation as it ricochets through our educational systems. This bill seems to address a conception of AI that hovers somewhere around the summer of 2022. The more pressing concerns now, at least mine, are what these companies are doing with the models. They continue to insist that they must violate our privacy for us to use their tools, for instance. The bill's proposed solution is to hamstring the tools themselves. That just doesn't add up to me; it seems too indirect. I see the logic of the strategy. I just don't think it will be successful.


Nick, I disagree about the bill. It is far from perfect, but these companies do need to be held responsible for what they develop, like any other company. The threshold is set high enough that I do not think it will have much effect on university research. I doubt it will put enough strain on the companies it does cover, given their revenues, so I don't think it will affect K-12 either. I suspect the vagaries of the marketplace and public opinion will have a much greater impact, as will these companies' ability to soak up money from military, intelligence, and other security or policing bodies. Another good question: if corporations are adopting these tools but can't be bothered to train their employees in their use, should public education do that for them? How is that different from a corporation that offers such low wages and benefits that public programs like Medicaid and food stamps are essentially funding it?
