This is a great article that should be standard reading for all PhD candidates. It clearly defines the advanced disciplinary expertise required to engage with AI as a collaborator.
For K-16 education, the core focus must be critical thinking instruction and a full understanding of the AI limitations this article points out so well. The approach must involve scaffolding these concepts at every level and at every step of interaction, so that students can use their developing critical skills to constantly evaluate the AI's output, promoting authentic collaboration with AI throughout their learning journey.
Thank you for your continued insights into how we must consider all the variables when teaching young (and older) minds to work with AI.
Yes! Students must be able to evaluate AI either through their own domain expertise/knowledge or through an established process for critically evaluating outputs. This is such a critical missing step in digital literacy education… great post!
Or just have some balls and carry on doing your own reading, and don't use it. https://open.substack.com/pub/mdsauthor/p/judgment-day?utm_source=share&utm_medium=android&r=ju1iq
Student B is a current student in the public school system. The AI in this case is the teacher; Student B describes current pedagogy.
Student A is a unicorn. She doesn't exist except as the kind of student who absolutely geeks out about a topic. These are your gifted and talented.
I have been thinking about something like what you describe as the “knowledge asymmetry problem” as the way in which AI widens the digital divide. Student A may experience a cognitive gain, while Student B accumulates a cognitive debt. However, even though I agree on the benefits of a good background in disciplinary knowledge, I am not convinced that it is what makes the big difference between the two cases; I suspect Students A and B arrive at AI use with different levels of cognitive development that already shape how effectively they can use it in the first place.
So, regarding the implications, I do think AI literacy has a domain-independent core that gets tailored to the specifics of each disciplinary context. In that sense, AI literacy may not be developable outside of disciplinary practice itself, but you do not start developing it in a new disciplinary field from a blank slate. I also find it unrealistic to delay students' use of AI until after they have developed a basic disciplinary grounding, so that Student B transforms into Student A before using AI; we have to solve the problem along the way, and teachers' expertise will be crucial here, but not only because of their disciplinary knowledge and skills.
Couldn't agree more. This resonates so much with what you've written before about critical thinking as a prerequisite for engaging with AI effectively. It makes me wonder how we can design AI tools, or even curricula, to mitigate this dependency for students starting from scratch. Is it about prompt engineering for learners, or something deeper in pedagogy?