EdTech and AI: Enhancing Learning or Prioritizing Engagement?
In-Depth Insights from Cerioli and Laurenty: The Future of Child Development in the AI Era
Greetings, Educating AI Readers!
Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I appreciate this vote of confidence. Your contributions allow me to dedicate more time to research, writing, and building Educating AI's network of contributors, resources, and materials.
If you haven’t noticed, my work has recently taken an ethical turn. As I prepare my syllabi for this coming fall, I have some pressing questions to explore about AI policy, privacy, security, and ethics.
I want to commend my fellow Substackers who are actively working on building AI-responsive curricula. I will pick up the torch again in a couple of weeks when I return to my series on the gaps and challenges AI has revealed in contemporary education.

But in this article, I want to explore an important study just published by Mathilde Cerioli (mathilde@everyone.ai) and Olga Muss Laurenty. In “The Future of Child Development in the AI Era: Cross-Disciplinary Perspectives Between AI and Child Development Experts,” the authors offer one of the most significant contributions to the discussion of AI integration and implementation in the K-12 space since the Department of Education’s AI Report in May 2023. I advise everyone to take some time to read this work closely, but in case you lack the time, I will do my best to summarize its most important findings.
The Study Aims to Accomplish Two Things:
Consult with experts in AI, product development, and child development, “to provide a general understanding of how AI is likely to be applied in children’s environments and to anticipate some of the potential benefits and challenges” (2).
Conduct an extensive review of the scientific literature, “to gain insights on implications of growing up in a generalized AI-infused environment” (2).
The authors deliver masterfully on both objectives. I will highlight a few key ideas that are particularly pertinent to my own work:
Childhood Development: Activations vs. Obstacles
In this section, the authors discuss how childhood development involves brain function, environment, and genetics. Our brains contain over 80 billion neurons. Through various processes, our brains make, record, and preserve experiences that are advantageous for short- and long-term development. Some neural systems require external experiences to activate, while others are only assisted by external experiences. The authors argue that “the development and quality of AI applications hold both the potential to support developmental milestones and the risk of hindering them if they detract from crucial environmental interactions” (4).
The authors focus on the screen- or device-bound nature of most AI applications as a potential obstacle. They cite research that “high screen use and limited outdoor activity are linked to rising myopia rates among children” (5). “Screen usage has been linked to disrupted sleep patterns, reducing both the quantity and quality of sleep.” “Prolonged screen time contributes to increased sedentary behavior, raising the risk of cardiovascular issues later in life.”
“The impact of screens varies among different populations, with both more adverse as well as positive outcomes observed in individuals with neurodiversity.” “Technoference, which relates to how technology interferes in human relationships, is reshaping parent-child interactions and constitutes a growing area of study as parents themselves express conflict over their screen usage around their children.”
In summary, AI's embedding in screen-reliant technology will reinforce negative trends associated with excessive screen time.
Infancy, Early Childhood, and AI-Tutoring Systems
The authors provide some of the most penetrating insights I have read about AI integration into infancy and early childhood.
“Babies and toddlers…seldom learn from digital media exposure, a phenomenon known as the video-deficit effect…This concept is crucial when developing solutions for children under 3 years old, where increased interactivity through touch screens does not always translate into learning” (10). What actually helps consolidate optimal learning? “The responsiveness of a caregiver to their actions.”
For young persons ages 3 to 5, a key concept is “dual representation.” For children “to fully understand and learn symbols…they must simultaneously hold two mental representations: the symbol as a physical object in its own right and the symbol as a representation of something beyond itself” (11). Optimal learning requires engagement with physical materials!
Regarding AI for language instruction in young persons, the authors warn about the “overreliance on unproven methods that replace crucial language development activities” such as “rich social interaction” (11). They also caution against “reduced engagement with physical objects and rich sensory experiences,” noting that “longitudinal studies have reported associations between prolonged screen time and decreased sensory integration and fine motor skills” (11).
Children’s Understanding of Privacy
In this section, the authors highlight the complexities of privacy for children.
“The notion of privacy is a perfect illustration of the difficulty for young people to make informed decisions considering the complexity of the implications and abstraction skills they require” (11).
“Children under 7 lack the abstract understanding of concepts like ‘privacy’ and ‘safety’” (11).
“While by age 11 their comprehension improves, they still lack judgment in applying these concepts to practical situations.”
“Even during adolescence, their decisions often prioritize immediate gratification over the consideration of uncertain risks in the future, despite a more mature understanding of those concepts” (11-12).
In summary, most children are too young to fully grasp how privacy works in AI-infused environments.
Attention-Capturing Algorithms
The authors express concern over the impact of attention-capturing algorithms in AI-infused applications.
“Higher use of [AI-infused] mobile technology could generate higher need for instant gratification” (12). “Researchers have found that after a 3-month exposure to smartphones, non-users become more immediacy-oriented in a delay discounting measure suggesting that heavy smartphone usage can causally reduce an individual’s capacity to delay gratification.”
The authors worry that as AI-infused applications become more capable of extending conversations through attention-capturing mechanisms, these tools will contribute to the erosion of executive function already in progress in middle school and high schools as a result of smartphones. “Executive Functioning (EF) is a set of cognitive skills including inhibitory control, working memory, and flexibility” (13). “Diminished executive functioning, particularly through increased media multitasking among adolescents, is associated with lower scores on standardized tests measuring academic performance in English and math” (13).
In summary, children are particularly vulnerable to attention-grabbing algorithms. The rise of GPT-era AI will likely empower tech designers to create applications even more adept at capturing individual interests.
AI Integration: Engagement vs. Learning
In this section, the authors discuss the balance between engagement and learning in AI integration.
“The field of EdTech remains largely unregulated, allowing products to enter the market without mandatory evaluation to prove their efficacy” (18). The authors observe that there is a “persistence of low-quality educational apps in the market” (19). “The limited efficacy of some EdTech solutions can be attributed to their design limitations, specifically the risk of prioritizing engagement over educational value…(time spent on the app vs. the amount of knowledge acquired).” “Though personalized learning can lead to improved learning, up to this point, scientists underscore the insufficient profound understanding of the underlying pedagogies and the learning process.”
In the context of gamified AI-infused applications, EdTech designers are using gamified elements to motivate engagement. “The issue faced by gamification is that it creates intrinsic motivation to practice the exercise by making it more appealing, which then moves the motivation toward being extrinsic, since it is no longer about learning. This means students engage for the rewards, further displacing intrinsic motivation away from the learning” (19).
The authors add that “while monitoring students' engagement can enhance learning, it also poses significant risks that must be carefully considered…Though monitoring can lead to more engagement and prosocial behaviors, it comes at the cost of increased anxiety and stress.” The authors conclude with a larger question, worthy of broader consideration: “How might excessive monitoring stifle crucial aspects of identity formation such as developing one’s sense of trustworthiness and competence?” (20).
In summary, effective AI integration in learning systems requires incorporating explicit pedagogies, well-defined learning goals and outcomes, and established methods for knowledge delivery alongside engagement strategies.
Recommendations
The authors conclude with a series of recommendations to policymakers, school leaders, teachers, and tech designers. The list is exceptional in its quality and depth. I will highlight the most important here as objects for further inquiry and discussion:
Governments:
Convene a multi-expertise committee to define appropriate ages and usages for AI agents.
Extend guidelines on data collection from minors from age 13 to age 15.
Regulate digital products for young users with testing and standards as stringent as those for physical products.
Tech Developers:
Prioritize and improve parental control features.
Implement robust age-appropriate filters.
Design AI tools to adapt to children’s ages.
Prove the efficacy of AI applications when marketing the merits of their solutions.
Educators:
Integrate AI literacy into teacher training curricula.
Create specific guidelines and recommendations for each age category.
In summary, EdTech is a largely unregulated industry; parents desperately need help monitoring use and content; and educators urgently need training on how to integrate AI safely into classrooms.
Conclusion
The insights provided by Mathilde Neugnot-Cerioli and Olga Muss Laurenty in their study highlight the substantial work needed to safely integrate AI into education. While there is an urgency to catch up with the rapid advancements in AI and put these tools in front of students, we must be cautious. In our rush to mainstream AI in classrooms and address concerns like plagiarism, we risk creating other significant problems—attention capture, engagement versus learning, privacy issues, monitoring leading to stress, and increased screen time.
Our true urgency should be a rush towards developing good policies and effective implementation strategies. If it takes time to get it right, then it takes time. It's crucial that we prioritize the well-being and development of our students over the allure of rapid technological integration. By focusing on thoughtful, ethical approaches, we can ensure that AI serves as a valuable tool in education, enhancing learning while safeguarding the essential human aspects of child development.
Nick Potkalitsky, Ph.D.
Check out some of my favorite Substacks:
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s Mostly Harmless Ideas: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Riccardo Vocca’s The Intelligent Friend: An intriguing examination of the diverse ways AI is transforming our lives and the world around us.
I have been using Quizizz for three school years. It is a learning platform that lets you create very versatile digital teaching materials: slide presentations that help explain concepts and videos, interspersed with quizzes and questions that allow students to consolidate and, at the same time, verify their learning. At the end of each lesson, which is always lively and participatory, I reward the top three students with a sticker. Engagement is very strong, partly because I assign the same lesson for review at home. But reading the article, I was struck by the passage on the critique of gamification, which made me rethink the attitude some students show during a game-based lesson: their commitment is often motivated above all by the prospect of finishing first and winning the prize, while learning itself becomes secondary.
Indeed, gamification cannot be the only learning strategy; it must be accompanied by slower activities that promote reflection and in-depth study, as well as cooperative work.
Nice piece, Nick. One of the elements that ties some of the analysis together is how focused so much ed-tech is on individualized experiences. It seems to me the potential for generative AI is in creating social experiences that involve teams of students guided by teachers tackling some problem or situation embedded in an LLM. The focus on the screen is displaced by the human interaction that surrounds it. Young children focus on a caregiver to understand the symbols and contextualize them. The engagement loops are not mediated by algorithms but guided by humans.