The Wicked Problem of AI in Education: A New Framework for Educators
Reframing the Challenge: A Problem to Solve vs. a Condition to Navigate
Thomas Corbin, Margaret Bearman, David Boud, and Phillip Dawson have published "The wicked problem of AI and assessment" in Assessment & Evaluation in Higher Education, fundamentally reframing how we should think about AI's impact on education. Rather than viewing generative AI as a problem with a solution waiting to be discovered, their research reveals it exhibits all the characteristics of what planning theorists call a "wicked problem" – a challenge that resists definitive solutions and requires ongoing navigation rather than one-time fixes.
The Ten Characteristics of Wicked Problems and AI in Education
Based on interviews with 20 university deans and department chairs, the study demonstrates how AI in assessment exhibits all ten defining characteristics of wicked problems:
1. No Definitive Formulation
The AI assessment challenge cannot be clearly defined because educators see fundamentally different problems. Some view AI as a workforce preparation tool: "If they're using it in the workforce already […] we can't just say to students, you cannot use it." Others see it as educational fraud: "It's cheap learning, because students end up finishing university knowing zero, having learned zero from day one to the end." Without agreeing on what the problem actually is, institutions cannot develop coherent responses.
2. No Stopping Rule
There are no clear criteria for knowing when you've successfully addressed AI in assessment. Teachers cannot determine when their solutions are adequate: "How do we actually tell? You can't" and "Have I struck that right balance? I don't know." Unlike fixing a broken system where success is measurable, AI challenges offer no endpoint that signals completion.
3. Solutions Are Not True or False, Only Better or Worse
Every AI response involves sacrificing something valuable. One teacher created separate assessment tracks for AI and non-AI use, which served pedagogical goals but "created a huge amount of work, additional work, because it was effectively two assessments and it was a bit of a nightmare." Another noted: "We can make assessments more AI-proof, but if we make them too rigid, we just test compliance rather than creativity." No approach is purely correct or incorrect.
4. No Way to Test Solutions
Educators cannot verify whether their AI policies and adaptations are working. As one teacher explained: "If a student uses AI appropriately for brainstorming, we might never know. If they use it inappropriately, we also might never know." Even technological solutions fail: AI detection software incorrectly flagged human-written work, making it impossible to test effectiveness without real consequences.
5. Cannot Be Studied Through Trial and Error
Every attempt to address AI in assessment carries irreversible consequences. Teachers worry about career impact: "If I'm the only one [changing essays for more novel assessments], I would be punished. I will have less and less students enrolling in my units." Others fear institutional damage: "It's the reputation of the university, not necessarily the student." Each "trial" affects real students, careers, and institutional standing.
6. No End to Possible Solutions
The number of potential approaches to AI in assessment is limitless. Educators described oral examinations, portfolio assessments, industry partnerships, peer review systems, in-class presentations, authentic tasks, hybrid approaches, and countless variations. With infinite possibilities and no clear criteria for selection, many teachers reported feeling "at a loss" about which direction to pursue.
7. Every Problem Is Essentially Unique
AI challenges vary dramatically by context, making universal solutions impossible. Oral assessments work well for small philosophy classes but become impractical for large cohorts: "250 students by five minutes, make it 10 min […] that's, yes, it's like 2500 min." What succeeds in nursing education may fail in business programs. Institutional resources, disciplinary norms, and student populations create unique constraints requiring different responses.
8. Problems Are Symptoms of Other Problems
AI assessment challenges expose deeper structural issues in higher education. Teachers identified AI vulnerabilities as symptoms of problematic business models: "A university like [ours], which is based on a business model, which is online-based, where you cannot incentivize students to come in person, and all the assessments are based on tasks you ask students to do at home in their own time, this model is the most vulnerable to fraud in an age of AI." The AI problem reveals pre-existing cracks in educational systems.
9. Problem Description Determines Possible Solutions
How educators frame the AI challenge constrains which responses seem reasonable. Framing AI as an integrity threat leads to control measures: "I would still prefer exams to come back on campus because it would be the only piece of assessment that we can truly say this is their own work." Framing AI as a professional tool promotes integration: "Students need to be able to use it efficiently." The problem definition predetermines the solution space.
10. Decision-Makers Have No Right to Be Wrong
Educators bear full responsibility for AI-related decisions while lacking control over outcomes. They worry about graduating unprepared students: "Are we in fact sending students out into the workforce who can get through an interview, but when they start doing the job, they can't?" Teachers feel "very, very vulnerable within the university" because their assessment decisions affect student learning, institutional reputation, and professional standards, yet they must make these decisions amid radical uncertainty.
Try my Wicked AI Navigator GPT. Copy and paste advertising blurbs, research articles, lesson plans, or district policies into this GPT to get a detailed analysis from the perspective of the Wicked Problem framework.
The Three Essential Permissions
The researchers propose three "permissions" that educators can give themselves and institutions can support to navigate the wicked problem of AI in assessment:
Permission to Compromise
Traditional problem-solving seeks optimal solutions that maximize all values simultaneously, but wicked problems force impossible choices between equally important priorities. In AI assessment, educators must choose between security and authenticity, workforce preparation and foundational learning, manageability and educational richness. This permission recognizes that "every assessment design (with or without GenAI) involves trade-offs between equally important values" and removes "the toxic burden of pursuing perfection that cannot exist." It allows educators to state plainly that their assessment prioritizes certain values at the expense of others and transforms institutional culture from punishing imperfection to learning from thoughtful trade-offs.
Permission to Diverge
Wicked problems resist universal solutions because context determines everything. What works brilliantly in one setting often fails catastrophically in another, as evidenced by oral assessments that were "really nice" for small philosophy seminars but impossible for large cohorts: "250 students by five minutes, make it 10 min […] that's, yes, it's like 2500 min." Permission to diverge means accepting that "successful practices in one educational context need not – and often should not – be replicated elsewhere" and that "divergent approaches to common challenges can reflect contextual wisdom rather than inconsistency or failure." This transforms institutional expectations from uniformity to fitness for purpose, releasing educators from the assumption that good solutions must be universally applicable.
Permission to Iterate
Assessment design has always been iterative, but AI accelerates the pace of necessary change as capabilities evolve monthly, student behaviors shift each semester, and professional requirements change constantly. Permission to iterate recognizes that continuous adaptation is essential rather than exceptional, requiring systems that support rapid adjustment rather than punishing frequent change. This "transforms assessment from a product to be delivered to a practice to be refined," allowing educators to respond to reality rather than defend outdated plans. When educators can adjust based on learning that dual submission created impossible workload or that AI capabilities exceeded expectations, iteration becomes professional development rather than admission of inadequacy.
The Path Forward: Supporting Educators in a Wicked Bind
As the researchers conclude: "Universities that continue to chase the elusive 'right answer' to AI in assessment will exhaust their educators while failing their students. Those that embrace the wicked nature of this problem can build cultures that support thoughtful professional judgment rather than punish imperfect solutions."
The research reveals the extreme strain educators face when grappling with AI assessment challenges. Teachers describe feeling "at a loss," being "really at a loss, to be honest," and experiencing the burden of decisions where they have "no right to be wrong" yet must navigate radical uncertainty. One teacher expressed feeling "very, very vulnerable within the university running assessments like this" when making assessment decisions. Many participants explicitly acknowledged being overwhelmed: "I'm at a loss. I keep on trying to have conversations with people, and people seem to be at a loss too."
This widespread acknowledgment of stress is crucial because it shifts focus from individual inadequacy to systemic support. Educational institutions must recognize that working with wicked problems inherently creates psychological burden and develop systems that help educators thrive under these conditions. This means creating structures that normalize uncertainty, reward thoughtful experimentation over perfect outcomes, and provide collaborative spaces for shared problem-wrestling rather than isolated struggle.
The path forward requires abandoning the search for silver bullets in favor of developing adaptive capacity. Institutions must build cultures that support educator decision-making rather than mandate uniform responses, recognize divergent approaches as evidence of contextual wisdom rather than institutional inconsistency, and treat assessment iteration as professional development rather than design failure. Most critically, they must acknowledge that "while it is true that wicked problems do not have correct solutions, they do have better and worse responses" and create conditions where educators can develop the wisdom to navigate permanent uncertainty with skill, collaboration, and institutional support rather than individual heroism.
Source: Corbin, T., Bearman, M., Boud, D., & Dawson, P. (2025). The wicked problem of AI and assessment. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2025.2553340
I also just posted a brief research review of this. Corbin et al. usefully clarify the problem ... but a deeper wicked problem remains: how to train educators to apply the "permissions" in practice, i.e., how to help them design assessments that balance integrity and authenticity, and how to empower them to judge degrees of validity rather than defaulting to policing.
The wicked problem inside the wicked problem:
• Layer 1: AI destabilizes assessment itself.
• Layer 2: Even if we accept the “3 permissions” (compromise, diverge, iterate), there is still the unsolved challenge of training educators to use them skillfully.
Formulated more clearly, the wicked problem of "assessment training" is this: how can educators be prepared to:
1. Exercise the three permissions confidently in real settings (compromise, diverge, iterate),
2. Design assessments that balance workload, integrity, and authenticity, and
3. Judge and maintain validity rather than defaulting to policing?
Not common knowledge, and not an easy task.
Solving the first-order wicked problem of AI and assessment will require solving the second-order wicked problem of educator capability.
I think humanity should understand that we have entered a new era that demands a complete transformation in how we understand the ground rules: what we once considered complex and intelligent may no longer count as complex and intelligent. Industry and machinery first made possible what was physically impossible for humankind, and we are now entering an era in which AI will make possible what is mentally impossible for humankind. So clean your slate, rewrite and redefine everything from the beginning, make better use of the available resources, and focus on a "what else can the mind do" mindset instead of dwelling on the one you are going to lose sooner or later.