What If We Created a World Where Our Kids Need AI to Survive?
AI’s Origins in a Culture of Hypercompetition and Radical Individualism
As I send another graduating class into the future, a timely piece by David Brooks got me thinking more broadly about the world we're shaping. This essay is a bit of a departure from my usual techno-pragmatic writing, which tends to focus on making the best of tough situations. I hope you enjoy it. If you do, consider sharing it with a friend, subscribing, or becoming a paid supporter of our work at Educating AI.
There's a line in David Brooks' latest piece, "We Are The Most Rejected Generation," that I can't stop thinking about. A Williams College student, explaining his generation: "We are the most rejected generation."
The numbers Brooks cites are staggering. Harvard rejects 52,050 out of 54,000 applicants. Goldman Sachs crushes the dreams of 312,300 out of 315,000 internship hopefuls. Students fill out 250 applications just to feel safe. A student club at Berkeley—a student club—rejects 994 out of 1,000 applicants.
But here's what haunts me: we're living through the rise of AI precisely when young people need it most desperately just to survive the gauntlet we've built for them.
The Numbers Behind the Machine
Walk into any college career center today and you'll see the shift. Students aren't asking how to write better cover letters—they're asking which AI tools write the best ones. Career counselors report a fundamental change in office hours: less time spent on self-reflection and skills development, more time troubleshooting AI outputs and discussing whether applications sound "too robotic."
The tools are everywhere: ChatGPT for cover letters, Claude for personal statements, AI resume optimizers, even platforms that use artificial intelligence to prep for video interviews at companies that use AI to screen candidates. We've created a feedback loop where artificial intelligence trains people to beat artificial intelligence.
We've created systems so impossibly competitive, so systematically brutal, that human beings literally cannot navigate them without artificial assistance. The cruel irony? The same technological forces eliminating their future jobs are the ones they're desperately turning to for survival. It's like watching someone drink poison because they're dying of thirst.
When Being Human Becomes a Bug, Not a Feature
Brooks writes about students becoming "masters of impression management"—perfect elevator pitches, beaming smiles, carefully crafted narratives. But what happens when everyone has access to AI that can craft the perfect narrative, optimize the perfect pitch, generate the perfect application?
Here's what I keep hearing from recent graduates: the disorienting experience of spending months crafting authentic personal essays, only to see what AI can produce in seconds—polished, compelling, strategically optimized. The human effort feels inadequate by comparison. Their real stories, full of genuine uncertainty and growth, feel boring next to artificially generated narratives designed to hit every admissions algorithm perfectly.
We're not just raising a rejected generation. We're raising the first generation that's learning their authentic self isn't competitive—that human messiness, genuine uncertainty, and actual growth require artificial enhancement to even register in our systems.
Think about what this means: We've accidentally created selection processes that favor artificial intelligence over human intelligence, algorithmic optimization over genuine insight, perfect polish over authentic potential.
How We Got Here (And Why It Matters)
This didn't happen overnight. The roots trace back to decisions that seemed reasonable at the time:
Grade inflation made GPAs meaningless, so colleges relied more heavily on extracurriculars and essays—which are easier to game with AI.
Online applications made it trivially easy to apply everywhere, flooding every opportunity with applicants and forcing institutions to use algorithmic screening.
Rankings and metrics pushed schools and companies to optimize for selectivity rather than fit, creating perverse incentives to reject as many people as possible.
Economic anxiety convinced parents that only elite credentials could guarantee security, intensifying competition for a fixed number of spots.
Each decision made sense in isolation. Together, they created a system where being genuinely human—uncertain, still-growing, authentically flawed—became a competitive disadvantage.
The Psychological Toll We're Not Counting
The research Brooks cites is devastating: constant rejection makes people more aggressive, less empathetic, worse at self-control. It attacks our core human needs for belonging, agency, and competence.
But there's a deeper wound we're not talking about. When you need AI to write your cover letter, optimize your essays, and manage your professional identity, what happens to your sense of authentic self?
I keep thinking about the student who told Brooks that "none of her friends are doing long-term thinking or saving for a mortgage. The world seems so radically unstable to them that they'd rather enjoy what they can today than sacrifice now for some possibility that may never come to pass two decades from now."
That's not just economic anxiety—that's a generation learning that human agency is an illusion, that their authentic selves aren't worth investing in, that the future belongs to whoever can best optimize their artificial intelligence.
The Counter-Argument Worth Considering
Some will argue this is just evolution—that competition has always been fierce, that new tools always emerge to help people compete, that this generation is just experiencing what every generation faces in new forms.
There's truth to this. But here's the crucial difference: previous tools helped people become more themselves (better education, clearer communication, broader networks). AI tools increasingly replace core human functions—creativity, reasoning, even personality expression.
When a resume-writing service helps you present your experience clearly, that's enhancement. When AI generates your personality and life story from scratch, that's replacement. We've crossed a line from augmenting human potential to manufacturing it artificially.
What Actually Changes Everything
Brooks ends with "there must be an easier way to grow up." But the problem isn't difficulty—it's that we've made being human insufficient.
The path forward isn't banning AI or returning to some imaginary golden age. It's redesigning our systems to reward what humans do best while letting AI handle what it does best.
What this looks like in practice:
Instead of personal essays, colleges could use structured conversations—harder to fake with AI, better at revealing actual thinking.
Instead of keyword-optimized resumes, companies could use skills-based assessments and trial projects—showing what someone can actually do, not how well they can optimize for algorithms.
Instead of application rat races, institutions could use rolling admissions, regional quotas, or lottery systems among qualified candidates—reducing the incentive to artificially perfect every application.
Instead of early specialization, we could value intellectual curiosity and late bloomers—rewarding the authentically human journey of figuring out who you want to become.
The Choice We're Making
The most rejected generation has been forced to choose between authenticity and survival. They've learned that being genuinely human—uncertain, growing, flawed, real—isn't enough to participate in the institutions that shape their futures.
This is a choice we made, not an inevitable outcome. We optimized for efficiency over humanity, metrics over meaning, artificial perfection over authentic potential.
But we can make different choices. We can build systems that bring out the best in people instead of filtering them out. We can create institutions where AI handles the mechanical work so humans can focus on the creative, connective, and deeply personal work that only we can do.
The question isn't whether this generation will adapt to our systems. They already have—by becoming partially artificial themselves. The question is whether we have the wisdom to build systems worthy of their humanity.
Because right now, we're raising the first generation in human history that needs artificial intelligence not to thrive, but simply to be seen as human.
And if that doesn't break our hearts enough to change how we've organized the world, I'm not sure what will.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of our most insightful, creative, and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting edge innovations in AI-responsive curriculum and pedagogy.
"It's redesigning our systems to reward what humans do best while letting AI handle what it does best."
What is it about this drug that education can't just say it is bad? Large Language Model Generative Artificial Intelligence does not do anything well. Why are we bending over backwards to make sure our students are addicted? Everything you said is true. We made this gauntlet. So we need to change this gauntlet. Large Language Model Generative Artificial Intelligence IS A TOOL OF THE GAUNTLET. If we want something different, then we don't use Large Language Model Generative Artificial Intelligence.
Thanks for this piece, Nick, especially at commencement time, when reflections about learning and life briefly get their due. You know that I agree with you about the dangers here and what AI has done to the humans using it. I also agree with a more pragmatic approach, and I think small-scale, content-specific, AI-enhanced projects or mentor bots can help students. But more than anything, I think every school (and course) needs to focus on AI literacy from middle school on: How do these systems work? What are they good for? What are the ethical challenges? How are they using resources and impacting the planet?
You are absolutely right that students, especially young adults, feel a loss of agency and ability to change anything in their lives. Encouraging critical thinking about a technological and cultural transformation of this scope is a starting place.
As for earlier elementary grades, I’ll go out on a limb and say AI should not be part of the curriculum - at all. Students need to learn how to read, write, and do arithmetic (math through algebra, I’d say) before AI is an effective tool. Learning how to think and relate to other humans has to come first if we don’t want the world to sink further into artificiality at the behest of deceptive Silicon Valley billionaires like Sam Altman: https://open.substack.com/pub/marthanichols/p/sam-altmans-ai-juggernaut-can-it?r=lh6m5&utm_medium=ios