From work to flow: How talking to AI is changing the way we work, think, and feel
Guest Post from Nigel P. Daly, Ph.D.
The deadline is near and your marketing strategy is almost finished. You know it is completely uninspired. Your coworkers are busy and you don’t want to look foolish or unprofessional by asking for help. You’re stuck and it feels crappy.
Well, that may have been the case a few short years ago.
A March 2025 study from Harvard Business School—The Cybernetic Teammate (Dell’Acqua et al., 2025)—looked at how 776 professionals at Procter & Gamble used Generative AI (GenAI) and discovered something unexpected: working with AI didn’t just boost productivity—it made people feel better about their work.
As expected, the participants using GenAI were more efficient and produced higher-quality results—roughly a 15–18% performance improvement, measured as a 0.37 standard deviation increase in quality.
But the real story?
Those same participants reported more energy, enthusiasm, and emotional satisfaction than their AI-free colleagues. The emotional boost was even more striking than the productivity gains: a 0.457 SD increase in positive emotions—translating to a roughly 20% uplift in feelings like confidence, curiosity, and engagement.
The Cybernetic Teammate study used a 2×2 experimental design comparing four groups: individuals and teams working with or without GenAI as a co-pilot rather than an autonomous agent. Strikingly, individuals using AI outperformed human-only teams, and teams augmented by AI saw the biggest gains in both performance and emotional satisfaction.
Caveat: No one is saying that cybernetic teammates should replace human teammates
Right off the bat, we should acknowledge that “cybernetic teammates” cannot and should not replace human teammates. It has been shown that real mastery in professional skills depends on the human bonds formed through challenge, complexity, and connection—elements that are difficult, if not impossible, for GenAI to replicate (Beane, 2024). And of course, AI’s confident outputs require close human supervision because AI is prone to faulty reasoning and limited knowledge of context. But even if the reasoning problems can be overcome, the issue of context will remain. As Dahar and Fleming (2025) point out, AI can be good at tasks that require (organizational) explicit knowledge that is publicly available, but flounders at tasks that require tacit and experiential knowledge—i.e., knowledge of context—that resides entirely in the minds and bodies of humans.
Nonetheless, it is interesting that at least for certain knowledge tasks, working with an AI co-pilot is more enjoyable than working alone. Further, while other studies (e.g., Boston Consulting Group, 2023) have shown that lower-performing employees tend to benefit the most from AI—with up to a 43% performance boost compared to 17% for top performers—The Cybernetic Teammate study showed equal emotional gains across roles and experience levels. This suggests that AI’s positive emotional impact may be more democratic and consistent than its productivity effects.
When Maslow Meets the Machine …
Cue Maslow’s hierarchy of needs: basic safety, belonging, esteem, and finally, self-actualization. GenAI tools have the potential to support all four levels:
Safety: AI creates a low-pressure space to try, fail, and revise without judgment.
Belonging: Natural language exchanges feel conversational, collaborative, and social. Even when the AI’s praise is sycophantic, it often still feels good.
Esteem: AI-augmented work feels sharper, more polished, more professional.
Self-Actualization: People venture into creative and cognitive spaces they might not have dared, or been capable of, exploring alone.
When work feels more fulfilling, we’re more likely to stay engaged, resilient, and even happy. This emotional shift is where AI's real revolution may lie.
Figure 1. Maslow’s hierarchy of emotional needs and Bloom’s taxonomy of cognitive functions (Created with ChatGPT-4o and edited with Canva).
… cognition can bloom …
AI doesn’t just respond—it can scaffold your thinking. It can support all stages of Bloom’s Taxonomy of cognition, from remembering and understanding to analyzing and creating.
At the root of this positive attitude is how working with AI can drive curiosity. “Curiosity is like rocket fuel for LTP formation in the parts of the brain critical to long-term memory storage” (Sarma & Yoquinto, 2021). Used well, AI-induced curiosity can be rocket fuel not just for remembering, but for all the higher levels of Bloom’s taxonomy as well.
In the HBS study, even non-experts matched the performance of innovation professionals when using AI. The tool didn't just speed up thinking—it expanded the range of what people felt confident attempting.
The tasks in the study—such as designing product packaging and developing go-to-market strategies—are classic “Creating” and “Evaluating” level activities in Bloom’s taxonomy. AI gave individuals the confidence, knowledge, and structure to engage curiously and deeply in these high-level thinking tasks, regardless of their prior expertise.
That’s emotional transformation through cognitive scaffolding.
… and “flow”
This rise in positive emotion points to “flow”—a state of deep concentration and engagement identified by psychologist Mihaly Csikszentmihalyi (1990).
Flow occurs when individuals experience a balance between a task’s challenge and their skills, receive immediate feedback, and feel a sense of autonomy and purpose. While the study’s authors did not mention flow states, they implied that AI helped close the gap between task complexity and the user’s abilities; in particular, “AI allows less experienced employees to achieve performance levels that previously required either direct collaboration or supervision by colleagues with more task-related experience” (p. 15).
This means AI assistance can decrease a task’s challenge level and increase the user’s skill level. The result is a wider flow window that not only reduces anxiety and “crappy feelings” but also reduces boredom. In other words, it creates positive emotion.
Figure 2. Flow state—when challenge level meets skill level (Created with ChatGPT-4o).
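The challenge–skill dynamic above can be sketched as a toy model. This is purely illustrative: the 0–10 scales, the ±2 flow band, and the 30% `ai_assist` adjustment are my own assumed parameters, not figures from the study.

```python
def emotional_state(challenge: float, skill: float, ai_assist: float = 0.0) -> str:
    """Toy model of Csikszentmihalyi's flow channel.

    challenge, skill: task difficulty and user ability on an arbitrary 0-10 scale.
    ai_assist: 0-1; assumed to lower effective challenge and raise effective skill.
    """
    effective_challenge = challenge * (1 - 0.3 * ai_assist)  # AI scaffolds the task
    effective_skill = skill * (1 + 0.3 * ai_assist)          # AI augments the user
    gap = effective_challenge - effective_skill
    if gap > 2:
        return "anxiety"   # task far exceeds ability
    if gap < -2:
        return "boredom"   # ability far exceeds task
    return "flow"          # challenge and skill roughly balanced

# A hard task that overwhelms an unaided employee...
print(emotional_state(challenge=9, skill=5))                 # anxiety
# ...can fall inside the flow window with AI assistance.
print(emotional_state(challenge=9, skill=5, ai_assist=1.0))  # flow
```

Under these assumptions, AI assistance shrinks the challenge–skill gap from both sides, which is one way to picture the “wider flow window” described above.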
The HBS study’s findings—higher engagement, increased confidence, reduced frustration—are all classic indicators of a flow state. GenAI can help users find this “just right” zone more easily:
Challenge matched with support: AI stretches users’ skill level without overwhelming them.
Immediate, constructive feedback: Responses come in real time, keeping momentum.
Sense of control and direction: Users guide the exchange, revising and refining at will.
These are not just productivity features—they are flow enablers. With thoughtful training and workflow design, GenAI can set the stage for employees to regularly enter flow states to increase not only output but also satisfaction and motivation.
Figure 3. Flow with AI integrating feeling and thinking (Created with ChatGPT-4o and Canva).
Unfortunately, training is sparse.
Obstacles to flow … AI use vs training gap
A 2024 Gallup study found that 47% of Gen Z employees reported using GenAI weekly, yet nearly half said their schools or workplaces offered no clear guidance or training on AI use (Gallup, 2024). This is especially a problem for the younger generation of GenAI users, whose lack of knowledge and life experience tends to put them at a disadvantage when learning these AI tools (Gerlich, 2025).
A 2024 McKinsey report also showed that 91% of employees are using GenAI at work, but only 13% of organizations have implemented multiple use cases, highlighting a significant gap between individual usage and organizational adoption.
A more recent McKinsey report (2025) echoed this training gap: employees are eager to use GenAI, but only 1% of organizations feel they’ve reached maturity in AI adoption.
All of these statistics show that while the emotional and productivity benefits of AI are real, they're not evenly distributed—and training will play a key role in unlocking them.
Training for AI fluency
To realize this potential, organizations must go beyond onboarding. They must teach flow-friendly AI usage—how to choose the right kinds of tasks, set up effective prompts, evaluate progress, and adjust difficulty.
But this remains an unresolved challenge given that GenAI is a general-purpose technology, or what Narayanan and Kapoor (2025) recently called a normal technology, like electricity or the internet. But unlike electricity or the internet, GenAI can directly affect and dialogically interact with thinking, making it a kind of unprecedented cognitive electricity.
When used well, GenAI almost becomes part of the user’s thinking process, a kind of System 0 (Chiriatti et al., 2024) that can scaffold what Kahneman (2012) called System 1 (intuitive) and System 2 (analytical) thinking. This interface, which I have called cognitive bleed, can either enhance or diminish our cognitive abilities (Daly, 2024).
This is a new cognitive and learning playing field, and we’re still trying to figure out the rules of the game. What are the right skill sets, approaches, and attitudes to achieve the best workflows with AI?
And, what are the risks?
What if AI becomes the perceived “smartest voice in the room”? We risk, as Dana Dahar recently commented, “shifting the center of gravity” in collaborative work. What happens to human trust, growth, and judgment when AI becomes the anchor in a team? Designing training and norms that both keep AI in a supportive role and prioritize human control and development will be essential.
Although we should be very wary of how we anthropomorphize AI, it is generally accepted that to get the best results, it is a good strategy to treat GenAI like a human—or, as Mollick (2024) warily suggests, “an infinitely fast intern, eager to please but prone to bending the truth” (p. 57). The HBS researchers also call it a “teammate.”
Unleashing the emotional and cognitive benefits of working with an AI “teammate” thus requires a shift in how we train workers and students.
And while the training big picture remains foggy, it is getting clearer. First, we need to move beyond the passive idea of tool “literacy” toward a more active “AI fluency”—a type of communicative competence that blends technical skill, emotional awareness, and strategic thinking (Daly, 2025a). This will help us learn how to turn work-flows into communicative and fluent word-flows with our AI teammates. In other words, we need to learn how to speak and become fluent in the “language of flow.”
Along these lines, a number of principles are emerging for both organizational and educational training.
Implications for training: From collaboration to cognition to ritual
AI training means going beyond individual prompting skills to collaborative—and even ritualistic—AI workflows that enable humans to co-create with AI without marginalizing human voices.
1. Train the team, not just the individual
The Cybernetic Teammate study showed that individuals using AI performed better than human-only teams and experienced more positive emotions while working. But teams using AI outperformed all other groups. This suggests that AI fluency should be developed at both the individual and group level. Training should include how to co-prompt, co-edit, and negotiate meaning when humans and AI share a task. Teams must learn how to interpret AI output critically, assign trust proportionally, and integrate AI as a flexible contributor without letting it dominate the interaction.
In other words, we need metacognitive strategies.
2. Teach metacognition for AI interaction
If AI becomes the most efficient, fluent, or "emotionally satisfying" partner in the room, there's a risk of "shifting the center of gravity" in collaboration. Students and workers may overly defer to AI or undervalue human insight. To prevent this, training should incorporate metacognitive strategies: prompting learners to reflect on why they trust AI, when they rely on it too much, and how their own inputs shape AI outputs. By learning to think about their thinking with AI, users can better guard against overreliance and cultivate cognitive resilience.
But metacognitive strategies are not enough. Good physical, mental, and ethical hygiene come from embodied habits and rituals.
3. Reclaim ritual: The power of practice
Finally, perhaps a broader cultural shift is required. Modern Western education tends to privilege theory over practice, cognition over embodiment. But preventing AI overreliance—or even addiction—may require something older and more embodied: ritualized behaviors that guide thinking, regulate emotional response, and encode expertise through repetition. I have written about some practices to help achieve a digital balance with AI, like the following (Daly, 2025b):
physically organizing your workspace;
structuring prompting routines that build in human cognitive friction before, during, and after AI exchanges;
engaging in breathwork and mindfulness techniques to maintain focus and attention;
setting clear boundaries for AI use and “deep work”;
interspersing analog practices like note-taking and idea mapping to transfer ideas and information across digital, cognitive, and physical media.
In short, training must move from literacy to fluency, from isolated cognitive and language skills to social and embodied practice. Only then can AI become not just a tool, but a co-agent in deeper learning, collaboration, and personal growth. AI has the potential to augment, not atrophy, our cognitive and creative abilities. But this requires habits of mind. And body.
References
Beane, M. (2024). The skill code: How to save human ability in an age of intelligent machines. Harper Business.
Boston Consulting Group. (2023). How people can create—and destroy—value with Generative AI. https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai
Chiriatti, M., Ganapini, M., Panai, E., Ubiali, M., & Riva, G. (2024). The case for human–AI interaction as system 0 thinking. Nature Human Behaviour, 8(10), 1829-1830.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. Harper & Row.
Dahar, S., & Fleming, R. (2025, April 21). Protect human expertise in the age of generative AI. HFS Research. https://www.hfsresearch.com/research/protect-human-expertise/
Daly, N. P. (2024, November 29). Cognitive bleed: Towards a multidisciplinary mapping of AI fluency. Cognitive Bleed Substack.
Daly, N. P. (2025a, February 3). Educating for AI fluency: Managing cognitive bleed and AI dependency. Educating AI Substack. https://nickpotkalitsky.substack.com/p/educating-for-ai-fluency-managing
Daly, N. P. (2025b, January 19). Achieving NirvanAI and overcoming AI addiction. Cognitive Bleed Substack.
Dell’Acqua, F., et al. (2025). The cybernetic teammate: A field experiment on Generative AI reshaping teamwork and expertise. Harvard Business School Working Paper 25-043. https://www.hbs.edu/faculty/Pages/item.aspx?num=67197
Gallup. (2024). Gen Z insights: 2024 study with the Walton Family Foundation. https://www.gallup.com/analytics/651674/gen-z-research.aspx
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1).
Kahneman, D. (2012). Two systems in the mind. Bulletin of the American Academy of Arts and Sciences, 65(2), 55-59.
McKinsey & Company. (2025). Superagency in the workplace: Empowering people to unlock AI’s full potential at work. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
McKinsey & Company. (2024). Gen AI’s next inflection point: From employee experimentation to organizational transformation. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/gen-ais-next-inflection-point-from-employee-experimentation-to-organizational-transformation
Mollick, E. (2024). Co-Intelligence: Living and working with AI. Portfolio/Penguin.
Narayanan, A., & Kapoor, S. (2025, April 15). AI as normal technology: An alternative to the vision of AI as a potential superintelligence. Knight First Amendment Institute at Columbia University.
Sarma, S., & Yoquinto, L. (2021). Grasp: The science transforming how we learn. Anchor.