The AI Meeting Bot Problem: Built for Business, Breaking Education
Another Vendor-Created Training Emergency—This Time It's Meeting Bots
Thank you for engaging with this article. This is not legal advice. Decisions about AI meeting transcription tools are high-stakes and require input from legal counsel, IT leadership, curriculum specialists, and special education coordinators. Districts need all these voices at the table.
If you find value in these conversations, please consider liking and sharing with your network. To the readers who have taken the plunge into paid subscriptions—your continued support makes these twice-weekly publications possible. Thank you for investing in this work.
When Zoom quietly rolled out its AI Companion feature and Microsoft embedded Copilot into Teams, educational institutions found themselves facing yet another technology-first, policy-later crisis. Across the K-16 landscape, administrators are now scrambling to understand FERPA implications, assess vendor contracts, train staff, and manage a tool that many faculty and students have already been using for months without oversight.
This is not a story about innovation. This is a story about an ungovernable space that tech companies created and educational institutions are now desperately trying to contain.
The Familiar Pattern
To be fair, AI meeting transcription isn’t entirely new territory. Tools like Otter.ai and Rev have offered automated transcription for years, and the educational benefits are real: accessibility accommodations for deaf and hard-of-hearing students, support for non-native English speakers, records for students who miss class, and administrative efficiency for overburdened staff. These are legitimate use cases that deserve recognition.
But the current proliferation of AI meeting tools represents something different—a shift from opt-in specialized services to default-on features embedded in the platforms we use every day. When AI transcription requires a deliberate choice to activate a separate tool, institutions can manage access, vet vendors, and create clear policies. When it’s a button click away in your standard Zoom meeting, the governance model collapses.
The Policy Scramble
The institutional response has been predictably chaotic. Some organizations, like the American Society of Association Executives, have implemented blanket bans: “the use of any AI notetaking tools is prohibited.” Others, like Harvard, have adopted enterprise-only frameworks that permit “approved tools as part of limited HUIT-directed pilot programs.” Still others have developed Byzantine content-based restriction systems that categorize meetings by sensitivity level, permitting AI notes for routine discussions while prohibiting them for “performance evaluations, disciplinary matters, or discussions with students.”
The University of California San Diego requires giving participants a “meaningful opportunity to object” to AI recording—a procedural solution that places the burden on the least powerful people in the room to speak up against their supervisors, professors, or colleagues.
These varying approaches aren’t evidence of thoughtful local customization. They’re evidence of confusion. When institutions reach dramatically different conclusions about the same technology under the same federal privacy laws, it suggests the space itself is ungovernable.
Why This Space Defies Governance
The fundamental challenge is that AI meeting transcription exists in a control vacuum. Even institutions that implement comprehensive policies face insurmountable enforcement problems:
External participants operate outside your authority. Your carefully crafted policy means nothing when half the meeting participants are joining from outside your institution. They can activate Otter.ai, Fireflies.ai, or any number of third-party tools regardless of your wishes.
Technical controls are incomplete at best. While Zoom allows admins to disable AI Companion, block certain bot domains, and require CAPTCHAs, determined users can still run AI applications locally on their own devices. Some bots have been reported to bypass waiting rooms. The multi-layer strategy required to approximate control—disabling features, blocking domains, submitting support tickets to block SDK-based bots, manually monitoring and ejecting bots—is administratively exhausting and still imperfect.
The litigation exposure is staggering. Legal experts warn that “unlike human-generated notes, which can fall under the protections of attorney-client privilege or other confidentiality safeguards, AI-produced content could be seen as a neutral document and easily discoverable by opposing parties during legal proceedings.” Every automatically transcribed meeting creates a comprehensive, timestamped record that wouldn’t otherwise exist—a gift to opposing counsel in future disputes.
FERPA compliance is ambiguous. When do AI-generated meeting transcripts become education records? Legal analysis suggests that “any meeting transcripts or AI-generated notes that contain personally identifiable information about a student and are maintained by the school (or a contractor on its behalf) likely qualify as education records under FERPA.” But institutions have reached different conclusions about what this means in practice, and the Department of Education has provided little clarifying guidance.
Accuracy issues undermine their core purpose. AI transcription makes errors—sometimes significant ones. A.T. Still University acknowledges that “AI notes may contain errors and should be reviewed for accuracy before wide-spread distribution.” Yet the value proposition of these tools is that they save time by eliminating manual note-taking. Requiring human review of every transcript defeats the efficiency argument while still exposing institutions to the privacy and legal risks.
The Accessibility Dilemma
Perhaps most frustratingly, institutions find themselves caught between competing obligations. When students or employees request AI transcription as an accessibility accommodation, privacy policies collide with ADA requirements. Harvard’s policy acknowledges this tension: “individuals who wish to request access to AI supported notetaking for accessibility accommodations should contact their Local Accommodations Coordinator to discuss options.”
This forces case-by-case adjudication of requests that should be straightforward. It also highlights how vendor-driven deployment timelines—not institutional readiness—dictate when these policy conflicts emerge.
What Educational Institutions Must Do Now
The situation is imperfect, but institutions cannot simply hope for the best. Here are the necessary steps:
Immediate Organizational Actions
Conduct a technology audit. Identify every platform used for meetings across your institution—Zoom, Teams, Google Meet, WebEx—and determine the current status of AI features. Are they enabled? Who has control? What’s the default setting for new accounts?
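Parts of this audit can be scripted. The minimal Python sketch below walks every Zoom user on an account and scans their settings for AI-related toggles via Zoom's REST API (`GET /users` and `GET /users/{userId}/settings`, both documented endpoints). The `ZOOM_TOKEN` environment variable and the `("companion", "summary")` key substrings are assumptions; confirm the actual AI Companion field names against Zoom's current settings schema before trusting the output.

```python
# Audit sketch: list every Zoom user on the account and scan their
# settings for AI-related toggles. Assumes a Server-to-Server OAuth
# token in ZOOM_TOKEN with user read scopes.
import os
import requests

BASE = "https://api.zoom.us/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['ZOOM_TOKEN']}"}

def list_users():
    """Yield every user on the account, following pagination."""
    token = ""
    while True:
        resp = requests.get(
            f"{BASE}/users",
            headers=HEADERS,
            params={"page_size": 300, "next_page_token": token},
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("users", [])
        token = data.get("next_page_token", "")
        if not token:
            break

def find_ai_keys(node, path=""):
    """Recursively collect setting paths whose key looks AI-related.
    The substrings below are assumptions; check Zoom's settings schema."""
    hits = {}
    if isinstance(node, dict):
        for key, value in node.items():
            here = f"{path}.{key}" if path else key
            if any(s in key.lower() for s in ("companion", "summary")):
                hits[here] = value
            hits.update(find_ai_keys(value, here))
    return hits

for user in list_users():
    resp = requests.get(f"{BASE}/users/{user['id']}/settings", headers=HEADERS)
    resp.raise_for_status()
    flagged = find_ai_keys(resp.json())
    if flagged:
        print(user["email"], flagged)
```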
Disable default-on AI features at the account level. For Zoom, disable AI Companion with the “Hidden in Meetings toolbar” setting. For Microsoft Teams, work with your IT department to restrict Copilot access through enterprise controls.
Implement a multi-layer blocking strategy for external bots (a detection sketch follows this list):
Remove third-party AI apps from your Zoom Marketplace
Enable authentication requirements (institutional login only when possible)
Enable waiting rooms for all meetings
Block known AI bot domains (otter.ai, fireflies.ai, read.ai) in account settings
Enable CAPTCHA for guest users
Submit support tickets to Zoom to block specific SDK-based bots
Train meeting hosts to manually remove any bots that do join
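As a backstop to those settings, bot arrivals can be flagged automatically. The sketch below is a minimal Flask receiver for Zoom's `meeting.participant_joined` webhook event that matches participant display names against known notetaker patterns; `notify_host` is a hypothetical stand-in for whatever alert channel your institution uses.

```python
# Webhook sketch: flag likely AI notetaker bots as they join a meeting.
# Assumes a Zoom app subscribed to the meeting.participant_joined event.
# A production receiver must also answer Zoom's endpoint.url_validation
# challenge, omitted here for brevity.
import re
from flask import Flask, request, jsonify

app = Flask(__name__)

# Display-name patterns commonly used by notetaker bots; extend as needed.
BOT_PATTERNS = re.compile(
    r"otter\.ai|fireflies|read\.ai|notetaker|notes? ?bot", re.IGNORECASE
)

def notify_host(meeting_id: str, name: str) -> None:
    # Hypothetical alert hook: swap in email, Slack, or a ticketing system.
    print(f"ALERT: possible AI bot '{name}' joined meeting {meeting_id}")

@app.post("/zoom/events")
def zoom_events():
    event = request.get_json(force=True)
    if event.get("event") == "meeting.participant_joined":
        obj = event["payload"]["object"]
        name = obj.get("participant", {}).get("user_name", "")
        if BOT_PATTERNS.search(name):
            notify_host(str(obj.get("id", "")), name)
    return jsonify(status="ok")
```

Name matching is best-effort, since bots can join under arbitrary display names, so this supplements host vigilance rather than replacing it.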
Draft a clear, enforceable policy that addresses:
When AI transcription is prohibited (any meeting involving student information, personnel matters, privileged legal discussions)
When it requires consent from all participants
What “meaningful opportunity to object” actually means procedurally
How accommodations requests will be handled
Retention requirements and deletion timelines for any transcripts that are created (see the automation sketch after this list)
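Retention language is easier to honor when deletion is automated. The sketch below illustrates one approach using Zoom's cloud recording endpoints (`GET /users/{userId}/recordings` and `DELETE /meetings/{meetingId}/recordings`), moving recordings older than a retention window to the trash. The 30-day `RETENTION_DAYS` value is a placeholder for your policy's actual timeline, and anything under a litigation hold must be excluded before the sweep runs.

```python
# Retention sketch: move Zoom cloud recordings older than a policy
# window to the trash. RETENTION_DAYS is a placeholder; align it with
# your written policy and exclude anything under a litigation hold.
import os
from datetime import datetime, timedelta, timezone
from urllib.parse import quote
import requests

BASE = "https://api.zoom.us/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['ZOOM_TOKEN']}"}
RETENTION_DAYS = 30  # assumption: substitute your policy's timeline

def sweep(user_id: str) -> None:
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    # The API caps list ranges at 30 days; a real sweep would walk
    # successive windows further back in time.
    window_start = cutoff - timedelta(days=30)
    resp = requests.get(
        f"{BASE}/users/{user_id}/recordings",
        headers=HEADERS,
        params={
            "from": window_start.date().isoformat(),
            "to": cutoff.date().isoformat(),
        },
    )
    resp.raise_for_status()
    for meeting in resp.json().get("meetings", []):
        # Meeting UUIDs can contain "/" and must be double URL-encoded.
        uuid = quote(quote(meeting["uuid"], safe=""), safe="")
        # action=trash keeps a recovery window, unlike a hard delete.
        requests.delete(
            f"{BASE}/meetings/{uuid}/recordings",
            headers=HEADERS,
            params={"action": "trash"},
        ).raise_for_status()
        print("Trashed recordings for:", meeting.get("topic", ""))

sweep("me")  # "me" works for user-level tokens; pass a user ID otherwise
```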
Establish vendor review requirements. No AI transcription tool should be used without a privacy and security review confirming FERPA compliance, appropriate data use limitations, and contractual protections. The burden should be on the vendor to document compliance, not on individual faculty to assess it.
Mandatory Staff Training Components
For educational institutions where online meetings are the primary modality, comprehensive training is not optional. Every staff member who conducts meetings needs to understand:
Legal and privacy fundamentals:
FERPA basics and why meeting transcripts containing student information are education records
Discovery implications of creating comprehensive, timestamped records
Why “I’ll just delete it later” doesn’t work (litigation holds, retention policies)
Consent procedures:
How to notify all participants at the beginning of meetings if AI transcription will be used
Why silence is not consent
How to document objections and what to do when someone objects
Special considerations for meetings involving students, where power dynamics make “opportunity to object” especially problematic
Technical controls:
How to check whether AI features are active in their meetings
How to identify AI bots in the participant list (they often appear with labels like “Otter.ai” or “Fireflies Notetaker”)
How to remove unauthorized bots from meetings
Platform-specific settings (Where is the AI Companion button in Zoom? Where are Copilot settings in Teams?)
Alternative approaches:
When and how to request approved institutional transcription services
How to request accommodations for accessibility needs through appropriate channels
Best practices for traditional note-taking and collaborative note documents
Real scenarios and decision trees:
“A student asks me to record our advising meeting so they can refer back to it. What do I do?”
“I’m in a committee meeting and notice an Otter.ai bot has joined. What do I do?”
“A colleague from another institution joins my meeting and mentions they’re using AI notes. What do I do?”
This training cannot be a one-time orientation video. It requires regular reinforcement, especially as platforms update their features and new tools emerge.
The Larger Failure
The AI meeting notes crisis is symptomatic of a larger failure in how educational technology gets deployed. Vendors prioritize market penetration over institutional readiness. Features launch with inadequate privacy protections and minimal consideration of the regulatory environment in which educational institutions operate. Compliance becomes the customer’s problem, even though customers lack the leverage to demand better terms or meaningful controls.
Educational institutions, for their part, have been too passive in demanding that vendors build for the regulated environments we inhabit. We accept terms of service written for corporate users and then spend countless hours trying to retrofit compliance. We allow default-on features instead of requiring opt-in architectures. We fail to coordinate sector-wide responses that might actually create leverage with vendors.
The result is what we see now: a technology that offers genuine benefits for accessibility and organizational efficiency, undermined by a deployment model that makes it nearly impossible to use responsibly in educational settings. Institutions are left managing an ungovernable space, crafting policies they cannot fully enforce, and hoping that the legal exposure they’ve accumulated never materializes in actual litigation.
We deserved better than this. More importantly, our students deserved better than this.
The question now is whether we’ll continue accepting whatever vendors deploy, or whether we’ll finally demand that educational technology be built for education—with privacy protections, institutional controls, and regulatory compliance designed in from the start, not bolted on as an afterthought.
Until then, the most responsible path forward is aggressive restriction, comprehensive training, and constant vigilance. It’s an exhausting posture. But in an ungovernable space, it may be the only option we have.
Nick Potkalitsky, Ph.D.
Check out some of our favorite Substacks:
Mike Kentz’s AI EduPathways: Insights from one of the most creative and eloquent AI educators in the business!!!
Terry Underwood’s Learning to Read, Reading to Learn: The most penetrating investigation of the intersections between compositional theory, literacy studies, and AI on the internet!!!
Suzi’s When Life Gives You AI: A cutting-edge exploration of the intersection of computer science, neuroscience, and philosophy
Alejandro Piad Morffis’s The Computerist Journal: Unmatched investigations into coding, machine learning, computational theory, and practical AI applications
Michael Woudenberg’s Polymathic Being: Polymathic wisdom brought to you every Sunday morning with your first cup of coffee
Rob Nelson’s AI Log: Incredibly deep and insightful essays about AI’s impact on higher ed, society, and culture.
Michael Spencer’s AI Supremacy: The most comprehensive and current analysis of AI news and trends, featuring numerous intriguing guest posts
Daniel Bashir’s The Gradient Podcast: The top interviews with leading AI experts, researchers, developers, and linguists.
Daniel Nest’s Why Try AI?: The most amazing updates on AI tools and techniques
Jason Gulya’s The AI Edventure: An important exploration of cutting-edge innovations in AI-responsive curriculum and pedagogy.