Preface by Marcus Brauchli, aiEDU Board Member, managing partner of North Base Media, former executive editor of The Washington Post and The Wall Street Journal:
“I have spent nearly half a century watching technologies redraw journalism’s map—from cold type to broadband, from teletext to infinite scroll. Each leap expanded not only the reach of journalists but also of everyone else. With the good came bad: error, rumor, and propaganda spread, too. When I helped to transition The Wall Street Journal and later The Washington Post into truly digital newsrooms, we learned that readers would need new confidence in our work, because they were being inundated with alternative narratives.
Generative AI compresses the stakes. It has turbocharged the wave of mis-, dis-, and synthetic information. A teenager with a gaming laptop can now generate a convincing voice clone or fabricate a photorealistic video in minutes and for pennies. “Seeing” is no longer believing; it is merely a hypothesis awaiting evidence. Unless we embed critical-thinking habits—and the institutional guardrails that support them—at every level of education, governance, and media, synthetic content will corrode the very idea of a shared reality.
The post that follows argues for finishing that unfinished job. It details how deepfakes threaten schools, elections, and everyday trust, and it outlines what educators, policymakers, technologists, and newsrooms must do, together, to rebuild the public’s defenses. AI has not changed the assignment; it has simply moved the deadline from “someday” to “right now.”
Deepfakes are becoming a crisis
Sometime in mid-June, an unknown actor created a Signal account labeled “Marco.Rubio@state.gov,” harvested 20 seconds of the U.S. Secretary of State’s public remarks, and began leaving voicemails on the Signal accounts of a U.S. governor, a member of Congress, and three foreign ministers.
A cable on July 3 raced through every U.S. embassy, concluding that the imposter “likely aimed to manipulate targeted individuals…with the goal of gaining access to information or accounts.” The most important diplomatic network in the world was nearly catfished by an off-the-shelf $19.99 voice-cloning model.
Thus far, the State Department has shared little beyond a terse statement: “The department is aware of this incident and is currently monitoring and addressing the matter.” We don’t know whether any sensitive information was breached, or how close a call this was.
Outside experts were blunter.
Hany Farid, a professor of digital forensics at UC Berkeley, noted that operations like this do not require sophisticated actors and can exploit gaps in the security practices of busy government officials.
“You just need 15 to 20 seconds of audio of the person, which is easy in Marco Rubio’s case. You upload it to any number of services, click a button that says ‘I have permission to use this person’s voice,’ and then you type what you want him to say,” said Farid. “Leaving voicemails is particularly effective because it’s not interactive.”
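For a sense of how little engineering this takes, here is a minimal sketch of the exact upload-then-type flow Farid describes, in Python. It assumes an ElevenLabs-style HTTP API; the endpoint paths, field names, and model ID below are illustrative stand-ins rather than a vetted integration, and the only legitimate use is cloning your own voice with your own permission.

```python
# Hedged sketch of the "upload a clip, type the words" flow described above.
# Assumes an ElevenLabs-style HTTP API; paths, fields, and the model ID are
# illustrative -- check the provider's current documentation before relying on them.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; free tiers of consumer services suffice
BASE = "https://api.elevenlabs.io/v1"

# Step 1: register a "voice" from ~15-20 seconds of reference audio.
with open("my_own_interview_clip.mp3", "rb") as clip:
    voice = requests.post(
        f"{BASE}/voices/add",
        headers={"xi-api-key": API_KEY},
        data={"name": "cloned-voice"},
        files={"files": clip},
    ).json()

# Step 2: type what you want the voice to say and download the audio.
speech = requests.post(
    f"{BASE}/text-to-speech/{voice['voice_id']}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Please call me back on this number.",
          "model_id": "eleven_multilingual_v2"},
)
with open("voicemail.mp3", "wb") as out:
    out.write(speech.content)  # a voicemail-ready file
```

The point is the shape of the flow: two HTTP requests sit between a twenty-second clip and a voicemail-ready file.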
While the attempted Rubio deepfake plot has dominated national headlines and broadcast TV, it’s not the first of its kind. In May, an impersonator breached White House Chief of Staff Susie Wiles’s phone, gaining access to her contacts and sending a series of “malicious text and voice messages,” according to the FBI.
In June, Ukraine’s Security Service revealed a plot by Russian spies who were impersonating senior agency officials in an effort to recruit Ukrainian civilians for sabotage missions. That same month, the Canadian Anti-Fraud Centre and the Canadian Centre for Cyber Security shared details about scammers using AI to create deepfakes of government officials in a rash of phone-call and text campaigns aimed at stealing secret information or money, or inserting malware into protected computer networks.
In a post on X, Obama advisor David Axelrod put it plainly:
“This is the new world in which we live and we’d better figure out how to defend against it.”
One political insider I spoke with said that the story has created a “shitstorm” in several governors’ and congressional offices. Everyone now sees what Axelrod was tweeting about: everything is about to change.
During the 2024 Republican primaries, AI-altered photos surfaced depicting Donald Trump kissing Anthony Fauci—showing how cheaply a frontier model can fabricate a moment that never happened. That same winter, thousands of New Hampshire voters got an AI-voiced robocall in Joe Biden’s folksy drawl telling them to “save your vote for November,” a stunt now at the center of a voter-suppression trial. Even the infamous “drunk Pelosi” clip—slowed down to slur her speech—keeps bubbling back into feeds as ever-smarter tools sharpen the fakery. The net-net is that misinformation operations have gone from hiring video crews to renting GPUs by the hour.
We will probably look back on the 2024 election fondly, as the last of a more innocent era.
Deepfake technology has made significant strides over the past year; hyper-realistic AI video is now far easier and cheaper to generate. As Axelrod put it, we have found ourselves in a new world. Only someone blinded by optimism and naiveté could argue that future elections won’t feature increasingly disruptive and dangerous uses of deepfakes, whether at the hands of malicious domestic actors or, more likely, cunning foreign interference.
And it’s hard to overstate the risks that these deepfakes pose in a world whose geopolitics grow more fraught and complex by the day. It’s been many years since my days studying international politics, so I’ll leave the in-depth analysis to smart folks like Edward Wong at The New York Times and John Hudson at The Washington Post, who were among the first to cover the story.
Setting aside legitimate and serious questions about national security, the episode with Secretary Rubio should be capturing the attention of leaders in the education space. While superintendents, university presidents, school principals, and nonprofit leaders may not be responsible for averting nuclear conflict and managing delicate relations with America’s complex web of allies and strategic partners, they manage massive institutions that can serve hundreds of thousands, if not millions, of students.
A totally plausible scenario:
It’s 9:47 a.m. on a Tuesday. Parents in River Oaks Unified receive a frantic robocall from what their caller ID labels as a district phone number. Superintendent Lisa Norwood, her voice unmistakable, warns parents about an “active bomb threat” at three elementary schools, urging families to “self-evacuate immediately.”
Seconds later, the same message, complete with a selfie-style video of Ms. Norwood, surfaces in several parent Facebook groups. District leadership quickly determines the message is a deepfake, but now has to contend with a traffic jam of parents flooding school parking lots. Local news vans are en route, and before an official statement can be prepared, local TV chyrons blare: “District superintendent issues emergency order—buildings evacuated.”
This might sound dystopian, but remember that we’ve already run a dress rehearsal. Last spring, an athletic director in Baltimore County used AI voice cloning to fabricate audio of Pikesville High School’s principal spewing racist slurs. The clip “quickly went viral and divided the school’s community,” forcing the principal to take leave while investigators and hate-crime hotlines fielded threats to his family. The year before, students in Carmel, N.Y., deepfaked a middle-school principal into a profanity-laced tirade, and by 2025 a National Education Association survey found that half of educators had seen deepfakes mislead students on campus.
If nothing else, the Marco Rubio incident should remind us that we’re even closer to these worst-case scenarios than we think.
How easy is it to create a deepfake these days?
Pretty damn easy.
The clone of my voice below took me less than five minutes to create using 10 seconds of audio from a recent aiEDU Studios interview and the free version of Eleven Labs. I conducted an unscientific test on my mom, and even after I jumped on the phone to explain that she had been listening to an AI version of my voice, it took her a solid minute to figure out what I meant. Through the low-fi crackle of a cell phone call, the minor imperfections that you may have noticed disappear.
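That phone-call effect is easy to reproduce. Below is a small, illustrative scipy sketch (filenames are placeholders) that pushes a cloned-voice WAV through a crude approximation of a narrowband telephone channel: resampling to 8 kHz and band-passing roughly 300–3400 Hz. Most of the tell-tale synthesis artifacts live in the high frequencies this channel simply throws away.

```python
# Rough simulation of why a cell call hides cloning artifacts: narrowband
# telephone audio keeps only ~300-3400 Hz, discarding the high-frequency
# detail where synthetic voices tend to give themselves away.
# Illustrative sketch; "clone.wav" is a placeholder mono 16-bit PCM file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, resample_poly

rate, audio = wavfile.read("clone.wav")
audio = audio.astype(np.float64)

# Downsample to 8 kHz, the sample rate of classic telephony.
target = 8000
audio = resample_poly(audio, target, rate)

# Band-pass 300-3400 Hz, the standard narrowband voice channel.
sos = butter(4, [300, 3400], btype="bandpass", fs=target, output="sos")
audio = sosfilt(sos, audio)

# Clip back into 16-bit range and write out the "phone call" version.
audio = np.clip(audio, -32768, 32767)
wavfile.write("clone_on_the_phone.wav", target, audio.astype(np.int16))
```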
This is where we are. Deepfakes have become off-the-shelf tools, available to anyone with an internet connection.
Everything is about to change
I’ve written about deepfakes before. It’s clear they aren’t solely a problem for those in the halls of power. Everyone, from young children to pensioners, will have to deal with the fact that we have entered an era in which any digital signal (voice, video, photos, “live” Zoom calls, text messages) can be synthesized on demand. A year ago, AI-generated content was eerily good.
Today, it’s indistinguishable. A study published in February with 2,000 participants found that just 0.1% of them could reliably tell AI-generated images and videos from real ones.
Here are some more examples in case you missed them:
Celebrities
A gossip weekly in Germany thought it had landed the interview of the decade with Formula 1 legend Michael Schumacher—except every quote was conjured by ChatGPT while the real Schumacher, still unable to speak after his 2013 accident, sat miles away in guarded privacy. Across the Atlantic, scammers deepfaked CNN’s Wolf Blitzer and CBS’s Gayle King into Facebook ads peddling miracle pills, then pasted Tom Hanks’s face onto a dental-plan commercial so slick that Hanks himself jumped on Instagram to shout “I have nothing to do with it.” An analysis by Channel 4 News in London found that nearly 4,000 celebrities have been victims of deepfake pornography.
Corporate fraud
Mark Read, CEO of advertising giant WPP, almost green-lit a hush-hush wire transfer after sitting through a Teams call with what looked and sounded like… Mark Read. The impostor looped pre-recorded video, deployed a flawless voice clone, and came within a single approval click of walking off with company cash. A month earlier, a Hong Kong bank employee wired $25 million to crooks who had recreated the firm’s entire finance team in a deepfake Zoom room. North American losses tied to synthetic-identity scams jumped 17-fold last year, and investigators keep repeating the same grim mantra: detection tools are still a step behind the forgeries.
Entertainment & legacy media
Remember “Heart on My Sleeve”—the banger featuring Drake and The Weeknd that neither artist recorded? It racked up 15 million streams before Universal Music yanked it from Spotify and TikTok, proving a laptop producer can now mint chart-ready vocals without paying a vocalist. In Portland, “AI Ashley” hosts a five-hour midday radio shift, her banter scripted and spoken by RadioGPT while the human Ashley Elzinga works mornings. Add whole libraries of AI-narrated audiobooks quietly flooding Audible and you get a media landscape where the voice in your AirPods may never have passed through a human throat.
Customer service
Air Canada found out the hard way when its website chatbot invented a generous bereavement fare, coaxed a grieving passenger into buying an overpriced ticket, and left the airline on the hook after a judge ruled a bot’s promise is still a promise. Municipal agencies—tax boards, benefit hotlines, even 911 overflow centers—are racing to bolt similar systems onto aging infrastructure and threadbare budgets. Soon the difference between getting a refund, a food-stamp appeal, or a snow-day update could hinge on whether you can out-argue a large language model. In a world where a synthetic Rubio can ring up foreign ministers, mispriced flights are just the opening act.
Social media
Open up Instagram, TikTok, or your preferred source of brain-rot, and it won’t be long until you encounter (knowingly or not) an entirely synthetic AI account. These accounts are racking up millions of followers, press coverage, and even brand deals without ever setting foot in a studio. Aitana Lopez, the pink-haired Barcelonan billed as “Spain’s first AI model,” earns up to €10,000 a month posing for fitness brands. Then there’s Noonoouri, a doe-eyed fashion doll who parlayed 500,000 Instagram followers into a Warner Music record deal. Her debut single, “Dominoes,” features a generative-AI vocal track and puts her on the same royalty splits as human artists. The Finnish phenomenon Milla Sofia drives the point home. She documents yacht cruises and faux Santorini sunsets for an audience The Independent pegs at more than 330,000 Instagram followers, while phone-accessory retailer Tyyliluuri touts her as “an exciting step toward a new and innovative direction for our brand.” Social clout (and record deals, seemingly) no longer requires a pulse.
There isn’t an easy fix in sight
Faced with what looks like a cybersecurity problem, most districts have focused on drafting “safe and responsible use” policies for AI. Sensible, yet almost useless against deepfakes. Even the strictest Acceptable Use Policy can’t stop a malicious actor from forging school-branded video on a bedroom laptop and uploading it from a Starbucks Wi-Fi.
Congress, for its part, is still circling the runway. Proposals like the Deepfake Report Act (tasking DHS with trend monitoring) and the DEEPFAKES Accountability Act (creating civil remedies for victims) would be forward progress, but remain stuck in committee.
States, meanwhile, have surged ahead, bolstered by the defeat of a proposed 10-year moratorium on state AI regulation. As of July 2025, 25 states have laws on the books:
Election integrity. Texas, Minnesota, California, and half a dozen others now criminalize deceptive deepfakes intended to sway voters—especially in the thirty days before an election.
Non-consensual content. California and New York impose stiff penalties for AI-generated sexual images; victims can seek statutory damages without proving emotional harm.
Consumer protection. Colorado’s sweeping Artificial Intelligence Act folds deepfakes into deceptive-trade-practices law, letting the attorney general subpoena source code if needed.
Helpful, but not enough. Some state statutes are so broad they risk First Amendment challenges; others are too narrow. And a patchwork map means a student prank that’s illegal in Kansas City, Missouri, might be perfectly lawful just across State Line Road in Kansas City, Kansas.
Technology probably won’t bail us out either. Watermarking and provenance tags (Content Credentials/C2PA) are trickling into TikTok, YouTube, and Meta products, and California’s pending AB 3211 would mandate them in campaign ads. Yet every watermark can be cropped, every hash can be re-encoded, and real-time detection still fails in adversarial tests. Think seat belts in 1968: indispensable, but no substitute for defensive driving.
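A toy example makes the re-encoding problem concrete. This is not the C2PA protocol itself, just the failure mode any byte-level provenance scheme has to contend with: the moment someone decodes an image and re-saves it, embedded metadata is gone and every hash changes.

```python
# Toy demonstration (not C2PA itself) of the fragility described above:
# a provenance record tied to a file's exact bytes breaks under trivial
# re-encoding. Filenames are placeholders.
import hashlib
from PIL import Image  # pip install pillow

def fingerprint(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

original = "credentialed_photo.jpg"  # imagine it ships with a signed credential
print("original fingerprint :", fingerprint(original))

# "Laundering" is one line: decode the pixels and write a fresh JPEG.
# Pillow drops ancillary metadata by default, and lossy re-encoding
# guarantees the new bytes hash to a completely different value.
Image.open(original).save("laundered.jpg", quality=90)
print("laundered fingerprint:", fingerprint("laundered.jpg"))
```

Robust provenance has to survive that round trip (by signing the image content rather than the bytes), and that is precisely where current schemes still struggle.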
Putting the AI genie back in the bottle is a fantasy. Venture and corporate funding for AI blew past $100 billion in 2024 alone, nearly doubling the prior year’s total and outgunning every tech boom on record. That wall of cash is pouring into cheaper, more user-friendly generative tools: a criminal toolkit Trend Micro catalogued this spring advertises real-time face-swap software for $160–$200 lifetime, bundled with voice cloning from services like ElevenLabs and video puppetry from Runway—enough to let any attacker impersonate a public figure during a live Zoom. The marginal cost of forging reality is falling toward dinner-money territory, and no serious analyst thinks the capital spigot will tighten soon.
The solution is obvious … and hard
There’s only one durable antidote to a world flooded with synthetic content: a population that reflexively interrogates what it sees, hears, and reads. We need to rapidly scale efforts to develop and hone critical thinking, tailoring them to an age of AI in which healthy skepticism is fundamental to using the internet and social media safely and effectively.
As Marcus Brauchli, who kindly wrote the preface to this piece, can attest, the concept of media literacy isn’t new, and I think it’s safe to say that we largely failed as a society to prepare students for a world of decentralized content and information.
The push for “media literacy” began in the 1990s, but largely failed because it was treated as an elective enrichment rather than a core competency. Policymakers wrote aspirational standards but left districts to implement them without dedicated funding, assessments, or teacher-prep coursework, so coverage varied wildly from one classroom—or one enthusiastic librarian—to the next.
The curriculum itself aged quickly. Lessons built around print ads and cable news couldn’t keep pace with personalized newsfeeds, meme culture, and algorithmic amplification. Meanwhile, partisan skirmishes over “bias” made school leaders wary of diving in, and the absence of clear metrics meant the subject was first on the chopping block when budgets tightened. In short, media literacy lacked the institutional scaffolding—mandated seat time, professional development, and accountability indicators—that algebra or civics enjoy, so it never scaled beyond pockets of excellence.
This week’s splashy announcements underscore why we have to aim higher than tool tutorials. Microsoft and OpenAI’s new partnership with the American Federation of Teachers will pour $23 million into a National Academy for AI Instruction that hopes to train 400,000 educators, while Microsoft’s broader $4 billion Elevate Academy initiative promises to give 20 million people basic AI credentials over the next five years. Those dollars will absolutely boost “AI fluency”—and that matters. But fluency is still a point solution. Knowing which button to press in Copilot or Claude is not the same as redesigning school policies, evolving teaching and learning to build future-ready skills, updating assessment practices, and building community trust. It’s a multi-dimensional change management problem, not just a professional-development line item.
aiEDU is betting on a systems-change approach, which is why we champion “AI Readiness” as opposed to just AI literacy (the latter is a component of the former). You can read more about how we are defining AI Readiness in this framework we published last summer. Importantly, we aim to minimize disruption to schools, finding paths of least resistance that lower the hurdles for already overwhelmed school leaders. In practice, that includes curriculum enhancements to existing high-quality instructional materials that build skills and competencies aligned to the aforementioned AI Readiness Framework. It also includes district partnerships, in which aiEDU facilitates the creation of AI strategies that cover not just school policies but also curriculum, professional development, stakeholder engagement, and other components outlined in our AI Readiness Rubric for Districts (pg. 12 in our AI Readiness Framework).
The most recent example of this work in action took place at San Diego Unified School District, where we facilitated the development of an AI strategy with a 60-member task force that included teachers, students, principals, and community members.
The media literacy dimension of AI Readiness isn’t valuable solely for harm mitigation—companies will demand employees who can’t be easily fooled by AI. McKinsey estimates AI-enabled fraud could cost global companies more than $500 billion annually by 2028. Insurers are already hiking premiums for firms whose employees fall for phishing that a basic provenance check would catch. Forward-leaning employers will soon screen for “AI judgment.” In other words, workforce readiness now includes the ability to spot synthetic content, audit AI outputs, and escalate anomalies. Graduates who bring those habits to the job will be trusted with higher-stakes decisions; those who don’t will be a liability. Building a society-wide immune system against everyday AI scams isn’t just a civic duty—it’s the next competitive advantage in the talent market.
Going beyond K12
Understanding synthetic media can’t stop at the schoolhouse doors. Everyone, from young people to workers to consumers to decision-makers, will need to bring the right amount of healthy skepticism to navigating the internet and social media. Mayors assessing disaster footage, CEOs authorizing eight-figure wires after a voice-cloned Zoom, judges ruling on “video evidence,” parents coming across an alarming viral TikTok about a sudden outbreak of head lice at their children’s school—all now face the same verification gauntlet.
The skills we cultivate in classrooms—evaluating content’s reliability, evidence, agenda, and logic—must make their way into living rooms, city halls, boardrooms, newsrooms, and courtrooms. If the people steering our institutions can’t reliably separate fact from fabrication, the whole edifice wobbles. Equipping them to do so isn’t just risk mitigation; it’s the backbone of functional democracy and a trustworthy economy in the synthetic age.