Tomorrow is Today: The classroom I'm afraid of
There's a fundamental question about classroom design that isn't getting enough attention as we forge a path through the age of AI.
The hopeful case for AI in education is that it will be the great equalizer, finally delivering personalized learning to every student regardless of ZIP code. That outcome is possible—but it’s far from guaranteed. Without deliberate investment in the human side of education, I think the opposite is more likely.
When I close my eyes and imagine what the classrooms in the best-resourced communities in America look like a decade from now, I don’t see a bunch of AI. In fact, I think when you walk into those classrooms you’ll struggle to find it at first.
You’ll see people-centered classrooms. Students writing with pen and paper. Debates. Socratic seminars. Career-connected project-based learning that empowers students to apply what they’re learning while building stuff for their community. You’ll see students developing the durable human skills like critical thinking, communication, collaboration, and creativity. The skills that will matter most in a world where AI can handle so much of the routine cognitive work.
I worry that classrooms in low-income communities won’t look like that.
A decade from now, I worry that those classrooms are chock full of every manner of AI instructor, AI tutor, AI counselor, and AI surveillance tool. In those classrooms, the AI won’t be hard to find. It will be everywhere, doing the work that humans should be doing.
The divide I’m describing is already emerging. According to a RAND study from October 2024, lecture-style instruction, as opposed to classroom discussion, was far more common in the poorest schools than in the wealthiest ones. Students in well-resourced schools are encouraged to debate, question, and explore. Students in under-resourced schools are trained to perform on standardized tests. Education researcher Martin Haberman calls this the “pedagogy of poverty”: directive instruction that emphasizes rote memorization and strict compliance is most pervasive in schools that serve our most vulnerable students.
My fear is that AI will automate and entrench this divide, not bridge it.
This is the equity crisis hiding in plain sight. Its undercurrent flows beneath any and every conversation about AI + education.
A clarification about aiEDU’s mission
I have a complex relationship with AI. People often assume, wrongly, that aiEDU’s mission is to hasten the implementation of AI into classrooms.
That’s not what we do.
Our mission is to hasten the transformation of school to ensure every student is AI-ready. There’s a crucial difference. AI may have some role in helping us get there, but it is at best second fiddle to the far more important work: investing in our teachers and educators through professional learning, coaching, and whatever other support they need to integrate learning experiences that build durable skills into the subjects they teach.
What the research actually shows
For those who are so excited about the potential of AI that they would prioritize investment in technology over the teacher workforce, I’d point to South Korea.
In 2023, South Korea announced an ambitious plan to integrate AI deeply into public education, developing “AI digital textbooks” that would personalize learning for every student. The government invested over $490 million in 2024 alone.
The plan was rolled back after just four months.
Adoption rates collapsed from 37% to 19%. Why? Because they tried to implement AI at scale without making the requisite investment in professional learning for teachers. In one teacher union survey, 98.5% of teachers reported they weren’t adequately prepared. The program was pushed through, in the words of critics, “without sufficient groundwork.”
The lesson isn’t that AI in education is bad. The lesson is that technology without teacher investment is a recipe for failure, even when your explicit goal is accelerating AI adoption. Those who want to move fast on AI should be the most concerned about building teacher capacity first.
A systematic review of intelligent tutoring systems found that while AI tutoring outperformed “business-as-usual” classes, those effects were “mitigated when compared to non-intelligent tutoring systems.” A simpler computer program, or well-designed human instruction, often achieved similar results. AI’s advantage, it turns out, diminishes when compared against high-quality human teaching.
Researchers also identified a problem with knowledge transfer. One study found that students using AI tutors showed “limited retention of skills learned with AI support—when tested on new problems without the chatbot, they struggled to transfer their knowledge.” Learners performed well when the scaffold was in place, but the learning didn’t stick without it.
This is the difference between AI as a crutch and AI as a complement. Students who rely on AI for answers may never fully internalize the content. Only a skilled teacher can navigate that difference. Only a teacher can know when to deploy AI support and when to pull it back; only a teacher can ensure that engagement translates into actual learning.
There is of course another story about AI, one where it serves as a tool to complement and empower teachers instead of replacing them.
A 2025 study by the Walton Family Foundation and Gallup found that teachers who use AI weekly save an average of 5.9 hours per week—roughly six weeks over the course of a school year. More importantly, the research shows what teachers do with that time. Fifty-five percent report that AI has given them more time to interact directly with students. They’re reinvesting the gains from AI in the human work that technology can’t replicate: mentorship, relationship-building, the emotional availability that comes from not grading papers until midnight.
One special education teacher in Michigan described getting back “an entire planning day”—time she previously had to steal from her sleep schedule. She’s not using AI to do less. She’s using it to be more present.
This is the version of AI in education I believe in: AI as a tool serving the professional, freeing teachers to do more of what only humans can do.
There’s an important catch I should note: The same research found that 50% of students feel less connected to their teachers when AI is used in the classroom. The efficiency gains only translate to better outcomes when teachers visibly reinvest that time in face-to-face interaction. When AI is used to grade faster without that reinvestment, students perceive automation as distance.
The two-tiered future I worry so much about
This is why the equity question is so urgent. Well-resourced schools will use AI to free their teachers for deeper work. Under-resourced schools without the training, coaching, and institutional support risk using it as a substitute for the human investment they’ve never received.
One researcher put it starkly: “Wealthier schools might use AI as a supplement to high-quality education, while poorer schools may rely on it as a replacement, leading to unequal educational experiences.”
Why does this happen? Because implementation capacity isn’t distributed equally. Well-funded schools can afford to train staff, hire technology specialists, and integrate AI thoughtfully. Amidst teacher shortages, budget constraints and pressure to show quick results, under-resourced schools are at risk of adopting AI without the required support to prevent ineffective or even harmful use.
Furthermore, schools under strict state monitoring due to low test scores have zero tolerance for what Michael Fullan calls the “implementation dip” that often comes with pedagogical change. Any new approach risks temporarily lowering scores, even approaches that lead to deeper learning in the long run. AI will be seen as the “safe” intervention, promising efficiency without the messy work of transforming instruction.
We saw this pattern during COVID. Even in countries where access to devices and internet was relatively equitable, achievement gaps between wealthy and poor students widened during remote learning.
The investment we actually need
“The next digital divide in education,” as Brookings puts it, isn’t inevitable. But closing the gap requires a frank conversation about what should be our focus over the next 5 years: we need to invest in the human side of the equation.
We need to invest in sustained, not drive-by, professional learning. In coaching that supports teachers in redesigning learning experiences around durable human skills. In giving under-resourced schools the same human capital investments that wealthy schools take for granted.
This belief undergirds all of aiEDU’s work, and over the past two years, we’ve gradually homed in on smaller, focused PD that prioritizes depth over breadth.
But I’d be lying if I said we have this figured out. The districts that most need this work are often the ones least able to pay for it. Earned revenue is growing, but we’re still wrestling with how to reach the schools that can’t write a check. That tension won’t resolve easily. The onus rests on the leaders who direct capital to ensure that the gargantuan investment flowing into AI systems is matched by a requisite investment in people: the educators whose expertise and leadership will be the linchpin of preparing every single student for the future.
As Martin Luther King, Jr. put it in 1967, “We are now faced with the fact that tomorrow is today. We are confronted with the fierce urgency of now.”
And in that same speech, he said, “We must rapidly begin the shift from a thing-oriented society to a person-oriented society.”



Alex, overall I like the article, but I feel it is missing major components around how any of this truly aligns to implementing a program or initiative.
My new book, AI, EI, O and YOU, is grounded in learning science and research, and it applies directly to the concerns the article raises.
The article is ultimately not about AI. It is about misalignment between technology, pedagogy, and human development. The AI, EI, O, and YOU framework gives us a way to see where that misalignment occurs and how it becomes an equity fault line.
AI: Tool or substitute?
The article correctly warns that AI is being positioned as something teachers must “use” rather than something learning experiences should be designed around. From a learning sciences perspective, this is backwards.
AI is strongest at pattern recognition, feedback, and reducing cognitive load on routine tasks. It is weakest at supporting transfer, judgment, and “making meaning” without human mediation. When AI is introduced into classrooms where teachers have not been trained as pedagogues or designers of learning, it does not augment instruction. It replaces it.
This is exactly the danger you describe in under-resourced schools: AI tutors, AI instructors, AI counselors filling the gaps left by chronic underinvestment in people. That is not personalization. That is automation of low-expectation instruction.
In well-resourced classrooms, AI will be largely invisible because it is doing what tools should do: supporting the human work without defining it. In under-resourced classrooms, AI risks becoming the most visible thing in the room because it is being asked to do work it was never designed to do.
Framework implication:
AI must sit beneath pedagogy, not in front of it. If AI is the most noticeable instructional presence, equity is already being compromised.
EI: The hidden variable in adoption failure
The article’s discussion of South Korea and teacher readiness points directly to the EI layer, even if it is not named as such.
I have worked in, and with partners in, South Korea, where the learning sciences are unequivocal: emotional safety, trust, professional agency, and belonging are prerequisites for instructional change. Teachers who feel unprepared, surveilled, or deprofessionalized will not experiment, iterate, or take the risks required for deeper learning.
The South Korea rollout failed not because AI was ineffective, but because it violated the emotional and professional conditions teachers need to integrate any new practice. That same dynamic is playing out in under-resourced schools globally, where AI is often introduced as a compliance tool rather than a support.
Students feel this too. The research you cite showing reduced student connection when AI is used without visible reinvestment in relationships is a textbook EI issue as well. When AI creates emotional distance, learning suffers, regardless of efficiency gains.
Framework implication:
Any AI initiative that does not explicitly strengthen teacher-student relationships is undermining learning, even if test scores temporarily rise.
O: Oral language, discourse, and the pedagogy of poverty
This is where the article’s warning becomes most urgent.
Haberman’s “pedagogy of poverty” is fundamentally an oral language problem. Directive instruction, rote memorization, and compliance reduce opportunities for students to speak, reason, argue, and explain. Learning sciences show that oral language is not a soft skill; it is the foundation of comprehension, critical thinking, and long-term academic success.
The classrooms you imagine in affluent communities are rich in discourse: Socratic seminars, debate, project-based learning, and authentic communication. These are oral language environments.
The classrooms you fear in under-resourced communities are silent, efficient, and AI-mediated.
That contrast is not accidental. Oral language development requires time, training, and confidence on the part of teachers. AI can support practice, but it cannot replace dialogic instruction. When AI systems reduce talk time or replace discussion with individualized screen interaction, they directly reinforce the pedagogy of poverty.
Later, as students age, O expands from oral language into occupations and purpose. Students learn to see how knowledge connects to real work, community problems, and identity. AI that narrows learning to isolated tasks severs that connection.
Framework implication:
If AI reduces discourse, debate, storytelling, or collaborative problem solving, it should be treated as an equity risk, not an instructional upgrade.
YOU: Agency, identity, and transfer
The article’s concern about knowledge transfer maps cleanly onto the YOU component.
Learning that “works” only when the AI scaffold is present is not learning that transfers. Students must develop metacognition, self-regulation, and identity as thinkers. They must know when to use tools and when to rely on themselves.
Teachers, too, must retain agency. When AI is framed as the expert and teachers as implementers, professional identity erodes. When teachers are positioned as designers who decide when to deploy and withdraw AI, learning sticks.
The two-tiered future you describe is ultimately a YOU problem. Wealthier students will learn how to think with tools. Poorer students risk being trained to perform for tools.
Framework implication:
Equity depends on who holds agency. If AI holds it, gaps widen. If students and teachers hold it, AI becomes empowering.
Pulling it together
Your article is a warning about what happens when AI is layered onto an already inequitable system without regard for how humans learn.
Applied through the AI, EI, O, and YOU framework, the path forward becomes clearer:
• AI must support, not substitute for, skilled pedagogy
• EI must be strengthened through trust, coaching, and relational reinvestment
• O must be protected and expanded through discourse-rich instruction
• YOU must remain central through agency, identity, and intentional fading of support
If we ignore any one of these layers, AI will not equalize opportunity. It will calcify it.
The question is no longer whether we can train teachers to “use AI.”
The question is whether we are willing to design AI adoption around what we already know about learning, teaching, and human development.
That choice will determine whether AI becomes a bridge or a barrier.
This resonates, especially the emphasis on coaching as developmental rather than evaluative or punitive. The Mississippi reading and math work that has been in the news of late comes to mind for me.
The caution I keep circling back to, though (and you know this well), is that transforming curriculum, assessment, and pedagogy at the same time is a dicey project. If any one of those gets too far out in front of the others, the whole thing can tip over. This, I think, is the particular challenge of AI, one that sets it apart from other education shifts in modern times.