Discussion about this post

Jeff Piontek

Alex, overall I like the article, but I feel there are major components missing in how its ideas truly align with implementing any program or initiative.

My new book, AI, EI, O, and YOU, is grounded in learning science and research, and applies directly to the concerns the article raises.

The article is ultimately not about AI. It is about misalignment between technology, pedagogy, and human development. The AI, EI, O, and YOU framework gives us a way to see where that misalignment occurs and how it becomes an equity fault line.

AI: Tool or substitute?

The article correctly warns that AI is being positioned as something teachers must “use” rather than something learning experiences should be designed around. From a learning sciences perspective, this is backwards.

AI is strongest at pattern recognition, feedback, and reducing cognitive load on routine tasks. It is weakest at supporting transfer, judgment, and “making meaning” without human mediation. When AI is introduced into classrooms where teachers have not been trained as pedagogues or designers of learning, it does not augment instruction. It replaces it.

This is exactly the danger you describe in under-resourced schools: AI tutors, AI instructors, AI counselors filling the gaps left by chronic underinvestment in people. That is not personalization. That is automation of low-expectation instruction.

In well-resourced classrooms, AI will be largely invisible because it is doing what tools should do: supporting the human work without defining it. In under-resourced classrooms, AI risks becoming the most visible thing in the room because it is being asked to do work it was never designed to do.

Framework implication:

AI must sit beneath pedagogy, not in front of it. If AI is the most noticeable instructional presence, equity is already being compromised.

EI: The hidden variable in adoption failure

The article’s discussion of South Korea and teacher readiness points directly to the EI layer, even if it is not named as such.

I have worked in South Korea and with partners there, and the learning sciences are unequivocal: emotional safety, trust, professional agency, and belonging are prerequisites for instructional change. Teachers who feel unprepared, surveilled, or deprofessionalized will not experiment, iterate, or take the risks required for deeper learning.

The South Korea rollout failed not because AI was ineffective, but because it violated the emotional and professional conditions teachers need to integrate any new practice. That same dynamic is playing out in under-resourced schools globally, where AI is often introduced as a compliance tool rather than a support.

Students feel this too. The research you cite showing reduced student connection when AI is used without visible reinvestment in relationships is a textbook EI issue as well. When AI creates emotional distance, learning suffers, regardless of efficiency gains.

Framework implication:

Any AI initiative that does not explicitly strengthen teacher-student relationships is undermining learning, even if test scores temporarily rise.

O: Oral language, discourse, and the pedagogy of poverty

This is where the article’s warning becomes most urgent.

Haberman’s “pedagogy of poverty” is fundamentally an oral language problem. Directive instruction, rote memorization, and compliance reduce opportunities for students to speak, reason, argue, and explain. Learning sciences show that oral language is not a soft skill; it is the foundation of comprehension, critical thinking, and long-term academic success.

The classrooms you imagine in affluent communities are rich in discourse: Socratic seminars, debate, project-based learning, and authentic communication. These are oral language environments.

The classrooms you fear in under-resourced communities are silent, efficient, and AI-mediated.

That contrast is not accidental. Oral language development requires time, training, and confidence on the part of teachers. AI can support practice, but it cannot replace dialogic instruction. When AI systems reduce talk time or replace discussion with individualized screen interaction, they directly reinforce the pedagogy of poverty.

Later, as students age, O expands from oral language into occupations and purpose. Students learn to see how knowledge connects to real work, community problems, and identity. AI that narrows learning to isolated tasks severs that connection.

Framework implication:

If AI reduces discourse, debate, storytelling, or collaborative problem solving, it should be treated as an equity risk, not an instructional upgrade.

YOU: Agency, identity, and transfer

The article’s concern about knowledge transfer maps cleanly onto the YOU component.

Learning that “works” only when the AI scaffold is present is not learning that transfers. Students must develop metacognition, self-regulation, and identity as thinkers. They must know when to use tools and when to rely on themselves.

Teachers, too, must retain agency. When AI is framed as the expert and teachers as implementers, professional identity erodes. When teachers are positioned as designers who decide when to deploy and withdraw AI, learning sticks.

The two-tiered future you describe is ultimately a YOU problem. Wealthier students will learn how to think with tools. Poorer students risk being trained to perform for tools.

Framework implication:

Equity depends on who holds agency. If AI holds it, gaps widen. If students and teachers hold it, AI becomes empowering.

Pulling it together

Your article is a warning about what happens when AI is layered onto an already inequitable system without regard for how humans learn.

Applied through the AI, EI, O, and YOU framework, the path forward becomes clearer:

• AI must support, not substitute for, skilled pedagogy

• EI must be strengthened through trust, coaching, and relational reinvestment

• O must be protected and expanded through discourse-rich instruction

• YOU must remain central through agency, identity, and intentional fading of support

If we ignore any one of these layers, AI will not equalize opportunity. It will calcify it.

The question is no longer whether we can train teachers to “use AI.”

The question is whether we are willing to design AI adoption around what we already know about learning, teaching, and human development.

That choice will determine whether AI becomes a bridge or a barrier.

Brad Knight

This resonates, especially the emphasis on coaching as developmental rather than evaluative or punitive. The Mississippi reading and math work that has been in the news of late comes to mind for me.

The caution I keep circling back to, though—and you know this well—is that transforming curriculum, assessment, and pedagogy at the same time is a dicey project. If any one of those gets too far out in front of the others, the whole thing can tip over. This is the particular challenge of AI, distinct, I think, from other education shifts in modern times.
