We’re back!
Season Two of explAInED opens with Michael Berry, Shaun Langevin, and Renee Langevin diving into the realities of AI in education. Michael shares how he’s using AI to translate global frameworks into consistent standards and proficiency scales, while the group reflects on the challenges of time, teacher perceptions, and the often-confusing EdTech landscape when trying to incorporate AI use into the curriculum.
The team also unveils the work they are doing outside of the podcast. Michael and Renee share how the explAInED team is holding monthly virtual webinars for parents and caregivers on all topics surrounding AI use. These sessions are recorded for broader access and will continue throughout the school year. The team also highlights its upcoming partnership with the Vermont Higher Education Collaborative, through which it will offer virtual workshops this fall for both work-based learning educators and school leaders seeking to design AI guidelines within existing policies.
Episode Title: What Is AI, Really? 
Episode Summary
In this special Caregiver Edition of the explAInED podcast, Renee Langevin, Michael Berry, and Shaun Langevin sit down to talk directly with parents and caregivers about one of the biggest questions in education right now: What is AI, really?
With humor, honesty, and plenty of real-world examples, the team explores what AI is (and isn’t), how it shows up in everyday life, and what families should know about its risks and possibilities. From toothbrush thought experiments to concerns about privacy, bias, and mental health, this episode focuses on awareness. The message: kids need strong human superpowers like empathy, adaptability, creativity, and critical thinking to thrive in an AI world, and parents can play a vital role by learning alongside them.
Whether you’re curious, skeptical, or just overwhelmed by all the AI talk, this conversation offers a parent-friendly, no-jargon guide to starting thoughtful conversations at home.
Episode Summary:
In this episode, the explAInED team is joined by Dr. Adam Phyall, Director of Professional Learning and Leadership at Future Ready Schools and co-host of the Undisrupted podcast. A former high school science teacher turned instructional technology leader and longtime advocate for equity-driven innovation, Adam shares how his journey into edtech began with a love for creative media, and how it continues to center joy, purpose, and student voice.
Adam describes AI as “this generation’s internet,” a transformative force that will amplify whatever practices and mindsets are already in place. When used with care, AI can extend student creativity, deepen inquiry, and support more responsive teaching. But when misused or rushed, it risks amplifying disengagement and inequity.
Adam also stresses that successful AI integration starts with people. Schools must involve students, families, and educators in the process, not just as stakeholders, but as co-designers. From his own kids’ summer writing projects powered by AI image generators to district-wide planning conversations, he models how to keep humans at the center of technology decisions. Adam shows us that with the right mindset and support educators can lead this moment with confidence, and even a little fun.
Mentions & Resources
Undisrupted Podcast with Adam Phyall and Carl Hooker
In this episode, we welcome Al Kingsley MBE, self-described “man of two hats,” combining over 30 years as an edtech CEO with decades chairing multi-academy trusts and advising the UK Department for Education. Al brings a grounded perspective: AI is “just a tool,” and its impact depends on clear purpose, context, and thoughtful use.
Reflecting on past edtech missteps, Al highlights how tools often fail when built without educator input or a clear “why.” He advocates for co-produced solutions developed with, not for, schools: tools that support pedagogy rather than distract from it.
Rather than centering AI, Al focuses on what makes us human: collaboration, creativity, and critical thinking. He calls for a shift from high-stakes exams to project-based learning aligned with SEL and UDL, arguing that future success hinges on “soft” skills, not test scores.
Lasting change, he notes, requires sustained funding, educator agency, and a new definition of success rooted in human flourishing. Al closes with a reminder that educators are stronger together, and their collective voice can shape systems that truly serve learners.
Mentions and Resources
The Secret Edtech Diary and The Awkward Questions in Education by Al Kingsley
In this episode of explAInED, the team welcomes Miriam Reynoldson, a Melbourne-based learning designer, educator, and PhD researcher whose interdisciplinary work blends education, sociology, and philosophy. Known for her thoughtful critique of generative AI in education, Miriam discusses why she chose to co-author an open letter refusing the mandated use of AI tools in teaching, and how her own experience with institutional AI pilots pushed her to speak out.
Miriam shares how her journey has shaped her view that autonomy and informed consent are critical when it comes to technology in education. The conversation explores how personal choice can be mistakenly interpreted as moral judgment, highlighting the importance of systems that foster nuanced, respectful engagement with AI instead of enforcing one-size-fits-all mandates.
The episode also explores what it means to push back against the “myth of inevitability,” offering a hopeful path forward grounded in collective solidarity and the empowerment of educators. From curriculum design to system-level thinking, Miriam and the hosts reflect on the importance of making space for both critical resistance and, for those who choose it, careful, intentional use of AI.
Resources & Mentions:
An open letter from educators who refuse to adopt GenAI in education
Janelle Shane, You Look Like a Thing and I Love You
Marc Watkins - AI is Unavoidable, Not Inevitable
Leon Furze - The Myth of Inevitable AI
Learn more about Miriam’s work at themindfile.substack.com (now moving to https://miriamreynoldson.com).
In this episode of explAInED, we sit down with Dr. Sara Baker, AI strategist, education advisor, and founder of VBI Education, for a thoughtful conversation about how schools can move from confusion to clarity when it comes to AI. Drawing on her decades of experience in education and edtech, Dr. Baker shares how her personal and professional journey, including launching teen AI camps and frameworks for educators, led her to develop a calm, strategic approach to AI integration.
Dr. Baker emphasizes the need for systems-level thinking, helping educators understand what AI is actually good for, how to assess tools critically, and how to put them to work in meaningful, iterative ways. She introduces her MAP and EERE frameworks as approachable entry points for educators at any level of tech comfort, grounded in the idea that AI can support, not replace, good teaching. Throughout the conversation, she challenges the logic of banning AI in schools, arguing that such decisions not only create inequities for students but also delay the deeper work of preparing them for a future shaped by these tools.
Together, the hosts and Dr. Baker discuss the danger of reactive decision-making and the overwhelm of "one more tech thing," advocating instead for thoughtful policies, support structures, and change management strategies that center teacher and student agency. As districts prepare for a new school year, this episode offers a timely reminder for schools approaching AI integration: it doesn't need to be perfect, but it does need to be intentional, and educators deserve better than to face it alone.
In this episode of explAInED, co-hosts Michael Berry, Renee Langevin, and Shaun Langevin examine the rapidly evolving landscape of AI in education. They explore Google’s surprise rollout of Gemini and Notebook LM to K–12 institutions, shifts in U.S. federal education funding, and the growing influence of big tech companies in shaping national AI professional learning efforts.
The conversation focuses on what these developments mean for Vermont schools and educators nationwide, raising critical questions about student privacy, equity, urgency, and the structure of public education itself. While the hosts highlight the risks and uncertainties, they also reflect on the potential of tools like Notebook LM to support inclusive learning, intervention strategies, and educator growth, when implemented with care and intention.
Striking a balance between caution and optimism, the episode offers practical insights for school systems working to respond thoughtfully to the pace of change without losing sight of their core values: student wellbeing, safety, and meaningful learning.
Victoria Hedlund, researcher, lecturer at Goldsmiths, University of London, founder of GenEd Labs.ai, and widely known as The Bias Girl, joins the explAInED podcast for a deep dive into the intersection of AI, equity, and education. Together, we explore how generative AI tools can both empower learners and risk deepening systemic inequities if not implemented thoughtfully.
Victoria shares insights on how schools with fewer resources or limited digital literacy may struggle to adopt AI tools equitably, especially as free versions of popular models often produce less bias-aware responses than their paid counterparts. The conversation examines how financial and systemic disparities between schools can shape students’ experiences with AI, and what can be done to mitigate those risks.
A highlight of the discussion is the Lesson Inspector, a tool Victoria is developing to help teachers critically analyze lesson plans through a lens of equity, neuro-affirming practices, and Universal Design for Learning (UDL). Victoria also challenges educators to consider their evolving “GenAI identity”, reflecting on how their choice of tools aligns with their teaching values and practice.
Throughout the episode, Victoria offers practical strategies for identifying and addressing bias in AI-generated content, from crafting precise prompts to fostering collaborative reflection among educators. She emphasizes the importance of leadership in creating supportive spaces where teachers can experiment with AI and share learnings as a community.
Listeners of this episode (and beyond!) are encouraged to think critically about the role of AI in education and the collective responsibility to ensure its ethical, inclusive use.
Resources and mentions
Universal Design for Learning (UDL)
In this episode, hosts Michael Berry, Renee Langevin, and Shaun Langevin talk with Leon Furze, a writer, researcher, former English teacher, and creator of the AI Assessment Scale. They explore how generative AI is reshaping education. Leon urges educators to move beyond vague "AI" labels and instead focus on specific tools and impacts. He contrasts Australia’s national Framework for Generative AI in Schools with the fragmented, vendor-driven approach in the U.S., warning that outsourcing professional learning to big tech risks eroding teacher expertise and increasing attrition.
Leon introduces his "digital plastic" metaphor, comparing AI-generated content to synthetic plastic. Like plastic, AI can be innovative but also carries environmental and ethical costs that are often pushed onto individuals instead of addressed through systemic regulation. He highlights the need for schools to go beyond surface-level AI literacy and engage students in meaningful conversations about bias, data extraction, and environmental impact.
On assessment, Leon explains how the widely used AI Assessment Scale, which he co-authored, helps educators design learning that intentionally balances AI-supported and AI-free tasks. This approach encourages authentic, thoughtful use of the technology.
Leon highlights how AI is enhancing inclusion through assistive tools like speech-to-text powered by OpenAI Whisper and Be My Eyes, which uses GPT-4 with camera glasses to describe surroundings for blind users. He also notes that neurodivergent students are using AI-based planners to help manage cognitive load and stay organized, showing AI’s potential to support diverse learner needs.
This rich conversation reinforces the need for educators to lead ethical AI integration and to design learning experiences that truly serve students.
Mentions and Resources
AI Assessment Scale (Leon Furze)
Teaching AI Ethics series (Leon Furze)
Australian Framework for Generative AI in Schools
Content Authenticity Initiative
Episode Summary
On our first live taping of explAInED, we are on location at Hula Lakeside for the aiVermont 2nd Annual Education and AI Summit. This episode features three rich conversations about the evolving role of artificial intelligence in Vermont’s schools and communities. We begin with Burlington High School’s ethics debate team students Iris and Mac, who share thoughtful perspectives on AI as a force for change, its ethical challenges, and the need for teachers to guide rather than police its use. Next, we talk with the founders of aiVermont, Chris Thompson, Denise Shekerjian, and Marc Natanagara about their journey in building statewide AI literacy, their focus on working through teachers to reach communities, and how Vermont is setting an example in ethical AI conversations. Finally, we hear from Josiah Raiche, Vermont’s Chief Data and AI Officer, and Josh Blumberg, Educational Technology Programs Manager at the Vermont Agency of Education, as they discuss Vermont’s AI strategy, efforts to support schools with policies and guidance, and the importance of transparency, responsibility, and human-centered approaches in using AI tools.
Whether you’re an educator, policymaker, parent, or student, now is the time to engage in shaping how AI is understood and used in schools. Explore how you can help build AI literacy in your community, advocate for thoughtful policies, and support ethical, human-centered AI integration.
Resources
aiVermont – https://aivermont.org
Artificial Intelligence in Vermont – https://digitalservices.vermont.gov/ai
Michelle Ament, co-founder of the Human Intelligence Movement, joins explAInED to discuss how we can keep humans at the center in an AI-driven world. She shares the movement’s framework focused on three essential domains of human intelligence: EQ (emotional), IQ (cognitive), and AQ (adaptability), with a strong emphasis on adaptability as the most underdeveloped but critical skill for the AI era.
Michelle calls for a shift from assessment of learning to assessment FOR learning, emphasizing the need to measure skills like collaboration, resilience, and communication. She urges leaders to prioritize curiosity rather than compliance, asking not “What can AI do?” but “How does this build connection, thinking, and curiosity?”
Throughout the episode, Michelle stresses the urgency of building cross-sector momentum for human-centered education. It's not enough to tweak systems from within; we need a broader societal conversation about how to elevate human intelligence alongside artificial intelligence.
Listen in to discover how reframing our approach to AI begins with reclaiming our commitment to human growth, connection, and agency. Join The Human Intelligence Movement and be part of a growing community committed to ensuring that in the age of AI, what makes us human remains at the center of how we teach, lead, and learn.
Resources
The Human Intelligence Movement
Champlain College President Alex Hernandez joins the explAInED podcast for a wide-ranging conversation on preparing students for a world increasingly shaped by artificial intelligence. From AI access and workforce readiness to ethics and creativity, Hernandez offers a grounded, human-centered perspective on higher education’s role in navigating technological transformation.
We discuss Champlain's pioneering partnership with Anthropic to bring Claude to campus, the creation of faculty fellowships to experiment with AI in the classroom, and the ways students are responding, from anxiety about cheating to excitement over startups. Hernandez emphasizes the critical importance of combining AI fluency with human skills like collaboration, communication, and critical thinking, especially in a future where students must be more than just tech-literate. Champlain’s approach is clear: give students access, space, and the chance to engage deeply with technology and each other.
Episode bonus: a surprise pitch for a new Bigfoot-themed mascot. Former guests, any guesses whose idea that was?
Makiri Pugh joins explAInED with a powerful case for bringing systems thinking to AI in education. Makiri reframes AI as an opportunity to scale wisdom, not offload judgment. He advocates for educator-designed AI that reflects local values and context, and shares how district-owned large language models can help personalize learning while reinforcing culture and priorities.
Makiri explains why lesson planning is the best on-ramp for human-centered AI use. When done well, it frees teachers to focus on pedagogy and relationships. He also explores how AI can support new teachers through scripting and rationale, reduce burnout, and improve retention across systems.
The conversation turns to the importance of grounding AI adoption in ethics and inclusion. Makiri urges districts to create AI ethics and implementation committees, include skeptics in the process, and foster open conversations about risk and responsibility. With the pace of AI accelerating, he cautions against traditional six-month pilot mindsets and encourages rapid iteration aligned to clearly defined needs.
Makiri challenges us to bring AI use into the open. As he puts it, the future is here—what matters is who’s shaping it.
Episode Summary
“AI is not replacing cognition, it’s extending it.” Our latest guest, Patrick Dempsey, educational thought leader and Director of Teaching and Learning at Loyola University Maryland, reframes the common belief that AI is disrupting education by arguing that it’s simply revealing long-standing systemic cracks. Rather than resisting the change, he suggests it’s a moment for reflection, reimagination, and strategic growth.
We also discuss the critical difference between AI literacy and AI fluency, with Patrick emphasizing that true fluency comes when educators use AI to amplify good pedagogy. He shares strategies for working with resistant educators, including the “awareness ladder” approach and how marketing funnels can actually serve as tools for mindset shifts.
At the student level, Patrick reflects on how learners are already blending tools and workflows in ways we might initially dismiss, but which actually reflect deeper cognitive engagement. He calls for reimagining assessment by identifying the process of learning as students develop new ways of thinking, creating, and problem-solving with AI. In a time of rapid change and real uncertainty, Patrick reminds us that the goal should be to understand how AI mirrors, amplifies, and challenges our deepest assumptions about learning.
Mentions and Resources
Patrick Dempsey | Loyola University
Patrick Dempsey | The Second Draft (Substack)
AI Policy vs. AI Literacy | Patrick Dempsey LinkedIn
This week we’re joined by Michael Morrison, CTO of Laguna Beach Unified School District, where AI is not just a tool but a catalyst for trust, creativity, and human connection. Morrison shares the story behind AI Trust You, a free Google extension that helps students disclose how they’ve used AI in their work, while giving teachers clear, customizable guidance. What began as a response to a culture of guilt and suspicion around AI use quickly grew to over 148,000 installs in just three months, replacing "gotcha" culture with shared language and mutual respect.
Morrison also reflects on using AI as a thought partner—not just to boost productivity, but to deepen empathy and creativity. From staff “lunch and learns” to third-grade AI art projects and critically analyzing bias in AI-generated units, LBUSD models what it means to put human connection at the center of innovation.
If you’re exploring thoughtful, relational ways to bring AI into schools, this episode offers a reminder that meaningful change starts with curiosity, play, and putting people—not technology—at the center.
Mentions and Resources:
AI Trust You: Free AI use disclosure tool for students and teachers
Episode Summary:
In the latest episode of explAInED, we’re joined by Giselle Fuerte, an adult learning architect, youth advocate, and founder of the interactive digital program Being Human with AI. Giselle shares how she’s turning her focus to a critical group in the AI conversation: middle school students.
Giselle is helping kids and the adults around them understand what AI is, how it works, and how to stay safe and empowered while using it. She makes the case that AI is “smart-dumb” (a tool that mimics us at scale but can’t think or feel) and warns of the psychological and social dangers that come when kids confuse a chatbot’s persuasive output with emotional reality.
Giselle offers practical, accessible ways to demystify AI for learners of all ages, from teaching kids to spot manipulative engagement tactics and bias, to reframing AI as a co-creative tool, not a wizard behind the curtain. Join us for this engaging episode!
Episode Summary:
In this episode of explAInED, we are thrilled to welcome back Adam Sparks, co-founder and CEO of Short Answer, to discuss how AI is reshaping the landscape of writing instruction in K-12 education. Sparks explores the dual impact of AI — its potential to both elevate critical thinking and unintentionally homogenize student voices. He also tells us about some recent and upcoming AI-infused features in Short Answer, including Quick Write, which is designed to provide targeted, AI-driven feedback on student writing without sacrificing individual expression.
The conversation expands to the challenges of integrating writing practice across all subject areas, including those where teachers may be hesitant to incorporate it. Sparks underscores the importance of making writing instruction accessible and relevant across disciplines as AI continues to permeate classrooms.
We also examine the evolving concept of “AI literacy,” with Sparks advocating for a focus on foundational knowledge and skills rather than standalone AI competencies. Co-host Mike Berry proposes a shift to the term “AI informed” to more accurately convey the goal of fostering understanding without implying expertise.
Tune in to explore how we can leverage AI to enhance student learning while preserving the unique voices that make writing so impactful.
Mentions and Resources:
Study from Cornell University: Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays by Jinsook Lee, AJ Alvero, Thorsten Joachims, René Kizilcec
Episode Summary:
In this reflective episode, hosts Michael Berry, Shaun Langevin, and Renee Langevin revisit key moments from guests of the podcast, delving into themes that have shaped their conversations around AI, education, and lived experiences. The episode also touches on the hosts' desire for more reflection time, the significance of diverse guest voices, and the persistent issue of AI bias and the lack of teacher input in educational AI development.
Episode Summary:
In this episode of explAInED, we’re joined by Kunal Dalal—former teacher, principal, startup founder, and current Microschool Strategist, Agentic Parentologist, and AI strategist with the Orange County Department of Education. Kunal shares how his journey from alternative schools to edtech leadership has shaped a human-centered vision of what AI could mean for learning, parenting, and the future of schools.
Kunal shares a deeply human vision for education’s future—one rooted in community, agency, and reflection. He urges us not to waste this moment by automating what's already broken; the real promise of AI lies in scaling personalization, re-centering the sacred relationship between learner and teacher (or parent), and inviting students to co-learn and co-lead.
From dream journaling with his child to launching a 600-student AI summit, Kunal shows how generative AI—when used in community—can be a tool for connection, agency, and joy.
Resources & Mentions:
Student AI Convening – A 600-student event in Orange County, CA focused entirely on student voice and AI
Micro-school strategy – A call to edtech companies to support scalable, student-centered personalization
Episode Summary:
In this episode of explAInED, we talk with Dr. Torrey Trust, professor of learning technology at UMass Amherst, about how educators can approach AI with clarity and intention. She challenges the term “artificial intelligence” itself, arguing it misleads students into overestimating what these tools can do.
Dr. Trust shares how simulation-based learning—like choose-your-own-adventure PD—can help educators engage meaningfully with AI. She also cautions against overreliance on chatbots in K–12 classrooms, noting the importance of guided, critical use. Many of her students arrive fearful or uncertain about technology, and she emphasizes that resisting AI entirely only widens the digital divide.
Throughout, Dr. Trust advocates for a balanced, informed approach that supports creativity, encourages thoughtful decision-making, and ensures students are prepared for a future where AI is everywhere—whether we’re ready or not.
Resources & Mentions:
Assigning AI: Seven Approaches for Students, with Prompts by Ethan R. Mollick and Lilach Mollick
Environmental cost of AI — raising awareness about resource consumption in casual usage
Turnitin and AI detection — historical parallels in student fear and surveillance