At this year’s ASU+GSV Summit, the future of AI in education wasn’t just about tools, trends, and technical capabilities. Instead, a recurring—and welcome—theme was humanity. From the conversations sparked on the expo floor to the interviews recorded on-site, one message rang loud and clear: as AI reshapes our systems, anchoring in empathy and values is more important than ever.
Humanity Over Hype
Dustin Ramsdell, host of The Higher Ed Geek podcast, spent his time at ASU+GSV speaking with a variety of voices across the education innovation ecosystem. “Even though AI is a transformative and disruptive technology,” he shared, “now more than ever, we must anchor our work in empathy and our values.” The conversations he had weren’t just about the latest tech, but about how that tech can, and should, serve people.
That message carried into more formal sessions as well, like the conversation between Ray Lutzky, host of Mastering the Next, and Chris Agnew, Director of the Generative AI for Education Hub at Stanford’s Accelerator for Learning.
A Framework for Impact
Agnew offered a clear, three-part framework for how education leaders are currently approaching AI:
- Efficiency Gains – Saving educators time, streamlining administrative work, and freeing teachers to focus on what matters most.
- Improved Outcomes – Enhancing student learning through personalization, better assessment tools, and increased engagement.
- Reimagining Education – Rethinking the very structure of school, from daily schedules to the teacher’s role.
It’s not just about digitizing what we’ve done before—it’s about reimagining what learning could look like, with AI as a partner.
Beyond the Buzz: Tangible AI Applications
Dustin noted the increased visibility of tangible, diverse AI applications—from adaptive textbooks to workforce development platforms. These are no longer just conceptual innovations—they're live, tested, and delivering measurable value. That momentum was echoed by Agnew, who emphasized that we’re in the early stages of understanding AI’s potential but are already seeing powerful examples of teacher augmentation tools that allow educators to focus on connection, creativity, and care.
Ethical Groundwork and Equity Considerations
But Agnew didn’t shy away from the challenges. He acknowledged real concerns about academic integrity and AI misuse, noting that the research needed to fully understand student behavior is still developing. The goal, he suggested, should be to define a spectrum of AI use, from passive automation to active learning support, and to help both students and educators build literacy around these tools.
One area where the conversation was especially forward-looking was equity. While early indicators suggest AI could be an educational equalizer, particularly in lifting up lower-performing students, Agnew remains cautious. “I’m optimistic,” he said, “but not so bullish that I think in and of itself it can be the equalizer.”
He pointed to a pressing need: developing tools that are accessible and effective for all students, not just the top 5% who are already highly motivated or resourced. That means collecting better data, testing in diverse contexts, and co-designing solutions with communities often left out of the innovation cycle.
Start Small. Start Now.
As Dustin summed up in his reflections, AI is here, and it isn’t going anywhere. Whether you're a technologist, an educator, or a policymaker, the most important thing you can do is start. Start small. Start intentionally. But start now.
Because the future of AI in education isn’t just about what we can build—it’s about who we’re building it for.