The AI Engage Summit is a free, virtual experience built to help higher ed leaders actually do something with AI. Over two afternoons, you’ll hear from peers, see practical demos, and walk away with ideas you can use immediately — no travel required, no cost to attend. If you’re ready to move AI from “interesting” to “impactful,” this is the place to be. Register now so you don’t miss out.
About the Episode
Carrie Phillips sits down with Irene Borys from Augustana College to explore the emotional and cultural dynamics of AI adoption in higher education. Rather than focusing on the tech itself, they delve into the deeper fears, questions, and resistance that AI stirs up — and how higher ed leaders can guide their institutions through this change with empathy, intentionality, and strategy. This conversation is a must-listen for anyone navigating digital transformation in academia.
Key Takeaways
- Higher education approaches AI adoption more intentionally than the corporate world, emphasizing ethical implications and long-term impacts on learning.
- Fear and resistance to AI often stem from a place of deep care for students, teaching, and the craft of learning — not from reluctance to innovate.
- Clear guardrails and purpose-built tools, like branded GPTs, help increase user trust and confidence in AI technologies.
- Students are also navigating AI uncertainty and ethics, providing an opportunity for shared learning and dialogue across campus.
- Successful AI implementation depends on leadership that listens first and asks better questions — not just “how can we use AI?” but “where would support help you do your best work?”
- Thoughtful change management, grounded in mission alignment, is key to navigating digital transformation in a human-centered way.
Episode Summary
What makes the higher ed response to AI different from corporate settings?
Unlike the corporate world’s fast-paced, efficiency-first adoption of AI, higher education is approaching the technology with a deep sense of intentionality. As Irene Borys shares, her transition from agency life to academia revealed a more philosophical and values-driven culture — one where stakeholders across campus, from faculty to students, ask critical questions about authorship, learning integrity, and the future of their craft. Higher ed doesn’t rush into new tools; it pauses to consider long-term implications, aligning adoption with mission and educational purpose.
This reflective approach isn’t a liability — it’s a strength. Augustana has formed cross-campus working groups to explore AI’s potential from both policy and practical standpoints. The result? Clear, collaborative statements on AI use and structured guidance for implementation. The goal is not simply to move fast but to move forward with care and clarity.
How can leaders reframe AI skepticism and fear on campus?
Many higher ed professionals interpret hesitation around AI as fear or resistance — but Irene urges us to look deeper. More often than not, those reactions come from a place of care. Faculty and staff are deeply invested in student learning outcomes, academic integrity, and the value of human-centered education. When AI is perceived as a threat to those values, hesitation is natural.
The key, then, is reframing — seeing skepticism not as opposition but as an invitation to engage. Irene emphasizes the importance of setting “clear sandboxes,” or well-defined boundaries, for AI experimentation. At Augustana, branded GPTs were introduced not as replacements but as tools for alignment and support — offering feedback and suggestions while respecting the human touch. When people understand the purpose and limitations of a tool, trust grows. And when they see it as a “thought partner” rather than a threat, real transformation begins.
How are students reacting to AI, and what does that mean for institutions?
It’s easy to assume students are all-in on AI — digital natives who are eager to adopt whatever’s new. But that’s not the whole story. According to Irene, students are just as conflicted and curious as faculty and staff. Some are excited and experimenting. Others are deeply concerned about ethics, fairness, and what AI means for the learning process.
This shared grappling opens the door for dialogue — and positions institutions as co-learners rather than top-down enforcers. Irene highlights the importance of guiding students not only in how to use AI but also in how to think about AI. That means giving them frameworks for critical engagement, not just tools for productivity. It’s not just about skills — it’s about shaping thoughtful, responsible digital citizens.
What does practical AI adoption look like on a college campus?
One of the most impactful tools Irene’s team has introduced is the branded GPT — a customized AI assistant grounded in Augustana’s brand voice, strategic priorities, and communications data. Rather than pumping out generic content, it helps campus communicators evaluate and align their messaging. It doesn’t replace their work; it enhances it.
What’s more, the tool’s design — limited in scope and transparent in output — fosters trust. Irene notes that the most meaningful feedback came from admissions staff who said the tool boosted their confidence in communication. That’s the win: not just adoption, but empowerment.
The larger point? Tools only work when they’re accompanied by thoughtfulness and structure. And successful AI adoption doesn’t start with “here’s the latest tech” — it starts with “how can we support your goals better?”
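For readers curious what sits under the hood of a tool like this, a branded GPT is conceptually a general-purpose chat model wrapped in institution-specific instructions and asked to critique rather than generate. The sketch below is a hypothetical illustration using the OpenAI Python SDK; the guideline text, model choice, and function name are placeholders, not Augustana's actual configuration.

```python
# Hypothetical sketch: a "brand-aligned reviewer" built from a standard
# chat-completion call. The system prompt carries the institution's voice
# and priorities; the assistant's job is to give alignment feedback on a
# draft, not to write copy from scratch. All specifics here are invented
# for illustration (not Augustana's real prompt, model, or data).
from openai import OpenAI

BRAND_GUIDELINES = """
Voice: warm, student-centered, plainspoken.
Priorities: academic rigor, affordability, community.
Role: review the draft and suggest alignment edits; do not rewrite it wholesale.
"""

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def review_draft(draft: str) -> str:
    """Return feedback on how well a draft matches the brand guidelines."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": BRAND_GUIDELINES},
            {
                "role": "user",
                "content": "Evaluate this draft for brand alignment and "
                           f"suggest improvements:\n\n{draft}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_draft("Apply now!! Limited spots left, don't wait!!!"))
```

The design choice this sketch illustrates mirrors what Irene describes: a narrow, transparent scope (review, don't replace) is what makes people willing to trust the tool.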
What advice does Irene have for higher ed leaders just starting their AI journey?
Irene’s advice is refreshingly human: start by asking better questions. Don’t dump AI tools on your team and expect them to adapt. Instead, ask: Where do you need support? What tasks could benefit from a smarter assistant? Let those needs guide your exploration. That’s how trust is built — through listening, experimenting together, and iterating based on real feedback.
She also highlights the importance of aligning with mission. When you’re clear on your institutional purpose — preparing students for thoughtful, impactful lives — you’re better equipped to evaluate how AI fits into that vision. Tools are temporary; values are enduring.
And finally, remember that adoption takes time. Higher ed isn’t built for six-week sprints. But that slower pace can be a strength when it allows for more inclusive, ethical, and human-centered innovation.
Enrollify is produced by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.


