At the Engage Summit in Charlotte, a special bonus episode of Higher Ed Pulse featured a powerful conversation between host Mallory Willsea and Stella Liu, Lead Data Scientist at Arizona State University. What unfolded was not just a story of AI innovation—it was a wake-up call.
Stella’s career spans from algorithm-driven delivery systems at Carvana to culture-shaping AI policies at ASU. Her transition from tech to higher ed has been marked by one constant: solve real problems, then find the right technology to make the solution scale. Now, she’s applying that mindset to one of the most urgent challenges in education—making AI ethical, accessible, and accountable.
Why Ethical AI Is a Moral Imperative in Higher Ed
In industries like retail or logistics, a flawed algorithm might inconvenience a customer. In higher ed, the same flaw can derail a student's future. Those are the stakes Stella lays out. She challenges leaders to move beyond AI buzzwords and ground their efforts in real evaluation: not just whether a model works, but whether it works fairly and safely.
Accuracy alone isn’t enough. Ethical AI must pass rigorous tests on several fronts (a simple bias check is sketched after the list):
- Bias: Are marginalized groups treated equitably?
- Safety: Could incorrect outputs cause harm?
- Transparency: Can humans understand and explain the results?
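To make the bias test concrete, here's a minimal sketch of the kind of automated check a team might run before deployment. Everything in it is illustrative: the predictions, the group labels, and the 0.2 threshold are invented, and a real fairness audit would look at more than one metric.

```python
# A minimal, hypothetical bias check: compare a model's positive-outcome
# rate across demographic groups. Data and thresholds are illustrative only.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: 1 = "flagged for outreach"; group labels are placeholders.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # the threshold is a policy choice, not a universal constant
    print("Warning: outcome rates diverge across groups; review before deploying.")
```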
In Liu’s words, ethical AI is not a bonus feature—it’s a “moral obligation.”
Inside ASU’s Ethical AI Framework
Stella and her team at ASU have built one of the most comprehensive ethical AI evaluation systems in higher ed. The three-part framework includes:
- Ethical AI Engine – Automatically tests models for fairness, accuracy, and bias before they’re deployed.
- Guard – A real-time monitoring system that alerts teams to ethical red flags as they arise.
- Safer – A data analytics tool that audits AI-student interactions to ensure compliance and integrity.
This multi-layered approach is a model other institutions can follow—even if they don’t have ASU’s scale.
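ASU hasn't published the internals of these tools, but the layered pattern itself is easy to sketch. The snippet below is purely hypothetical: a pre-deployment gate in the spirit of the Ethical AI Engine, a runtime wrapper in the spirit of Guard, and an audit record in the spirit of Safer. All names, thresholds, and policy lists are invented.

```python
# Illustrative sketch of a layered approach (pre-deployment gate, runtime
# monitor, audit log); it does not reflect the internals of ASU's tools.
from typing import Callable

FLAGGED_TOPICS = ("medical advice", "legal advice")  # hypothetical policy list

def predeploy_gate(model: Callable[[str], str],
                   eval_cases: list[tuple[str, str]],
                   min_accuracy: float = 0.9) -> bool:
    """Engine-style gate: refuse deployment below an accuracy bar."""
    correct = sum(model(q) == expected for q, expected in eval_cases)
    return correct / len(eval_cases) >= min_accuracy

def guarded(model: Callable[[str], str]) -> Callable[[str], str]:
    """Guard-style wrapper: flag ethical red flags in real time."""
    def wrapper(question: str) -> str:
        answer = model(question)
        for topic in FLAGGED_TOPICS:
            if topic in answer.lower():
                log_red_flag(question, answer, topic)
        return answer
    return wrapper

def log_red_flag(question: str, answer: str, topic: str) -> None:
    """Safer-style audit record; a real system would persist this for review."""
    print(f"[red flag] topic={topic!r} question={question!r}")
```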
Don’t Have a Big AI Team? You Can Still Start.
Stella’s message to smaller institutions was clear: You don’t need a team of engineers to begin practicing responsible AI. Before ASU’s current system existed, her team performed manual evaluations. They created test datasets, collaborated with real users, and hand-scored AI responses for quality and fairness.
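Manual evaluation can be as simple as a spreadsheet. Here's a hypothetical sketch of how hand-scored responses might be tallied; the rubric (1-5 scores for quality and fairness) and the cases are invented for illustration.

```python
# Hypothetical tally of hand-scored AI responses; the rubric dimensions and
# the records below are invented for illustration.
import statistics

# Each record: reviewers score one AI response on two rubric dimensions.
hand_scores = [
    {"case": "financial-aid-q1", "quality": 5, "fairness": 4},
    {"case": "financial-aid-q2", "quality": 3, "fairness": 2},
    {"case": "advising-q1",      "quality": 4, "fairness": 5},
]

for dimension in ("quality", "fairness"):
    scores = [row[dimension] for row in hand_scores]
    print(f"{dimension}: mean={statistics.mean(scores):.2f}, "
          f"min={min(scores)} (n={len(scores)})")
# A low minimum on either dimension points reviewers to the exact case to fix.
```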
She also suggests A/B testing as a low-lift, high-impact method. By comparing outcomes from AI-exposed groups to a control group, schools can gain measurable insights into impact—no proprietary tools required.
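Here's a minimal sketch of what that comparison could look like in code, using a standard two-proportion z-test. The enrollment counts are made up, and statsmodels is just one convenient way to run the test.

```python
# Sketch of the A/B comparison: did students who used the AI tool enroll
# (or persist, etc.) at a different rate than a control group?
# The counts below are invented; any outcome metric could stand in here.
from statsmodels.stats.proportion import proportions_ztest

enrolled = [230, 198]    # students with the target outcome in each group
group_sizes = [1000, 1000]  # group sizes: [AI-assisted, control]

z_stat, p_value = proportions_ztest(count=enrolled, nobs=group_sizes)
print(f"AI group: {enrolled[0]/group_sizes[0]:.1%}, "
      f"control: {enrolled[1]/group_sizes[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference is unlikely to be chance alone;
# it does not by itself prove the tool caused the change.
```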
What Higher Ed Can Learn from Industry
At Carvana, Stella optimized delivery times to improve customer experience. That might seem far from academic advising or admissions, but the core principle is the same: start with the real need. As she puts it, "AI should be a solution to a real problem—not a buzzword thrown into a tech stack without purpose."
Her advice for higher ed leaders? Build tools that serve students, not spreadsheets. Test everything. And never assume that just because something is “smart,” it’s also safe.
A Final Word: Evolve Responsibly
AI is changing the way institutions engage, enroll, and support students. But without ethical frameworks, those changes can reinforce inequity instead of dismantling it.
Stella Liu’s story is a masterclass in what’s possible when institutions take AI seriously, not just as a tool, but as a responsibility. For anyone working in enrollment, data science, or digital transformation, one takeaway is clear: Evolve responsibly.