About the Episode
Today, listeners get an inside look at how one institution is moving from AI experimentation to structured implementation. Mallory sits down with Joe Manok, Vice President of Advancement at Clark University and founder of GlobalPhilanthropy.ai, to unpack how his team is developing seven purpose-built AI agents with governance, budgets, and human oversight built in from day one.
Rather than chasing hype around AI in higher education, Joe outlines a disciplined, ethical, and ROI-focused approach to deploying AI agents in advancement. This episode is a must-listen for enrollment marketers, advancement leaders, and higher ed innovators looking for a practical roadmap to scale impact without sacrificing trust.
Key Takeaways
- Start with the problem, not the technology. Clark identified workflow bottlenecks before selecting AI tools.
- AI agents need defined roles and governance. Each agent has a clear scope, human oversight, and ethical guardrails.
- Human relationships remain central. AI enhances infrastructure, but people make final decisions.
- Ethics and compliance come first. Clark vetted AI tools against CASE guidelines, AFP ethics codes, GDPR, and institutional values.
- ROI must be measurable. Success is defined through efficiency gains, portfolio velocity, inclusivity, and fundraising impact.
- Adoption requires strategy. Scarcity pilots and intentional onboarding improved internal buy-in.
- AI readiness matters. Clark developed an AI and ethics maturity framework to guide implementation in advancement.
Episode Summary
How Should Institutions Approach AI in Higher Education?
Joe makes it clear: don’t start with AI. Start with friction. At Clark, the journey didn’t begin with a flashy announcement about automation. It began with a simple but powerful question from the university president: Is this the best campaign we can build?
That question sparked reflection. Instead of defaulting to existing fundraising structures and tools, Joe and his team analyzed operational bottlenecks. Research reports were taking 10+ hours to compile. Internal information was buried in massive campaign toolkits. The real issue wasn’t effort—it was structural inefficiency.
This mindset shift reframed AI in higher education from a novelty to an infrastructure solution. Rather than “AI everywhere,” Clark adopted a disciplined approach—focused pilots, defined use cases, and gradual scaling.
What Do AI Agents Actually Do in Advancement?
Clark University is developing seven AI agents with human-like titles, such as research coordinator and stewardship officer. The naming isn't gimmicky. It clarifies scope and complexity, helping staff understand each agent's "role" and its limitations.
For example, a research agent uses deep research functionality to generate first-draft donor profiles—complete with citations and sources. Instead of staff spending hours browsing and compiling information, they receive structured drafts in minutes. A human researcher reviews, edits, and approves everything before it’s used.
With agentic workflows, the system can process hundreds or thousands of names in sequence, exporting structured documents and notifying staff when ready. Nothing goes directly to leadership or donors without human review. Relationships remain human. Infrastructure gets smarter.
How Does Clark Handle Ethics, Governance, and Risk?
Joe didn’t leave ethics as an afterthought. Clark built guardrails before expanding access. Every AI tool is vetted outside the advancement division to ensure objectivity and institutional alignment.
The team cross-referenced tools against CASE guiding principles, the Association of Fundraising Professionals' code of ethics, GDPR, and evolving state privacy laws. AI usage is reviewed twice a year by the Advancement Committee of the Board. That level of governance builds institutional confidence and reduces reputational risk.
Clark also implemented what Joe calls a “kill switch.” If a tool underperforms or raises concerns, it can be paused immediately. Transparency, documentation, and explainability are baked into the system from the start—critical elements for responsible data analytics in higher education.
How Do You Measure ROI for AI Agents?
Joe is refreshingly candid: if it’s just “cool,” it doesn’t survive. ROI must be tangible.
Clark measures success through efficiency gains (hours saved), quality improvements (depth and accuracy of research), inclusivity (expanding high-touch engagement beyond top donors), and fundraising outcomes like portfolio velocity. How quickly does a prospect move from identification to qualification? How much time is trimmed from administrative search to meaningful conversation?
The goal isn't staff reduction; it's scale. If research capacity once supported 50 top donors annually, AI could enable 2,500 personalized engagement experiences. That's transformational, and a powerful example of strategic AI in higher education driving measurable impact.
What’s the First Step for Leaders Who Want to Follow This Blueprint?
Joe’s advice is simple and profound: write a personal position statement on AI. Before downloading tools or launching pilots, define your values, boundaries, and comfort levels. Technology of this magnitude affects teams, students, donors, and institutional culture. Leaders need clarity before action.
From there, start small. Launch one pilot. Include skeptics on the team. Define scope. Build oversight. Establish metrics. And always have a kill switch. Don’t chase what peer institutions are doing—move at your institution’s pace.
This approach reflects a broader truth about data analytics in higher education: sustainable innovation is iterative, values-driven, and human-centered.
Connect With Our Host:
Mallory Willsea
https://www.linkedin.com/in/mallorywillsea/
https://twitter.com/mallorywillsea
Enrollify is produced by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.