About the Episode
Mastering AI tools and techniques takes time. In this episode, we share how community colleges can become the region’s trusted AI on-ramp by offering accessible, mission-aligned AI literacy programming for students, employees, and community members. We’ll break down how to approach AI as a critical mindset rather than a technical skill set.
Learn about the values behind our AI model and the practical decisions that make it work. We’ll address the tensions and uncomfortable realities driving this work, including overreliance on AI, academic integrity, and workforce disruption. Finally, we’ll examine how AI literacy can support college goals and leave you with a guide to building an on-ramp for your community.
Find out more about what Finger Lakes Community College is doing with AI at the FLX AI Hub.
Sources:
HBR - AI-Generated “Workslop” Is Destroying Productivity
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
MIT - The GenAI Divide: State of AI in Business 2025
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
Key Takeaways
- AI literacy is now a workforce imperative. Employers increasingly expect graduates to understand and responsibly use AI tools.
- Community colleges are uniquely equipped to lead AI access initiatives because of their open-access, mission-driven models.
- Human-led AI matters. FLCC emphasizes critical thinking, discernment, and ethical decision-making over blind automation.
- “You can’t AI wrong.” Creating a low-stakes environment for experimentation encourages broader adoption and innovation.
- Leadership should create guardrails — not roadblocks. Institutions need flexible policies that support innovation without stifling curiosity.
- AI adoption succeeds when institutions support both top-down leadership and bottom-up experimentation.
- Prompt engineering and AI judgment are teachable skills that become transferable across disciplines and industries.
- Curiosity is a critical leadership skill in the AI era.
- AI in higher education isn’t just about technology — it’s about culture change.
- Mission-driven AI work strengthens student success strategies and workforce development efforts.
AI Literacy in Higher Education Is No Longer Optional
When generative AI exploded into public consciousness, many colleges found themselves reacting in real time. Some rushed to ban tools like ChatGPT. Others leaned fully into experimentation. At FLCC, leaders chose a different path: create an intentional, accessible on-ramp that empowers people to explore AI responsibly.
According to Debora Ortloff, the urgency surrounding AI came from every direction at once — students, faculty, employers, and community partners alike. Industry leaders were asking critical questions about workforce preparedness, while educators were simultaneously excited and apprehensive about what AI could mean for teaching and learning. That tension became the catalyst for FLCC’s AI initiative.
Dave Ghidiu described AI literacy as fundamentally different from previous technology shifts because AI is “omnidirectional.” Unlike earlier digital tools that served singular purposes, AI can impact virtually every discipline and profession. That reality changes the stakes for institutions thinking about student success strategies and workforce development.
What makes FLCC’s approach particularly compelling is that the institution refused to frame AI literacy as merely technical training. Instead, the college focused on helping learners build discernment, confidence, and critical thinking skills — capabilities that remain essential no matter how quickly technology evolves.
The “AI On-Ramp” Model: Making AI in Higher Education Accessible
One of the most powerful concepts discussed in the episode is FLCC’s “AI on-ramp” framework. At its core, the model is rooted in a deeply community-college philosophy: it’s never too late to start learning.
Ortloff emphasized that community colleges are uniquely positioned to lead AI access initiatives because openness and accessibility are already foundational to their mission. The AI on-ramp is designed to meet people wherever they are — whether they’re enthusiastic early adopters or deeply skeptical beginners.
Rather than overwhelming faculty and staff with technical jargon, FLCC intentionally built a culture of curiosity and experimentation. Ghidiu repeatedly reinforced a key message that has become central to their approach: “You can’t AI wrong.” That simple mindset shift lowers barriers to entry and encourages people to engage without fear of failure.
This strategy matters because one of the biggest obstacles to AI adoption in higher education isn’t technical skill — it’s psychological resistance. Many professionals assume AI is too complicated or too advanced for them. FLCC’s approach dismantles that assumption by creating approachable learning experiences that prioritize exploration over perfection.
Why Human-Led AI Matters More Than Ever
One of the most refreshing aspects of this conversation is the consistent emphasis on human leadership in AI deployment. While many conversations about AI in higher education focus heavily on tools and automation, FLCC’s framework centers the human experience first.
Ortloff described AI as part of an “active inquiry” process. Humans bring the questions, the context, the expertise, and the judgment. AI simply becomes a collaborator in the process — not the decision-maker. This philosophy reinforces one of FLCC’s guiding principles: co-create, don’t abdicate.
Ghidiu shared several memorable guardrails that shape their AI literacy training, including:
- “Co-create, don’t abdicate.”
- “No PII in the sky.”
- “The bias hides inside — keep eyes wide.”
- “If you don’t review, the error is on you.”
These principles reinforce a critical truth about AI adoption: institutions cannot outsource responsibility to technology. Ethical decision-making, discernment, and accountability still belong to humans. That’s particularly important in higher education environments where trust, equity, and student success remain central priorities.
For institutions exploring AI in higher education, this human-led framework provides a valuable counterbalance to the hype cycle surrounding generative AI tools.
Building a Culture of AI Experimentation on Campus
Creating sustainable AI adoption requires more than workshops and webinars — it requires culture change. FLCC’s leaders openly acknowledged that institutional bureaucracy can easily become one of the biggest barriers to innovation.
Rather than over-regulating AI from the beginning, FLCC chose to support experimentation through what they call “Mavericks and Mavens.” Mavericks are the early adopters willing to test emerging tools, while Mavens become the internal experts who help scale successful practices across campus.
This grassroots approach has already produced meaningful outcomes. Faculty and staff members who initially attended introductory sessions are now leading workshops and teaching others how to use AI tools like NotebookLM. Those peer-driven learning communities are helping normalize experimentation and reduce fear around AI adoption.
Importantly, FLCC’s leaders also recognize that AI implementation comes with legitimate tensions. Conversations around environmental impact, bias, ethics, and academic integrity remain ongoing. Instead of pretending those concerns don’t exist, the institution actively creates space to engage with them thoughtfully.
That balance between innovation and reflection may be one of the most important lessons for higher education leaders navigating AI transformation today.
What Higher Education Leaders Can Learn From FLCC’s AI Strategy
Perhaps the most practical advice from the episode came during the final discussion about leadership. Ortloff challenged higher education leaders to rethink their role in AI innovation.
Her advice was surprisingly simple: get out of the way.
That doesn’t mean abandoning governance or ignoring risk management. Instead, it means creating enough flexibility for experimentation to happen responsibly. Institutions that become overly focused on restrictive policies risk stifling innovation before meaningful learning can occur.
FLCC’s approach demonstrates that successful AI adoption depends on balancing guardrails with trust. Leaders must create environments where faculty and staff feel empowered to test ideas, share discoveries, and learn from failures together.
The institutions that thrive in the AI era likely won’t be the ones with the most polished policies. They’ll be the ones willing to remain curious, adaptive, and deeply human-centered in how they approach change.
AI in higher education is moving fast. But as this episode reminds us, thoughtful leadership, open access, and a commitment to experimentation can help institutions move forward without losing sight of their mission.
Enrollify is produced by Element451 — the next-generation AI student engagement platform helping institutions create meaningful and personalized interactions with students. Learn more at element451.com.