In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.

Key Takeaways:

- Curriculum doesn't absorb AI -- it has to be rebuilt for it. Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI to guide a student the way an experienced instructor would.

- AI is the first classroom technology to split faculty, students, and administration into opposing camps. Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.

- Healthcare employers aren't waiting for higher ed to figure this out. Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. Her students are asking for these skills and feeling the urgency themselves.

- A student tester changed the entire design process. Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- in which the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.

- The real danger isn't AI itself -- it's losing the habit of questioning it. Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.

About Anna Zendell

Anna Zendell is the program director of the MS in Healthcare Administration program. For over a decade, she has directed degree programs in healthcare administration, health sciences, and public administration, and she teaches regularly at the graduate and undergraduate levels. A major emphasis of her work is ensuring equitable, accessible higher education for students of all abilities by leveraging the power of online learning and the unique attributes that adult learners bring to the classroom.

Prior to her academic administration and teaching work, Anna oversaw operations and evaluation for grant-funded research projects on issues such as walkable communities, community health education, and dementia interventions. She developed enduring interdisciplinary partnerships with organizations, local governments, and community members, and provided professional development and continuing education for healthcare professionals. Key focus areas in Anna's work include fostering meaningful inclusion in workplaces and communities and addressing health disparities, particularly around chronic illness and health promotion.

Anna earned her doctorate and master's degrees in social work at the University at Albany, with a focus on management and community systems.