In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing mechanisms like citations and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.

Key Takeaways:

- Citation can't bridge the gap between AI-generated ideas and their sources. Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.

- A global AI disclosure standard is actively being built. Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.

- AI use in research often falls outside methodology entirely. A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.

- Separating the disclosure from the assignment makes students more likely to do it. At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.

- Authorship will likely settle at the disciplinary level, not the universal one. Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.

About Kari Weaver

Kari D. Weaver (she/her) holds a B.A. from Indiana University, an M.L.I.S. from the University of Rhode Island, and an Ed.D. in Curriculum and Instruction from the University of South Carolina, where her dissertation examined the impact of professional development interventions on academic librarian teaching self-efficacy. She is the Program Manager, Artificial Intelligence and Machine Learning, with the Ontario Council of University Libraries, on secondment from her permanent role as the Learning, Teaching, and Instructional Design Librarian at the University of Waterloo. Additionally, Dr. Weaver is a continuing sessional faculty member in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education (OISE) at the University of Toronto.
Her wide-ranging research background includes the study of accessibility in online learning, information literacy, academic integrity, and misinformation. She is widely recognized as an expert in AI citation, attribution, and disclosure practices for her development of the Artificial Intelligence Disclosure (AID) Framework, and she is currently the co-lead of the 2026 World Conferences on Research Integrity Focus Track: Toward a Global Reporting Standard for AI Disclosure in Research.