
Q&A

7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L's institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.

Campus Technology: As generative AI tools make their way into the classroom, should colleges and universities be setting AI policies at the institutional level, or should it be left up to individual instructors to set their own rules?

Cristi Ford: This is an excellent question. The answer is twofold: First, we need to give our teachers the room and opportunity to experiment with AI in the classroom. We want to create learning spaces where faculty are willing to find new ways to use this technology that benefit both them and learners. Generative AI can be a helpful tool when used the right way. We should be thoughtful in how we mitigate the risks associated with this technology, but not stifle the opportunity to use AI as a tool. From there, we need to work on these policies at the institutional level to make sure there is consistency for all learners.

CT: What are some baseline guidelines that every AI policy should include?

Ford: The adoption curve for generative AI is varied. While some faculty are familiar with it and already using it in their classrooms, some administrations are just beginning to realize its capabilities. At the same time, we are seeing so many facets of this technology evolve each day. As AI's capabilities and education's use cases evolve, so will guidelines. However, data privacy, equity, and access issues, along with the importance of creating transparency around AI, should all be top considerations for decision-makers.

CT: Even if an institution has some broad AI guidelines in place, are there extra policies that instructors should consider that fall outside of those standard rules?

Ford: As I talk to faculty, I hear a lot of discussions around the ethical considerations of AI or considerations around bias and academic integrity issues that come with this new surge of technology. Some of these topics may be covered in standard guidelines, but the approaches an institution could take here are vast. While some of these considerations are new, in other ways they are some of the same fundamental issues education has continually discussed. Faculty should consider these topics and more in the context of the discipline they teach to tailor AI use to their learners and their needs.

CT: Should educators be formulating AI guidelines with input from other areas of the institution, like IT?

Ford: While AI has been around for decades, the current use of large language models (LLMs) and AI-enhanced tools is comparable to the Wild West. Including other areas of an institution, like IT, will be helpful as institutions work through critical conversations, such as how they might use generative AI and which guidelines to put in place. Creating a task force that encompasses several disciplines and areas of an institution will help ensure that key factors like data privacy, bias, equity, accessibility, and more are incorporated into guidelines and considerations.

CT: How can educators make sure AI guidelines are consistent across courses?

Ford: The focus here is less about an individual educator and more about creating a community culture that is looking at this phenomenon and carving a path forward. In recent months I have been doing AI-focused workshops for faculty on campuses, and I encourage those educators to let the workshop be the start of a larger campus conversation. AI discussions shouldn't begin and end with a base set of guidelines; campuses need to create task forces to discuss the best ways to approach AI use for their specific needs and individual culture.

CT: What's the best way to communicate AI guidelines to students — and to enforce them?

Ford: Faculty are on the front lines of access to our students on campuses today. However, this is a campuswide, institutional responsibility. Creating a culture of open dialogue and communication around AI generally, including the guidelines an institution creates, will be critical to the success of today's learners. From there, other vehicles like infographics or dedicated space within the LMS can be used to communicate these guidelines to students. But the bottom line is that we need to teach students how to use these technologies responsibly.

CT: How often should AI guidelines be revisited or revised to keep up with changes in the technology?

Ford: Right now, the technology is changing so rapidly that guidelines created six months ago are likely already outdated. ChatGPT's release brought a flurry of other AI-enhanced technologies to market; there are now more than 100 out there. Institutions should look to continually evolve these guidelines as we continue to build our understanding of the technology.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
