
C-Level View | Feature

Thinking with Colleagues: AI in Education

A Q&A with Ellen Wagner


One of the most exciting things about the new generative AI tools is that we mere mortals now have the opportunity to experience some of what our best scientists have been doing for decades. —Ellen Wagner

We've all heard how AI is ushering in massive changes at our education institutions, in our teaching and learning practices, and with the student experience. It seems it's going to affect everyone. How will you make sense of it all, so you can move ahead with confidence?

Ellen Wagner, a partner at North Coast EduVisory and in her career a true veteran — she says survivor — of revolutionary technology change, reminds us that sharing thoughts with colleagues is one of the most powerful sense-making tools an educator has. Wagner herself recently relied on the power of collegial conversations to probe the question: What's on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

Mary Grush: Recently you and a colleague convened a group of "educators who get things done" to think together about AI in education. Could you tell us a little about that meeting?

Ellen Wagner: This past June, I had the pleasure of working with Dr. Whitney Kilgore, the co-founder and chief academic officer of iDesign. We planned, co-hosted, and wrote a summary report on a roundtable discussion — a "video summit," if you will — on the topic of AI in higher education. We had been talking about the wave of generative AI sweeping higher ed and wondering how our higher education colleagues were faring with ChatGPT and the like. We suddenly realized we could just ask them. And so we did.

We gathered a group of 18 individuals from a cross-section of U.S. universities and a couple of professional associations. These people included professional staff, research faculty, and university administrators.

It was an opportunity to leave title and position aside and engage as seasoned professional colleagues, puzzling through the same things. We were particularly interested in talking to people expected to pick up the "mantle of innovation" at their institutions, advising executive leadership as well as guiding their direct reports. We were also especially interested in hearing from people who, as we knew from their reputations and by direct experience, like to get things done.

It was an opportunity to leave title and position aside and engage as seasoned professional colleagues, puzzling through the same things.

AI is going to affect us all, whether we are dreaming of apps we want to create or trying to figure out how to write a new prompt or use a new recommendation engine. The important thing was not to give in to the tendency to instruct others on what must be done, but rather to remain open to the potential of shared inquiry.

Grush: It sounds like you needed an organized but somewhat unstructured format for your meeting. How did you "unstructure" the meeting to engage everyone in this purpose?

Wagner: We developed a loose agenda that would take us through an orientation, a general brain dump of "important questions", and three targeted brainstorming sessions. When we started the meeting, we created a Google Drive document, shared the link with everyone, and asked all participants to take the meeting notes along with us.

Even though we met via Zoom, we didn't record the brainstorming sessions, out of concern that recording might "harden" those conversations. It was important that our participants felt safe talking about the things on their minds, without worrying that anything would come back to bite them.

Grush: And can we find a report?

Wagner: With help from our colleague Karen Vignare, the vice president of Digital Transformation for Student Success and executive director of the Personalized Learning Consortium at APLU, we developed a summary of the meeting so we would have a baseline of our thoughts going forward. You can find the summary, along with the names of participants, posted at North Coast EduVisory.

[What Is Top of Mind for Higher Education Leaders about AI? - North Coast EduVisory]

Grush: The idea of thinking with colleagues in less navigated waters seems to be key. I know that academics meet and share all the time, in many venues, but your format is distinctive. Could you comment on some of the main takeaways for you, not so much on the thoughts about AI, but on the process of thinking together? What can educators do, both to learn from and to support this process, especially as they explore a "big change" area like AI?

Wagner: For me, there were several takeaways. I'll list my top four here.

First, we all need to see ourselves as participants in the upcoming changes. There's a lot of noise about AI and change out there. One of the best ways I found to cut through the noise is to quit thinking about every possible thing likely to be different. Instead, I'm thinking more about how the AI in my own work environment is likely going to affect me. It's good to remind ourselves that AIs are going to take personalization to a whole new level. There is no reason to follow the crowd just because the crowd is going down a particular road. Sometimes the "road not taken" by others may be exactly right for you.

Second, keep up the narrative around dealing with change, for the benefit of all. Mostly, people aren't so much freaking out about AI as they are freaking out about changes they can't anticipate. Sharing our narratives will help others appreciate that nobody has a manual describing what we are supposed to do next (thank goodness for that, by the way). Helping the members of your community see themselves in the future and telling the stories of what that looks like is a great way to stimulate new thinking and motivate forward momentum.

My third, and perhaps favorite, takeaway is to model the small steps that lead to sustained innovation and, eventually, to the big changes that stick. Showing people approachable applications of AI will, over time, go a long way toward beneficial and ultimately transformational change.

And my fourth, from a long list of takeaways, is to surround yourself with people who are curious. Curious people will never let you get away with lazy or inattentive thinking. It's also important to be curious yourself. It is easy to get into a thinking rut. It's also easy to take one's own opinions a bit too seriously. Remember, we are all still exploring AI. If someone seems to know all the answers at this point, I just can't trust them.

We are all still exploring AI. If someone seems to know all the answers at this point, I just can't trust them.

Grush: Those are great and thoughtful suggestions. And exploring the link to the summit summary, I see there are so many aspects of AI to think about with your colleagues. What were just a few of your own top picks?

Wagner: Let me give you four.

First, I was fascinated by the "Moonshot or Road Trip" conundrum we encountered. It's kind of related to the third takeaway I just mentioned on process. You know the challenge: When a new innovation comes out, everyone is convinced it is going to rock our world. Everyone is drawn to the energy of Moonshots. But when we discover all the things that don't work as we thought, or we realize just how much will change once we introduce the innovation more widely, then we need a different approach. Taking more of a "Road Trip" approach to AI adoption might give people a better opportunity to engage in the transformation. It is more affordable to immerse yourself in the immediate and to take the series of small steps that will move the team to its goal, and maybe further.

Second, I think there is one particularly beneficial result we'll see as we have more and more AI in our lives. AI is going to give more people opportunities to conduct very complex transactions by guiding them through the processes, procedures, and barriers that may have limited their participation in the past. In other words, making more "smart tools" available to more people is going to help level playing fields.

Third, AI is likely going to require new rules of engagement between and among institutions, colleges, departments, and students in the environments where it is being used. Here's a perfect example of a generative AI problem that WCET found when they surveyed their members this past summer: Many of their member institutions did not have consistent campus policies on whether using ChatGPT and similar tools counts as cheating. Even in places where institutions did encourage ChatGPT use, some departments would overrule the central administration. Institutional inconsistencies like that have the potential to create some real challenges for students to navigate.

And my fourth pick from a long list of things many people in higher education cite as their top-of-mind AI considerations: We are going to need to revisit many of the ways that student assessment is being conducted. I have been really excited by many of the ideas I am seeing from people like Ethan Mollick and Ryan Baker from UPenn, and David Wiley from Lumen Learning, who are showing us ways to learn more, explore more, and try some of these innovations for ourselves. We can't keep holding on to traditional assessments. I have to say, I can't stop wondering what is going to happen to the term paper!

Grush: Will the thinking about generative AI tools ever be defined by formal research results? Is it science?

Wagner: This is all driven by science! And a lot of scientific work over the long term. People have been working on natural language processing and the language models behind today's tools for more than 30 years. Similarly, consider the long course of development of GPS systems. And recommendation engines. And robotics. And much more.

This is all driven by science! And a lot of scientific work over the long term.

A "fun" example in the robotics realm is the YouTube videos everyone loves of robots dancing in unison. You should take a look at some of the early dancing robot videos — the robots would take a single step and fall over. But the scientists and engineers kept at it, and kept at it, until you can now see something like this demonstration video.

I think that one of the most exciting things about the new generative AI tools is that we mere mortals now have the opportunity to experience some of what our best scientists have been doing for decades.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory "sandboxes" to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It's going to be hard to keep up, to filter out the noise on our own. That's one reason why thinking with colleagues is so very important.
