
Transcript

Campus Technology Insider Podcast September 2021

Listen: Why AI Needs the Liberal Arts

00:13
Rhea Kelly: Hello and welcome to the Campus Technology Insider podcast! I'm Rhea Kelly, executive editor for Campus Technology, and your host.

Colby College in Maine is investing $30 million to create the Davis Institute for Artificial Intelligence, the first cross-disciplinary institute for AI at a liberal arts institution. Among its goals: to democratize AI, moving it beyond the realm of large universities and technology companies to transform teaching, learning and research in a wide variety of disciplines. Yet it's not just about how AI can inform the liberal arts, according to my guest Amanda Stent, inaugural director of the Davis Institute. It's also about how a liberal arts perspective can bring about a better understanding of whether, how, and in what ways the use of AI can benefit – or harm – our society. In this episode of the podcast, we talk about the most critical AI skills for students, the ethics behind AI algorithms and what you should ask yourself about the AI tools you allow into your home. Here's our chat.

Amanda, welcome to the podcast.

01:31
Amanda Stent: Thanks, I'm so happy to be here.

01:33
Kelly: So first, I thought maybe you could introduce yourself, share a little bit about your background.

01:39
Stent: Sure. So my name is Amanda Stent, and I am the incoming inaugural director of the Davis Institute for AI at Colby College. Before Colby, I worked at Bloomberg, Yahoo, AT&T Research and Stony Brook University. And I am very excited to be going back to academia. And I'm very excited about the potential that this institute has.

02:01
Kelly: Yeah, I understand that you have an extensive background in natural language processing, and that you were involved in developing the technology that led to the virtual assistant Siri, which I'm sure, you know, kind of puts it into context for a lot of people.

02:16
Stent: Yeah, that was a while ago. DARPA, the Defense Advanced Research Projects Agency, funded a large project called CALO, for Cognitive Assistant that Learns and Organizes. It meant different things over time. And there were about 200 academics across the country who worked on that project, and I was one of them. I was a brand new baby assistant professor then. Subsequently, the integration team at SRI International, which led the project, took that and packaged it up as Siri.

02:46
Kelly: Interesting. So yes, Colby recently created the Davis Institute for Artificial Intelligence, and it is the first cross-disciplinary institute for AI at a liberal arts college. So I just want to know, why is it important to incorporate the study of AI in the liberal arts?

03:06
Stent: Yeah, so there are two answers that I would give. The first is that AI really touches all of us these days, especially in the United States. You get up in the morning, you interact with your cell phone, you're interacting with multiple AIs; you get in your car, you turn on the smart GPS, you're interacting with an AI; you get to work, you're interacting with an AI; you open a website, you're interacting with an AI, just throughout your day. And people maybe don't understand the actions that they can take to influence those AIs, both positively and negatively, and how it affects us as humans and as a society. So we really want to help people across many disciplines understand how they can interact with AI, how it can interact with them, and how they can affect it. And the second is that the liberal arts really are about questioning and understanding. And it's about time we did some questioning of AI and the way it's been developed, not just from a computer science perspective, which is my background, where we, you know, build things, but from all sorts of different perspectives: question the whether, whether we should be building it; the how, how we should be building it; and what it's for, how it can affect us as cultures, as groups, as cities, as colleges.

04:23
Kelly: Do you see AI, or having an understanding of AI, becoming a necessary job skill? And is this kind of a way of ensuring that liberal arts students are prepared to enter the workforce?

04:39
Stent: Well, I don't think that we all have to become programmers. That's the first thing. We don't all have to be computer scientists, we don't all have to become programmers. However, what I have seen across multiple jobs that I have had is that as AI becomes more and more prevalent, having a basic understanding of what is actually going on in an AI, and how you can affect it, is a really critical job skill and can really help people get ahead. And I think the goal is to enable any Colby graduate, no matter what department they go through or what program or major they take, to be able to speak critically and intelligently to how AI is used in their discipline, in their field, and how it should be ethically used to drive society in a more human-centered way. So that's really the focus. I do think that it's a great add-on for a liberal arts education.

05:35
Kelly: I definitely want to dive into the ethical aspects in a little bit. But I'm also curious, I know that Colby has been retooling courses across the curriculum to have significant AI components. And so what are some examples of what that looks like? And are there any surprising ways that AI is pairing with certain disciplines?

05:55
Stent: Well, there's certainly some that resonate more with me, and then there are some that are quite surprising. Just this fall, there is a course where they are looking at the application of AI to survey research, which Colby does quite a lot of; they are looking at the intersection of AI and economics and finance, which is very related to my recent past; and they're looking at AI in ancient, well, medieval art. Now that may seem surprising, but AI has been effectively used to decode historical ciphers and to identify and decode unknown languages. So it makes more sense than it perhaps seems on the surface. A lot of AI today, not all of it, but the vast majority, is really just about pattern matching and pattern finding. And those sorts of techniques work really well when you're trying to look at something that is very old that you don't understand, but that was created by humans, so it is structured. For example, can we use AI to identify which plays that were supposedly written by Shakespeare were actually written by Shakespeare? That's a well-known question. So across the curriculum, I think the faculty at Colby have come up with some really phenomenal opportunities for students and for interdisciplinary research. And I think it's going to be a very interesting fall, a very interesting winter and a very interesting spring.
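As a rough illustration of how that kind of authorship question can be framed as pattern matching, here is a minimal stylometry sketch: compare the relative frequencies of common function words in a disputed text against known samples. The tiny text snippets below are placeholders; real attribution work uses full plays and much richer features and statistics.

```python
# Toy stylometry sketch: compare function-word frequency profiles.
# The snippets below are tiny placeholders; replace them with full texts.
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "it", "not", "with", "be"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p: list[float], q: list[float]) -> float:
    """Simple Manhattan distance between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

known_shakespeare = "to be or not to be that is the question whether tis nobler in the mind"
candidate_rival = "come live with me and be my love and we will all the pleasures prove"
disputed_text = "the quality of mercy is not strained it droppeth as the gentle rain"

d_shakespeare = distance(profile(disputed_text), profile(known_shakespeare))
d_rival = distance(profile(disputed_text), profile(candidate_rival))
print("Closer to the Shakespeare sample" if d_shakespeare < d_rival
      else "Closer to the rival author's sample")
```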

07:15
Kelly: You've mentioned decoding ancient languages — that sounds kind of like linguistics, but you know, taking a technology approach to it.

07:23
Stent: Yeah, absolutely. Linguistics is about the structure of language, and AI is about structure and pattern, so they can interact quite well together. As a natural language processing researcher, I'm very aware that recently, the sorts of AI techniques that we apply to understanding languages are not perhaps the techniques that a linguist would use. And I think this is actually a good example of something that holds not only in this area but across the board: AI researchers should be paying more attention to the subject matter experts in other disciplines. Are there techniques and theories from linguistics that we could make better use of for AI, or to better help AI inform those disciplines? So I think really, across the board, we should just be listening to the subject matter experts in different disciplines as to how AI as a tool can be used to help them.

08:14
Kelly: That actually reminds me of a quote that I read, a quote from you in a news article from Colby College, where you said, "Being at Colby will give me the opportunity to work with students and faculty across disciplines to not only use AI to inform those disciplines, but also ensure that all of those disciplines inform AI." And so it's the second part of that, that I'm really curious about, like what does it mean to have these liberal arts disciplines inform AI?

08:42
Stent: Yeah, this really comes back to what we were talking about in terms of understanding. I think what a liberal arts education gives you is the ability to learn throughout your career, to pick up new skills and new fields throughout your career. And what AI has been until now is largely, not entirely, but largely, computer scientists saying, I will develop a thing for your field, and you will like it. Sort of like you go into a restaurant and instead of getting a menu, they say, we're going to give you grilled cheese and you will like it. And instead, perhaps we should be working with other disciplines (and I speak as a computer scientist here), learning to understand how those disciplines think about the world, think about what they do, and how AI can really be used to help them, or what interesting creative things we can do with AI within those disciplines. As opposed to just, you know, I'm a computer scientist, I'm going to tell you how AI will serve you and you're just going to take it and like it. To me, what's fascinating about new career opportunities is that ability to learn. So that's partly why I'm so excited about this. Like, can you imagine being able to spend your day talking with psychologists and economists and artists and musicians and computer scientists and statisticians and environmental scientists, and just learning how they do their work, what they think is important, and having really deep conversations about how AI techniques can help them and what they really are looking for to drive their fields forward? I mean, it's just going to be so much fun.

10:23
Kelly: Yeah, it really sounds like such a source of, like, inspiration and maybe creativity too.

10:29
Stent: Absolutely. Yeah, creativity is a good word.

10:33
Kelly: So as faculty are sort of retooling their courses and incorporating AI tools and methodologies into their teaching, what's involved in supporting that? Like, you know, is there a learning curve for faculty, and how's the Center sort of involved in that?

10:49
Stent: Sure. And it's not just incorporating AI into teaching, but also into research, because Colby faculty do research and they do pedagogy; they develop new educational techniques. There is definitely a learning curve when you start working with AI. And it's not a one-time learning curve, because new techniques are always coming out, so you have to keep staying up to date. In fact, over the past five years, there's been this huge Kuhnian revolution in AI, this huge scientific transformation, where deep learning has led to advances that we never, ever would have foreseen happening in such a short space of time. And many of those are impacting us as cultures, from facial recognition to machine translation to medical imaging, just across the board. So yes, there is a learning curve. And the Institute is working with faculty; faculty can propose their own learning outcomes. So maybe a faculty member would like to take a course or participate in a symposium. But we're also looking at summer school programs where faculty can work with other researchers from other places, and graduate students and Colby undergrads and students from other places, just for a week or a couple of weeks or a month, and really develop their own understandings and do their own research at Colby while it's really beautiful in the Maine summer months here. So that's one of the things we're looking at as an enrichment activity for faculty and their research collaborators and their students.

12:12
Kelly: So going back to the ethical issues around AI, I mean, I'm familiar with the concept of algorithmic bias. But that's kind of the extent of my understanding. So I'd just love to hear, you know, what all of the considerations are there.

12:25
Stent: Sure. Quite a lot of AI today is really machine learning. And machine learning is just finding patterns over large datasets. So there are multiple places in that process where something can go awry. The first is that your data may not be great: not a great sample, biased. The second is that your outcomes may be biased. So algorithmic bias can be influenced by bias in the data or by bias in the outcomes. And beyond bias, you may just get bad results, just really bad results. The second part of it is that even if your data is a good, representative sample, human beings are marvelous and terrible, right? We produce amazing art and music, and we also are awful to each other. And we have our own biases. So if you have a good sample, then what the algorithm will reflect is the biases in humans' behaviors and the biases that humans have. And that's how we see some of these notorious examples, like computer vision algorithms labeling people who are African American as gorillas — just really terrible examples that reflect historical bad societal decisions. And then there are other sources of ethical issues with AI, including: should this thing be done by an AI? We shouldn't just assume that because we can do something, we must do it. You should think critically about whether and why we are doing the things we're doing. For example, if we develop an AI to help with hiring, that AI might be very useful: it might suggest jobs to me that I would never have considered, based on my skills and background. Or that AI might be terrible: it might be used to hide jobs from me that I would otherwise be very suited to, based on demographic information. So when we develop such an algorithm, we should think about how to put hedges and fences around it to protect us from the worst possible outcomes and really help guide towards the best possible outcomes. And then there are, you know, questions about how AI is affecting society, and some of those questions may be ethical questions as well. One of them is just this: because AI today is trained on large amounts of data, AIs can only really reflect what is measured. And there are things that we don't measure, that are hidden, and the AI will never be able to reflect those. I'll give a natural language processing example. We have machine translation for many languages. You can find them from big tech companies like Google and Microsoft: machine translation for 700 languages. But there are thousands more languages in the world than those 700. In what ways are these machine translation systems leading to the death of indigenous languages, because we don't collect data from those languages to train AIs, so people can't use AI to help them learn to speak them or communicate with other people? What we don't measure, we can't really develop an AI for. So what are we not measuring or counting or tracking that we should be, in order to develop good AIs? That's a question that is on my mind quite a bit.
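To make that point about bias concrete, here is a minimal sketch of how a standard classifier, trained on a faithful sample of biased historical hiring decisions, ends up reproducing the bias. The dataset, feature names and numbers are invented for illustration, and the use of scikit-learn's logistic regression is an assumption, not a description of any real system.

```python
# Toy example: a model trained on biased historical hiring decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, size=n)      # skill is distributed identically in both groups

# Historical decisions: at equal skill, group B candidates were hired less often.
hired = (skill + rng.normal(0.0, 0.5, size=n) - 0.5 * group) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Ask the trained model about equally skilled candidates from each group.
candidates = np.column_stack([np.zeros(1_000), np.repeat([0, 1], 500)])
probs = model.predict_proba(candidates)[:, 1]
print("P(recommend) for group A at average skill:", round(probs[:500].mean(), 2))
print("P(recommend) for group B at average skill:", round(probs[500:].mean(), 2))
```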

15:41
Kelly: You know, you mentioned having terrible outcomes from an AI or machine learning. And sometimes those outcomes are obvious. But are there instances where, like, it's kind of difficult to know if the results that you're getting are terrible?

15:57
Stent: Yeah. And this gets back to, you know, this question about what are you seeing, what are you tracking? Let's use the hiring system we were talking about before as an example. If I develop an AI for hiring, I'm going to train it on historical hiring data. Now, we know that historically, at least since the 1980s, there haven't been very many women in computer science. So this algorithm is going to learn, based on the last 30 years of data, that, you know, women don't really do computer science. So then if you ask this model to suggest a job, it may be 10% less likely to suggest the job to a woman than to a man. And that 10% seems very small, or even 1% seems very small. But when you add it up over millions of people, it can be very large. And this actually has happened with Facebook, which shows ads, for example, for housing and for jobs, and decides whether to show you a particular ad based on machine learning. And that machine learning is trained on historical data about who clicks on ads, who gets the jobs, who gets the houses. So it will be less likely to show houses in certain neighborhoods to people from certain demographics, and less likely to recommend certain jobs to people from certain demographics. And the likelihoods are small — but over 300 million people in this country, that adds up to a lot of detrimental outcomes. And you don't even know, because literally you do not see the ad. The way researchers have detected this is by creating fake profiles of people from different demographics and then seeing what ads are shown to those fake people. That's how they identify this very subtle and hidden thing. Now, was there bias before, when humans did it? Yes. Should we aim to do better? Absolutely.
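For illustration, here is a minimal sketch of that fake-profile auditing idea. There is no public API to a real ad platform, so the `ad_system` function below is a hypothetical stand-in with a deliberately biased rule baked in; the audit itself just generates synthetic profiles that differ only in one demographic field and compares how often each group is shown the housing ad.

```python
# Toy audit: synthetic profiles, identical except for one demographic field.
import random

random.seed(0)

def ad_system(profile: dict) -> list[str]:
    """Stand-in for a platform's ad-selection model (hypothetical, deliberately biased)."""
    ads = ["shoes", "streaming service"]
    housing_rate = 0.50 if profile["group"] == "A" else 0.40
    if random.random() < housing_rate:
        ads.append("housing")
    return ads

def exposure_rate(group: str, trials: int = 10_000) -> float:
    """Fraction of synthetic profiles in `group` that were shown the housing ad."""
    shown = 0
    for _ in range(trials):
        profile = {"age": 35, "interests": ["news", "music"], "group": group}
        if "housing" in ad_system(profile):
            shown += 1
    return shown / trials

rate_a = exposure_rate("A")
rate_b = exposure_rate("B")
print(f"Housing-ad exposure: group A {rate_a:.2%}, group B {rate_b:.2%}")
# A gap of a few points per impression looks small, but scaled up it is a lot of people
# who simply never see the listing.
print("Missed exposures per 10 million users:", int((rate_a - rate_b) * 10_000_000))
```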

17:44
Kelly: It's almost like you have to, like, reverse engineer it.

17:48
Stent: There's a lot of reverse engineering, yeah. Because what computers are really great at, what AI is really great at, is looking at huge piles of data and finding patterns. Like if I said, you know, Rhea, let's sit down, you and I, and look at all the hiring decisions that were made over the last 10 years, you'd be like, no, that's going to take another 10 years just for you and me to look at them. The computer can look at them overnight. But then, because we didn't look at it, we don't know what the computer learned. So we do have to do a little bit of reverse engineering to probe it. And there are techniques being developed to probe machine learning models, to inspect what kinds of information they're leaking, how the probabilities are working out, whether it's biased or not.
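As a rough sketch of one such probing idea, on invented data: train a second, simple classifier to predict a protected attribute from the main model's output scores; if the probe beats chance, the scores are leaking that attribute even though the model never saw it directly. Everything here, the features, the proxy variable, the scikit-learn models, is an illustrative assumption, not a description of any deployed system.

```python
# Toy probe: can the main model's scores reveal a protected attribute it never saw?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, size=n)                    # protected attribute
skill = rng.normal(0.0, 1.0, size=n)                  # independent of group
proxy = group + rng.normal(0.0, 0.5, size=n)          # a feature correlated with group (e.g. location)
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, size=n)) > 0   # biased historical decisions

# "Main" hiring model: trained on skill and the proxy, never on the group label itself.
X = np.column_stack([skill, proxy])
main_model = LogisticRegression().fit(X, hired)
scores = main_model.predict_proba(X)[:, 1]

# Probe: a second, simple model tries to recover the protected attribute from the scores alone.
probe = LogisticRegression().fit(scores.reshape(-1, 1), group)
accuracy = probe.score(scores.reshape(-1, 1), group)
print(f"Probe accuracy for the protected attribute: {accuracy:.2f} (chance is about 0.50)")
```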

18:31
Kelly: It does sound like, you know, with the connection between the liberal arts and questioning things, these are just the exact types of things that need to be questioned.

Stent: Absolutely.

Kelly: Spreading awareness, too, I mean, among the general public about questioning things.

18:46
Stent: Right. Right. So this comes up a lot. A couple of years ago, I was planning a talk, and I asked people at a community event that I was at how they liked their smart speaker. This was around 2019, so the first Alexa had come out like 18 months before. And about half the people (this was mostly an elderly group, more than 50 years old) were like, "Oh, I love my smart speaker. I use it all the time. I use it for cooking, I use it to find information." And about half the people were like, "Never, I'm not having one of those things in my house." And then there was one person who said, "I love my smart speaker, it plays whatever I want." And I said, "What do you think I'm asking about?" And it turned out that he meant his Bluetooth speaker. So he needed a certain kind of education. Then for the people who use it all the time, I said, "Do you know what kind of information it collects?" So that's a different kind of education: are you thinking critically about this thing that's in your house, listening to you all day and all night long? And then with the people who were like, "I'd never have it in my house," you have a different kind of conversation: "Why wouldn't you have it in your house? What are some of the benefits that smart speakers could bring you?" Or, are you making that decision just because you don't like electronics, or for privacy reasons? Are you aware of the privacy constraints we can put on these devices? Those kinds of things. So different types of people, different kinds of education. But that smart speaker was in 17 million homes in the US within six months of being released; that is a big societal impact that we should think critically about.

20:23
Kelly: Well, now I'm dying to know, and if you're not comfortable answering, it's okay, but do you have a smart speaker in your house?

20:29
Stent: Absolutely not. And the reason is not privacy; I personally am less concerned about the privacy, because I have a smartphone, so basically I have the same exposure. But I'm not convinced that I am enough of a computer security expert to set it up so that, while I'm traveling, some hacker can't log in and get it to turn off the heat in my house. And then I come back and it's Maine winter, right, and my house is frozen solid in a block. So I'm not convinced that I'm good enough at computer security to set it up so that it's constrained. That's my concern.

21:06
Kelly: Makes sense. So what are some of the long term goals of the Davis Institute?

21:13
Stent: The first one, really a big one, is to enable every Colby graduate, as I said, to have their discipline plus AI, that is, to be able to speak critically to how AI can and should be used within their discipline. A second is to enable Colby faculty to collaborate with faculty and researchers at other places to develop interesting and creative AI-adjacent things: AI plus their discipline. The third, and this is a 10-year, 15-year goal now, is that when people hear AI, I would love for them to think about the innovative projects and courses and techniques that Colby faculty, students and collaborators have developed, and that by that time are being used at colleges and universities across the country. So when people hear Colby, they should think: Oh, Colby, the place that developed the thing that allows every high schooler to learn about AI. Oh, Colby, the place that spun off three interesting AI startups. Oh, Colby, the place that really works across the Northeast to bring together researchers who are interested in AI. Oh, Colby, the place that helps bring a better understanding of AI, and more applications of AI, to city, state and local government in Maine. So real societal impact as well: undergrads, faculty and their collaborators, and then the whole of Maine and the Northeast.

22:41
Kelly: Wow. And then looking forward, you know, what are the most exciting sort of trends and developments in AI that are going on, and that you think are most important to watch?

22:53
Stent: I think this discussion of ethics is a big one. Five years ago, if you asked the average AI researcher, sadly, that researcher probably would have said that they weren't even qualified to speak to ethics and AI. So a big one is making sure that researchers understand that we have a responsibility, that people in AI understand that we have a responsibility to think about what we're doing and why we're doing it. Then there's deep learning, which is the thing that really transformed AI over the past five years. With any scientific revolution, there's a period when it's like, oh, amazing, and then there's a period when people exploit it. So we're sort of in the middle, maybe the end, of the exploit phase. Then cracks start to emerge in that theory or that technique, people start to identify the weaknesses, the problems, and then there's a period of exploration where people try new techniques and tools, and then there's another revolution. So I think we've had the big revolution: deep learning, fascinating, huge societal impacts, AI finally living up to the promise. And now we're at a period where the cracks are starting to appear, and people are starting to explore. So over the next several years, I think we'll see other techniques and theories applied, and then maybe another revolution, and who knows what that one will look like. Maybe it will be the actual application of representations of the world to AI. So right now it's a lot of pattern matching. But if I pick up this phone, an AI will say, well, that's a phone, because I saw a lot of other phones. It can't say, oh, that's a phone, because I know what it feels like to hold a phone. It's not really embodied or grounded in the world. Maybe that's the next revolution. Whatever it is, I think the most creative things over the next five years in AI will really come from subject matter experts working with AI people, hand in hand, not as a consultant one way or the other, but hand in hand. Those will be the really transformative things. Whether it's medicine, or finance, or history.

25:03
Kelly: Well, thank you so much for coming on. It was great. It's so fascinating talking to you.

25:07
Stent: Great talking with you. Do you have a smart speaker in your home?

25:11
Kelly: I do. But I guess I should check those security settings.

25:15
Stent: That's not even an AI question. Some AI questions for you to think about with that smart speaker in your home are: every time you interact with it, it's learning a little bit about you. But it's not learning a real, round picture of you; it's a sort of flat picture, shaped by your understanding of the capabilities it has and the actual capabilities it has. What is the understanding that you want that agent to have about you? And what is the understanding that you want the company behind that agent, whether it's Google or Amazon, or whoever, to have about you? And how do you see that understanding reflected in the ads that you get and the other services that are offered to you? I mean, at least on the plus side, there are now more women's voices in these systems. When I was a grad student, the speech recognition systems were trained mostly on men, so they didn't recognize me at all. But now they recognize women better, so that's a good thing. Yeah.

26:10
Kelly: Yeah, and children, too. I have a nine year old who of course loves, you know, asking Alexa for stuff.

Stent: Yeah.

Kelly: All right. Thank you so much.

Stent: Thank you. Have a great afternoon.

Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on Apple Podcasts, Google Podcasts, Amazon Music, Spotify and Stitcher, or visit us online at campustechnology.com/podcast. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
