

Campus Technology Insider Podcast July 2023

Listen: Educating the Next Generation in AI

Rhea Kelly: Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

What skills will students need for the workforce of the future in an age dominated by artificial intelligence? In addition to basic computer science, data competencies, and the mathematics and statistics behind AI and machine learning, there are a range of social impacts to consider: AI risk, ethics, privacy, questions of bias, etc. All of the above are part of the curriculum at Dakota State University, a STEM-oriented institution with a focus on computer science, cybersecurity, and artificial intelligence. For this episode of the podcast, I spoke with DSU President José-Marie Griffiths about how her institution is preparing students for careers in AI. In addition to her experience in research, teaching, and higher education administration, Griffiths was a member of the National Security Commission on Artificial Intelligence, part of the 2019 John S. McCain National Defense Authorization Act. She has also served in presidential appointments to the National Science Board, the U.S. President's Information Technology Advisory Committee, and the U.S. National Commission on Libraries and Information Science. Here's our chat.

Kelly: One of the things that often comes up in conversations about AI is how students will need a new set of skills to succeed in the world of work. So I wanted to ask how you see workforce skills changing in the face of these new technology developments.

José-Marie Griffiths: Well, clearly they are in the process of changing, but they've also been in the process of changing for quite some time. Just to give you a little bit of background on Dakota State, we're really a special-focus STEM institution, where our special focus is computer science, cybersecurity, and artificial intelligence. Now we teach in other areas, too, don't get me wrong, but this is a special focus and we put a lot of our efforts into that. And we have about 3,300 students, from associate degrees through to doctoral degrees. So why AI, right? That's the question. Well, AI has actually been around for decades. I know because it was around before I was. And there have been recent developments in AI, and in the facilitating technologies for AI, that have really brought AI to the forefront, particularly in the minds of the general public. And if we look at the headlines in just about every media outlet recently, we know everybody's talking about AI, which is probably good. But at DSU, we hired our first professor in artificial intelligence in 2016. And we developed two undergraduate degrees in artificial intelligence. So we have a Bachelor of Science in artificial intelligence, which is aligned with our computer science and cybersecurity programs. And we have a Bachelor of Business Administration in artificial intelligence, which focuses on the application of artificial intelligence in different kinds of organizations, whether it be healthcare, or IT organizations helping to support finance, and so on.
So, because our students are so focused on technology, and we want to graduate students who are, as we call it, cyber savvy, whatever their discipline, it's important for students to really know and understand how artificial intelligence works, what it is, what it isn't, because people think it's a lot more than it actually is right now, and what it might become. So we're preparing students to learn how algorithms work; how they can reflect bias, intended or unintended, the bias of the creators; how AI algorithms are trained, so some sense that you have to have data to train them, and that bias can be brought into the system; and how ongoing data are used, so the artificial intelligence is learning in an ongoing way. I think there's an interesting thing people need to think about: the role of the artificial intelligence and the role of the human in the loop, as it were, and how they should, could, or don't work together. Then there are the notions of risk. And for us in particular, because of the institution we are, I think there's a very symbiotic relationship between artificial intelligence and cybersecurity. In cybersecurity, we need artificial intelligence to deal with all the signals we're trying to manage, to try and detect anomalies in people's access to the systems and services and networks that we're running. At the same time, AI needs cybersecurity to protect the data flows, to protect the models and the algorithms. So we see them going together, which is why we got involved very early, in 2016, to say we can't keep doing cybersecurity unless we do AI, and we can't keep doing AI unless we keep doing cybersecurity. So we see those two working in both directions. Now, in terms of our students, I classify them into three types.
There are the students who will be AI developers, and those students are predominantly in our Beacom College of Computing and Cyber Sciences, which is where the technical artificial intelligence degree sits. There are those who are going to apply AI in their jobs, and that could be people in our Business and Information Systems programs, but in our College of Education they're looking at how to use artificial intelligence in the K-12 environment, and also in our Arts and Sciences programs. And then we have those who need a general understanding of the capabilities, limitations, and risks associated with AI, sort of the kind of thing you'd want your educated population to know about. So we have all three kinds of students on campus; I break the audience out that way, and we give different things to different students.

Kelly: So it sounds like you've defined specific AI competencies for those different categories of students. Could you talk about those in more detail?

Griffiths: Yes, there are, and of course there are a lot of them, because people always think AI is one thing. Of course, it's multiple technologies, multiple applications of technology, and so on. It's not one thing. But I like categorizing, so I've got four types of competencies; that's how my mind thinks. One is basic computer science concepts that people need to understand: data structures; programming, including programming for AI, since there are certain kinds of programming languages for AI, like Prolog or Lisp; understanding computer systems; understanding the design of algorithms; and understanding the performance and optimization of algorithms. So that's the basic computer science piece. Then we have a set of competencies around data. And what do we mean by data? We could mean language, or we could mean vision, if you're trying to identify objects, or people, or faces, or whatever. In data, there's the notion of perception: what are we perceiving, what do we think we see? There are whole ideas about uncertainty, numbers, symbols, and so on. So data is a whole bunch of things too, depending on what you're trying to do. Then we have the underlying mathematics and statistics, typically for the developers of AI and, to some extent, some of those applying AI and machine learning. So we teach discrete math, logic, theorem proving, probability and statistics, which are very, very important, optimization again, and encryption. So that's the mathematical side. And then I have an area I would call social impacts: the ethics associated with AI, legal and regulatory regimes, privacy, and how the technology is going to be implemented in and affect society, in ways that hopefully are intended but aren't always. So I think there's an interesting thing with AI that goes a little bit beyond some of the basic technologies.
We have some of these issues of social impacts in cybersecurity, but they're perhaps more extreme in AI. In cybersecurity, we're concerned about security, safety, and, to some extent, privacy. But with AI, and the way people seem to be using or wanting to use it, there's an interesting phenomenon: humans tend to want to vest the AI technology with more capability, more knowledge, more intelligence than it really has, and I would add, dot dot dot, today. Today, we don't have that level of intelligence. I mean, ChatGPT is really just a probabilistic model, guessing the probability of the next word. But it creates nice prose, and because it's nice prose, and not the kind of prose that was generated in the late '60s and early '70s, people suddenly said, Oh, yes, it sounds like me, or it says it better than I could, or whatever. And so they vest that level of intelligence in it. I know you haven't asked me that, but that's where some of the difficulty lies in moving ahead. Certainly some of the societal implications, one way or the other, come out of that vesting: Oh, it must be more than I think it is, because I don't understand how the widget works.
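Griffiths' description of ChatGPT as "just a probabilistic model, guessing the probability of the next word" can be illustrated with a toy sketch. This is not how a real large language model works internally (those use neural networks over long contexts, not word-pair counts), but the sampling loop is the same idea. The tiny corpus and function names here are invented for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy training corpus: count which word follows which (a bigram model).
corpus = "the cat sat on the mat the cat ate the fish".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_distribution(word):
    """Turn follow-counts into a probability distribution over the next word."""
    c = counts[word]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

def generate(start, length, seed=0):
    """Repeatedly sample a next word from the distribution: that is all
    'generation' means in this model."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        dist = next_word_distribution(out[-1])
        if not dist:  # word never seen in training: nothing to predict
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs)[0])
    return " ".join(out)

print(next_word_distribution("the"))  # → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(generate("the", 4))
```

Nothing in the loop checks whether the output is true; the model only knows which words tend to follow which, which is exactly the limitation discussed here.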

Kelly: How do you combat those kinds of misconceptions that students might have, when they're vesting too much into what the technology can do?

Griffiths: Well, with our students, we have to break it down into something that's understandable. So I like to go back to some simple things. What is the most publicly accessible, low-level intelligence that's available? Search. And we all use search, right? We all use Google or Bing or whatever. And we assume that we're getting a pretty accurate response, mainly because we're getting so much back. And when we ask a question of a system that sounds so knowledgeable, and let's face it, these chatbots typically are quite assertive in the way they present their answers, right? This is the answer, and then they give you all sorts of context around it, which you assume is accurate. And so people then vest their trust in them. And the danger is, of course, that the base corpus of knowledge it was trained on is any piece of language, let's face it. I always laugh that these things are very American oriented; maybe if I go to the UK, those bots will have been trained on British expressions. But if you're training a large language model to get better at language, it really doesn't matter what you give it, as long as it's good, coherent language. It doesn't matter whether it's actually accurate or not. It doesn't have to be true. So in scraping various datasets now, what we have are things that we know are untrue. We know that on the internet there's fact and there's opinion, and there's deliberate misinformation, and there are malign campaigns from other countries, and so on. If you scoop all that up, as I say, it's fine for learning language and better ways of expressing itself, but if you're using it to answer questions, you have to be very careful about what corpus of knowledge was used to train it. So now you have a double risk, right? You have the risk that we're vesting more intelligence in it than it has.
And you have the risk that the basic input is not necessarily accurate, timely, and so on. That's the second risk with these things. So I would hope our students would learn and understand that a little bit, or be a little bit skeptical, and start really looking at what these systems are doing. And I'm almost glad that these systems have started to hallucinate, because it gives you a certain level of skepticism. And now everyone says, Well, should I use it or not? Most AI right now is narrow AI; the language models and search are the only two very general things. If you've got a narrow AI, it can be very, very good at detecting patterns, including patterns that the human can't see. The human is still very good at detecting things that the AI can't see. So actually, that relationship of one working with the other becomes important. And that's been the case in a lot of things. Take medical diagnoses: would you actually allow an AI to diagnose somebody without you, the physician, ever going back and looking at the scans directly? I don't think so. I think you're going to want to check the scans and say, This is what it says, let me see, does that match my understanding? I also think that's a good way to use these chatbots. A while back, I was preparing a short talk, and I came out with what I wanted to say. I said I'd be a little controversial: here were my seven controversial issues. Then I went to ChatGPT and asked, what are seven controversial issues in AI? And its seven issues were all incorporated within mine, and I had a couple of extra ones that it didn't have. I was using it just to check: have I left something out? If, however, I had gone the other way, and asked the chatbot first what the issues are, that would have framed my thinking.
It would have boxed me in, in some respects, if I were to say, oh, gosh, that's great, I don't need to do anything else. So when I think about dangers, I think about loss of our ability to know exactly what we're dealing with; not being free and open with our minds; perceiving what we want to perceive in the artificial intelligence; assuming that it's accurate when it's not necessarily accurate. I think those are the risks we really have right now. And for some people, it might be easy to abdicate responsibility to a technology. I think that's actually going to cause considerable concern.
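The human-in-the-loop relationship Griffiths describes, narrow AI flagging patterns (anomalous access in cybersecurity, suspicious scans in medicine) while a human reviews each flag, can be sketched minimally. The login-interval data, the 2-standard-deviation threshold, and the function name below are all invented for illustration:

```python
import statistics

# Invented example: seconds between successive logins for one account.
# One 3-second burst hides among otherwise regular ~60-second intervals.
intervals = [62, 58, 65, 60, 59, 61, 3, 63, 57, 60]

def flag_anomalies(data, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean.

    The AI's job ends at flagging; a human analyst reviews each flagged
    (index, value) pair, which is the division of labor described above.
    """
    mean = statistics.mean(data)
    stdev = statistics.stdev(data)
    if stdev == 0:  # all values identical: nothing can stand out
        return []
    return [(i, x) for i, x in enumerate(data)
            if abs(x - mean) / stdev > threshold]

print(flag_anomalies(intervals))  # → [(6, 3)]
```

The statistical rule spots the outlier a scan of the raw numbers might miss; deciding whether that burst is an attack or a glitch is what stays with the human.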

Kelly: Generative AI just sort of feels like creativity happening automatically. And yet, using it can actually inhibit your own creativity by limiting your thinking. That's kind of an interesting paradox.

Griffiths: Yeah, because we are who we are, we sort of leaned in. When we saw some institutions banning the use of ChatGPT over concerns about plagiarism, we were leaning in to say, Okay, this is here, we have to deal with it, we're a technology institution. How do we use it? How do we incorporate it more effectively into the curriculum, so that our students are more aware, as I said, of the pros and cons, so they can go out into the world and help advocate for the good applications and warn people about the negative implications? That's sort of who we are. If you were talking to other institutions, they might have a different approach. But we are all about technology, and we can't afford to just dismiss it, because, quite frankly, if we dismiss it, it goes underground. And if it's underground, it's in our cybersecurity space, as it were. It'll be down there on the black market of everything.

Kelly: Could you share some examples of what it looks like in practice to utilize or embrace ChatGPT in the classroom in these ways?

Griffiths: Well, certainly, we've been teaching large language models in our more technical degree programs. Students need to learn how they work, and they need to understand the probabilistic models that go with them. And now that models like ChatGPT incorporate multiple languages, there's some interesting research to be done on the patterns of language in different countries and the statistical probabilities that go with them. We also have students in Digital Arts and Design, and I'm sure they're playing with DALL-E 2, looking at how what they produce as humans, the human creative piece, and, if you like, the machine creative piece can be brought together in interesting ways. So that's another area. Our College of Education people have been using all sorts of technologies, robots and simulations, and they're looking at how to incorporate this into the teaching environment. I think that, for example, AI can be very helpful in reaching students with various learning styles, particularly neurodivergent students. We just need to figure out the best way: how do we get students to learn that there are many more ways to learn now, when the tutor assistant could be the AI, along with the teacher? I just think there are many different ways in which we can incorporate the technology. We have an honors course in artificial intelligence; I taught in that course, actually. It was collectively taught by faculty from across the university. And mine was a little bit different: I taught international collaboration and competition. Because as we evolve AI, we're looking at policies for AI in each country; the European Union has its AI Act, and the White House puts out a whole bunch of things in this country. How does that affect the way we think about incorporating the technology?
But we have people teaching all different aspects: ethics, machine learning. Students had different professors in different weeks, and we taught them about different things. And because it was an honors course, there was a lot of conversation about, what does this mean? What does it mean to us as humans? What does it mean to organizations? What does it mean to communities that we can use AI? And to some extent, if we allow models and artificial intelligences to make decisions for us, then we are responsible for the outcome. That's an important piece, and I would hope we would get more people talking about this societal impact of the technology, because I think that's where the questions are. It's easy to talk about technology itself. But most people, especially technologists, don't spend a lot of time talking about these broader, dare I say, softer issues.

Kelly: That makes me wonder what you think higher education's role should be in advancing research in AI and exploring these issues. Is there a responsibility there? Because if you're leaving it to the corporate world, that's going to be a completely different outcome, I think.

Griffiths: That's a great question. And, you know, to some extent, the advances in research in AI and in the enabling technologies, cloud computing and so on, have increasingly come from the private sector, because of the push for people to want to use these capabilities. But I do think there is a role for universities in advancing research in AI and in informing how the workforce might evolve. I think we have a clear responsibility to communicate the positives and negatives of AI. We shouldn't let our own communities assume it's simple, that it's all good or all bad; of course, the answer is much more complex. I think there are interesting areas of expansion for universities to look into, as I say, the roles and responsibilities of the human and the artificial, depending on the application. And it does depend on the application. As I said, would you want your doctor never to look at you, but just to listen to whatever the AI tells them about you? Would you allow a surgeon to operate on you if he hadn't really looked at your health record and your scans and everything else? I think we wouldn't; I think we'd be a little more careful. But we also have to think about how everything is happening so fast with AI right now, partly because it was developed in an open way. There's this very, very rapid evolution and deployment of AI, because everybody now wants in on this, right? Every group wants to be able to plug in to the large language models, and there's a risk in everybody plugging in indiscriminately, I think. But how will that affect us as humans? How will it affect our cognition, how we understand the world, our perception, and our ability to interact with technology, with each other, and with society in general? I think those are important factors.
How can we ensure security, safety, and privacy as we increasingly allow these technologies to create waves and waves of data, and flows and flows of data, without stopping to say, Oh, my gosh, let's think about this for a minute? Because it's out there, in orders of magnitude that we probably never fully expected. And then I think universities can work to help form and facilitate the development of guidelines, guardrails, policies, and regulation and law. You know, we had conceived a while back, because we're so heavy into cyber, that we'd be doing cyber agriculture and cyber health, and we have worked with the local law school on cyber law. We haven't really talked about AI health, AI agriculture, AI law, but the same thing applies everywhere. And then I think there need to be ways in which developers of AI can build in, or be required to provide, what we call transparency and explainability, meaning how does it work, although we can't always explain complex systems, and then some level of accountability. Out of that comes the question, should we have review bodies? There's a group in the UK, I just read last week, where groups of technologists and educators at different levels are working to review various AI technologies, or artificial intelligences, as I would call them, for K-12 schools in the UK. They're going to look at those and assess a certain level of risk: these are the ones that seem to be okay, and these are the ones to steer clear of, so that teachers don't have to make those decisions themselves. I could see panels like that in different areas of healthcare, for example, or different areas of business, trying to do this.
We've also got people on our campuses who can try things out, test things out, see how a technology can be used and how it can be abused. I think we should use that population of young people to help test things out. We're sort of living labs, if you like. And it would be good to do that anyway, because our students are going to try it, right? They're going to try and break into systems, they're going to try and see whether it works or not. But what we need to do is make sure that they've looked at it carefully and understood it. Have they been thorough? Have they really tested this out appropriately? What other kinds of data or input could they have provided to the models? Does the model have bias? How do we avoid bias in models? Mostly until now, bias has occurred either completely in a thoughtless way or because we simply didn't have sufficient data. And now, with sensors all over the place, we're getting a lot of data. So I see a research role for the universities, obviously a teaching role, and I think there is a role in participating in the conversation about policies and regulations.

Kelly: Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online at Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
