

Campus Technology Insider Podcast May 2024

Listen: Tapping into AI Across Every Part of the University

Rhea Kelly  00:12
Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor in chief of Campus Technology, and your host.

Touro University has embarked on a system-wide initiative to incorporate artificial intelligence into all of its programs — not only in teaching and learning, but also across research, operations, and policy. The institution has also created a new academic AI position to serve as a dedicated AI leader, facilitate the use of AI across the university, and cultivate AI thinking in students, faculty, and staff. For this episode of the podcast, we spoke with Dr. Shlomo Argamon, Touro's recently appointed associate provost for AI, about his role, the importance of AI in higher education, how to prepare students for the new world of AI in the workforce, and whether or not AI opponents could be waiting for their time to attack. Here's our chat.

Hi Shlomo, welcome to the podcast.

Shlomo Argamon  01:16
Thank you very much. A pleasure to be here.

Rhea Kelly  01:19
So I know that you were recently appointed associate provost for AI at Touro University, and I think it's so interesting that AI is making its way into, you know, higher education roles. So I'd love to hear more about your new role and your interest in AI.

Shlomo Argamon  01:36
Yep, thank you. I also find it fascinating and wonderful that AI is finding its way into higher education as well as all across society, because I've been researching and teaching AI for 30-odd years. I like to say that I've been doing AI since before it was cool. And I've been interested in it simply because I'm interested in understanding the human mind. I've been just fascinated with the way in which computational methods, computer science and so forth, can be used to do things that are intelligent in various ways. And the last five to 10 years have seen a tremendous explosion in the usefulness of these techniques. And that's why it's finding its way everywhere, including into higher education.

Rhea Kelly  02:26
There's so much focus on generative AI as a trend right now, but I'm sure that your role encompasses more than just that. So maybe you could talk about how you define AI for your role, and what's important to know about that?

Shlomo Argamon  02:48
That's a great question. I'll first talk a little bit about my role and the various ways in which I'm looking at integrating and using AI at Touro University, and then about what the implications are for different aspects of AI. I view my role as looking at AI across the university as a whole, in four main areas. One is in education: How do we use AI in education? And also, how do we teach AI, and what should we teach about AI to our students in various fields? It's now the case that all of our students need to know something about AI — AI literacy of some sort. So that's one piece. The second piece is looking at how we integrate AI across research in the university: helping to facilitate the use of AI in research in many areas, because AI can now contribute a tremendous amount to research in all areas. Third is university operations: How can we use AI to improve our operations? Can we use AI to do marketing more effectively, to improve admissions, to help with budgeting and financial processing, and so forth? And fourth, which connects to all three of the above, is policy. What kinds of policies do we need around AI, in terms of security, and in terms of what our faculty, staff, and students should or should not be able to do with AI for various reasons? So bringing all of that together. In addition, especially when we think about education, there's a lot more to AI than just machine learning and generative AI. There are a lot of conceptual tools that we have developed in AI research over the last 60 or 70 years that can be useful to our students as they go out into the world. The notion is to teach our students something about what I would call AI thinking: ways of approaching and solving problems that come from the field of AI.
Thinking about how to use data to solve problems, how to structure problems, and how to formulate the problems you're facing in new ways that can help you solve them.

Rhea Kelly  05:34
I think it's so interesting that AI has a place across so many different areas. You mentioned teaching and learning, research, kind of that workforce readiness piece, and then the operations. So from a practical standpoint, how do you prioritize where to start with those things? Even how much time would you spend on generating policies versus, say, helping integrate AI into operations in a business office or something like that? How do you know what to do first?

Shlomo Argamon  06:10
Well, to be honest, the answer to that question is an evolving one, because it changes. Once I figure out how to prioritize things, I'm sure that within a week those answers will change again. So it's important to be agile, and to be responsive to conditions. Right now, what I and people across the university are trying to do is set some foundations. There are foundational questions of policy that need to be dealt with, and that is definitely an important priority. Setting up foundational educational programs across the university, in terms of specific courses to start teaching fundamentals of AI to all of our students, as well as establishing focused programs in AI, both on the more technical side and the more application side, is an important first step to building the foundations. Then another key priority is educating our faculty about AI: its possibilities, its risks, how to think about it, how to work with it. That's also a very high priority. One statistic shows that in most higher education contexts, somewhere between 10% and 20% of faculty are familiar with and use generative AI in some fashion, whereas essentially 100% of students use AI in some way. That is a tremendous disconnect between the faculty and the students. It's fundamental that we ensure that all of our faculty are familiar with AI, with what it can do and what our students can do with it, and start rethinking how they teach to fit this new world, really.

Rhea Kelly  08:25
I'm curious what your relationship is with IT leadership, because it seems like, you know, as a technology implementation, you know, across campus, it kind of goes hand in hand with IT.

Shlomo Argamon  08:40
Yeah, it does. And I work very closely with our IT leadership, particularly on the policy front, because a lot of these questions about what our faculty, staff, and students can do with AI in the context of the university depend to a great extent on what tools we have available, and on what the specific technological capabilities and risks are. For example, one big security risk that not everybody is aware of is that if you use one of these systems like ChatGPT or Bing Chat, anything that you type in potentially is going to be taken and used as training data for the system to improve itself in the future. Which means that if you use it to ask for help in, say, designing university strategy, everything that you told it and asked about university strategy, all of that presumably private information, is now sitting there at Microsoft or OpenAI or somewhere else, accessible, in principle, to someone else. So I work very closely with IT, because they have the expertise in how to assess these systems: What are the options that are available? What can we install? What can we use and make available to our faculty and to our students along these lines? It's essential that anybody working on AI, really any technology-related area like this, but certainly AI, works very closely with the IT people. At the same time, I think it's worth saying that my role, or any similar role dealing with AI in the university context, is not a subset of IT, because we're dealing with more than purely questions of software installation and how people work with the technology. We're looking at other issues as well.
So we're looking at questions like how we define plagiarism, for example; those sorts of issues also become important.

Rhea Kelly  11:08
It sounds like laying the foundation was sort of your step one. What is your vision for step two? Once the foundation is set, what's next?

Shlomo Argamon  11:22
Well, our aim in terms of education is to prepare our students for the new work and social environment that AI is facilitating, that AI is catalyzing in the world. And this is not just teaching our students how to use AI as a tool so that when they graduate, they'll be able to use it in their jobs and so forth. Because, first of all, the AI that we teach a freshman today is not the same AI that will be available four years later when they graduate college; it's going to be changing tremendously. So we have to give them the conceptual tools to be able to approach the new AI, not just when they graduate, but two years later, four years later, 10 years later. It's going to keep changing. They have to have the conceptual tools to grow and learn with these technological changes, and also the tools to be agile, lifelong learners, because so much in professional life, and in society, is going to continue changing very, very rapidly. One of the key things in AI, and you've mentioned generative AI a few times, is that a lot of people talk about disinformation and misinformation from generative AI. There's intentional disinformation; there's also unintentional disinformation. You ask generative AI to give you an answer about something, and sometimes it just makes things up when it can't give you a good, accurate answer. We need to teach our students how to think and look at things critically in a way that allows them to navigate this much more perilous, as it were, information landscape, but without becoming cynical and mistrusting of everything that they see. So the goal on the education front is to teach students these more general skills that allow them both to use AI and to deal with it critically as they move forward.
And related to that is simply bringing together people from across the university, in all different fields. Since AI touches all professions and all disciplines to some extent, I believe it can help address the malaise, in a sense, that a lot of higher education sits in. We hear a lot of discussion today about the crisis in higher education, and there are various ways of thinking about it and various problems that people talk about. The integration of AI in a broad and centralized fashion across the university, I think, helps to address this, because it can bring people together across the university. We need to think about the relationship between people and technology, which means also thinking very deeply about the relationship between people and people, and to bring together people in different disciplines across the university around issues of shared import. While the implications are different from field to field, the basic principles of how we need to think about AI, along with these principles of critical thinking and critical analysis, are general. And it gives us a direction for what we should be teaching our students at a deeper level than just the professional knowledge and skills of their particular discipline: really, how to think. This is something I think we've lost our way on a little bit over the last couple of decades in higher education. I personally see higher education as having a bit of an identity crisis, in terms of: What are we trying to be? What are we trying to teach? And I think, and this is my hope, this may be something that helps us regain a focus and revitalizes higher education.

Rhea Kelly  16:10
It kind of harkens back to the liberal arts approach to education. It's kind of ironic that technology would be the thing that brings back that mindset.

Shlomo Argamon  16:22
Very much so, very much so. That's exactly the case. I mean, going back to your first question, one of the things that I love about doing research in AI and teaching AI is that it's a field which brings together technology with many of the liberal arts: philosophy, linguistics, psychology, and so forth. So very much so.

Rhea Kelly  16:48
What are some of the challenges that you've encountered so far? Like, especially when you're integrating AI education across so many different disciplines that I imagine are traditionally sort of in their own silos, is that a challenge?

Shlomo Argamon  17:05
It hasn't been a challenge so far, but I've only been in this job for two months, so I haven't had much time to get too much done yet. I expect that there will be some challenges moving forward along those lines. The biggest challenge I've had so far, and actually it's a wonderful challenge, is that everybody, pretty much without exception, that I've spoken to about what we're trying to do with AI across the university is tremendously enthusiastic and very, very positive about it. And many, many people have lots and lots of ideas on how to do it. So my biggest challenge is your earlier question about prioritization: trying to see what are all the things that we could try to do, and then prioritizing what we should be focusing on, and what we should not be focusing on yet. That's actually a wonderful problem to have. I do anticipate that there will probably be some questions in terms of balancing the needs of some units versus other units when we're looking at integrating things. But so far, I've really been very happy to discover that that hasn't been much of an issue yet.

Rhea Kelly  18:28
So you're not running into any opponents of the focus on AI? Or have you?

Shlomo Argamon  18:36
Well no, actually, not yet. I mean, it's possible the opponents of AI are lying low right now and waiting to attack at the right time, but hopefully not.

Rhea Kelly  18:48
What would you say are kind of the most important AI skills for students to master going forward? I know you've talked about a lot of things, but can we kind of synthesize that into the, the vision of the future, so to speak?

Shlomo Argamon  19:03
Well, if we want to really focus in and ask what the main things are that students need to learn about AI across all of the different disciplines, I think the first thing is how to use generative AI usefully for their work. And this involves a couple of different things. One is understanding its capabilities, the area that's often called prompt engineering: How do you craft inputs to these systems that will get you the kinds of outputs you're looking for? This is an interactive process, so learning how to manage that process effectively is an important thing to learn. Part of that as well, which is not often talked about, is how you evaluate the results, and how you develop confidence that what you're getting is indeed what you want. That's true both for a specific product, and, if you're trying to use it in a way that can be repeated for similar questions, for knowing whether your result on this question will carry over to the next question you ask, or the one after that. So evaluating these results is also fundamental. And this relates back to a more general point, which is the issue of critical understanding of what you're seeing and what you're getting. Not just in terms of using AI as a tool: this critical understanding is a critical tool for simply being an informed citizen today, in the age of AI, much more so than in the past.
One of the things that generative AI has done is destroy, in a way, one fundamental assumption that we've held for thousands of years about communication, which is: If you see a text, a document that's well written, fluent, coherent, well organized, and seems to express expertise in the topic, you can assume that it was written by somebody with intelligence and knowledge. And unless they're actively trying to trick you, which does happen sometimes, you can place some reliance on the source of this document, because the document is well written and put together well. That's no longer true. AI can produce well written things that are total rubbish, or worse, if that can be said. So we need to learn, and we need to teach our students, how to be critical: how to look at these things and say, "Okay, this looks good, but what is the argument that's actually being made here? What are the sources? What's the evidence that's being presented?" These are classical things that have been taught in liberal education for hundreds of years, if not more, but their importance is much greater now. We need to separate our evaluation from the fluency of what we're reading, and look much more deeply at the specific evidence and whether we can really trust what we're seeing. What can we trust? What can't we trust? Those, I think, are really the key things that students in all areas need to learn about AI.

Rhea Kelly  22:51
And not only for text, but soon, you know, video, and imagery, and things like that.

Shlomo Argamon  22:58
Oh, very much so in video and imagery, and you need to think about the implications. Imagery, in a sense, is more dangerous, because images get into your brain in a way that text does not. Reading text stimulates your critical faculties to some extent, because you have to be interpreting the text as you're reading, whereas images look like reality. So you have to more actively engage your critical faculties when looking at images or videos.

Rhea Kelly  23:36
So obviously, Touro University is putting AI at the head of its strategic mission, in a way. Do you think that's something every university should be doing?

Shlomo Argamon  23:49
I think it's something that every university should be thinking about seriously. Whether AI should be at the forefront of strategy, it's hard for me to say that will be the case for every university. There's a tremendous diversity among higher education institutions. And in fact, going back to the theme of the malaise of higher education and the identity crisis, I think that part of the identity crisis is a feeling that all higher education institutions are, or should be, aiming at being the same thing, in a way. I think there is an idea like that out there, even if it's often not explicit. And I think we need a diversity of higher education institutions. We need that understanding among those of us who work in higher education, but even more so, we need that understanding to spread across society, so that people who are looking to go to college, or looking for higher education of any form, understand that different institutions have different purposes. They're doing different things. And students who are coming and looking to study should be asking, "Well, what am I looking for? What are the goals of this institution? What is the character of this particular institution?" And they should be looking for that match: not just which one is ranked higher on some list of universal rankings. That's not the question. The question is, what's a good match for me?

Rhea Kelly  25:21
Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on the major podcast platforms or visit us online. Let us know what you think of this episode and what you'd like to hear in the future. Until next time.
