USD Voice: Changing the Conversation with Students

A Q&A with Chris Wessells

At the University of San Diego, developers are working on an incubator project that allows students to "talk" with the university, in their own voices and with their own choice of words. The program, called USD Voice, is fully operational and currently lets students conduct day-to-day business, from questions about events on campus to directory information, campus locations, and hours.

The system is much more than a hierarchy of responses based on voice command recognition — it uses natural language processing to analyze queries and consult appropriate data sources. USD Voice is still in Version 1.0, and developers are experimenting with a variety of information applications. They are also examining their options for leveraging technology development and tools already appearing in the marketplace — technology emerging from companies like Amazon, Google, and Apple.

Here, Vice Provost and Chief Information Officer Christopher W. Wessells explores how USD Voice is enhancing his institution's communications with students.

"I absolutely think we are headed toward the integration of natural language processing with teaching and learning practice in higher education."  — Chris Wessells

Mary Grush: Have many universities explored natural language processing for their communications with students and other constituencies? How did your institution get started with USD Voice?

Chris Wessells: I'm sure there are other universities experimenting with natural language processing in various contexts, but USD Voice is the first project we know of that applies this technology to the day-to-day events and business of the university.

USD Voice is one of the areas where we are investing in experimentation with new technologies. I view Voice as an incubator project: The underlying premise is that natural language processing will help people interact expeditiously with the university, to get information in a simple way — information that is highly relevant for them.

Grush: Is your work with USD Voice foreshadowing a trend in computing applications?

Wessells: We believe that, looking to our future, there will be a range of devices and services available on the Internet of Things that support this type of information processing — for example, technologies similar to Amazon Echo, or Google Home, or Apple's Siri, and others that are already available.

Grush: What will this type of technology do for people?

Wessells: It will allow people to get information quickly — information that's relevant to their immediate or future needs. The simple task of asking questions and receiving answers is a way to advance the speed and ease by which we gain information. It simplifies and streamlines queries.

Grush: Where are you in your experimentation or development with USD Voice?

Wessells: It is fully operational and deployed throughout campus. We began work on the project in early 2016 and launched Version 1.0 in the summer of 2016.

But for my IT organization, Version 1.0 is still an experiment, just in its infancy. As a development project, it has been successful thus far — especially if you consider that we already see many ways that it can be improved and have plans to follow up on those improvements.

Grush: Was USD Voice developed specifically for the students?

Wessells: Interestingly, no. Originally, we talked about USD Voice as a means of getting parents and alumni more engaged with activities on campus. But we quickly realized that there's also a big potential for students that we could explore. So, we looked at the implications for parents, alumni, off-campus students, our 2,500 residential students, and other constituent groups. Watching this project "take off" has been exciting.

Grush: It sounds like you were originally planning to use this technology to push out particular information to specific groups… is that the case?

Wessells: Yes. We started doing that with parents and then moved on to other groups. For example, our off-campus students may have felt out of touch with "what's going on" — so we discovered that using USD Voice with the Amazon Echo to keep them informed was yet another benefit of this type of technology.

When you think back on the evolution of technologies we have relied on over the years to "deliver information," the Web and Web portals were the earlier means of doing that. Our mobile initiatives represent a further advancement in communications and information delivery… Now, we're seeing natural language processing as an emerging technology for the conveyance of information. This is simply the progression of technologies and strategies to make information available to people.

Grush: Clearly, you've thought about all your constituencies and what this technology progression means for them, especially as you push out useful information to them. But getting back to where the technology is going, what happens when you take natural language processing and pair it with data sources?

I would think that pairing could create considerably more interaction than voice recognition triggers that push out a hierarchy of responses. Is there a special opportunity there for academic applications?

Wessells: Yes, of course we pair natural language processing with relevant data sources, but the picture is not quite that simple. Let's step back for a minute, to consider natural language processing coupled with data sources. At USD, we already have hugely rich data sources — and they are integrated well.

This is largely because my team lives in the world of APIs and integration. Organizations like universities need not only to have data available, but also to have experienced teams of people who can integrate everything together.

One of the big challenges we discovered with USD Voice is making sure we are designing services that can rely on the quality of the underlying data. People working in different business units may enter their information differently. For example, if an admissions officer is doing something in Salesforce for a prospective student, and they abbreviate a particular phrase, that may present a problem in the dataset that will eventually be extracted and used in our natural language process.

There is an enormous array of complexity around the quality of data — not just the simple introduction of it, or the mass of it. Most organizations can't really begin to consider natural language processing applications unless they have a lot of highly integrated, accessible data.
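The abbreviation problem described above is often handled with a canonicalization pass before free-text fields reach the natural language pipeline. The sketch below is purely illustrative — the abbreviation table and field contents are invented for this example, not USD's actual Salesforce schema or USD Voice's implementation:

```python
# Hypothetical sketch: canonicalizing free-text entries before they feed
# a natural language pipeline. The abbreviation map is invented for
# illustration; a real deployment would curate it from observed data.

ABBREVIATIONS = {
    "bldg": "building",
    "dept": "department",
    "intl": "international",
    "lib": "library",
}

def canonicalize(text: str) -> str:
    """Lowercase a free-text field and expand known abbreviations
    so differently entered records match the same query terms."""
    words = text.lower().replace(".", "").split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)
```

Running every extracted field through a step like this means that "Intl. Studies Dept" and "International Studies Department" resolve to the same canonical string before the voice layer ever sees them.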

Grush: What does your data infrastructure look like at USD, and what kinds of data might you use, either now or in the future, for USD Voice?

Wessells: Right now we have more data than we can put to use with Version 1.0 of USD Voice. Remember, with Version 1.0 we are largely answering queries about hours, locations, and so forth.

We actually have a massive amount of data for every individual, and a lot of that may be accessed in future versions of USD Voice — provided strict authentication is in place when we are accessing student records.

Just to give you a quick sketch, in terms of data sources, we have our Ellucian student information system — drawing from Banner XE — so of course that's an extremely big data source.

Our portal technology, which is another Ellucian product, contains all sorts of information on events. And we have campus events, news, and basic university information available by tapping data sources we've built through Salesforce, our custom Web news and events solution, and other platforms.

Salesforce is huge because we've created all sorts of data on that platform: We track which students go to which events, what they register for, what they show up for… We port student information into Salesforce from Banner, so as a result it gives us a massive data source on a given individual. And these can be current students, prospective students, and even alumni. Salesforce is rendering a true 360-degree view of the individual student experience.

Oracle EBS holds data about faculty and staff… still more data to think about as we approach development on future versions of USD Voice.

Ultimately, everything a student has done in the course of doing business with the university is represented in some form of data we already have that could eventually be used by USD Voice — if we integrate all that data, and do proper authentication and security, of course. Again, you have to have a skilled team to make all of this work. Fortunately, at USD, we do.

Grush: Ultimately, then, for future versions of USD Voice, how will you build and provide responses from all that information? Are you looking at a very structured hierarchy of responses built from pre-coordinated index terms? Or will you handle fairly complex post-coordination of terms or concepts? How will you get to the point where natural language processing is intelligent enough to discern how any given user communicates — using jargon within a particular subject discipline, for example?

Wessells: Research being done by players like Amazon, Google, and Apple will, over time, greatly improve the underlying natural language processing technology, making it more adaptive in nature.

And I couldn't predict at this point how they will do that. But I do know that many of the "gotchas" we've encountered while working on this project fall into the category of the manner in which people express themselves.

As examples, dialects, abbreviations, and similar specifics of how any one person speaks can cause the system to break down right now. And of course we have challenges with pronunciation of words, with idioms, or slang — the list of pitfalls goes on, as you can probably imagine.

Grush: So are these barriers just too big?

Wessells: No, let's just say they cause this technology to be "not so perfect" at this point in time.

Grush: Given what's at stake — university communications — don't you want and need it to be close to perfect?

Wessells: Absolutely. But looking down the road, a lot of this processing is going to depend on vendors — like Amazon, and Google, and Apple — these bigger players will need to perfect the technology.

At this point, our developers do a lot of hard coding to deal with specific issues they are aware of — such as the particular manner in which we might expect people to phrase a question. But, again, in the future, the technologies that Amazon, Google, Apple, and others develop may have enough intelligence to adapt to these types of challenges. It's our hope for the future in this particular realm. And looking at the marketplace, honestly, I'm optimistic that these are areas where we'll see the kind of dramatic improvement we need.
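The kind of hard coding described above often takes the shape of hand-maintained phrasing patterns that route an utterance to an intent. This is an illustrative sketch only — the patterns and intent names are invented here, not USD Voice's actual code:

```python
# Illustrative sketch of hand-coded phrasing patterns mapped to intents,
# the stopgap a team might maintain while waiting for more adaptive
# vendor NLP. Patterns and intent names are hypothetical.
import re

PHRASINGS = [
    (re.compile(r"\b(when|what time)\b.*\b(open|close)", re.I), "BuildingHoursIntent"),
    (re.compile(r"\bwhere (is|are)\b", re.I), "CampusLocationIntent"),
]

def match_intent(utterance: str) -> str:
    """Return the first intent whose pattern matches the utterance."""
    for pattern, intent in PHRASINGS:
        if pattern.search(utterance):
            return intent
    return "FallbackIntent"  # unrecognized phrasing falls through
```

The brittleness Wessells describes is visible here: any phrasing the patterns don't anticipate — a dialect, an abbreviation, an unexpected word order — falls straight through to the fallback.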

Grush: That raises the question, though: Is there some risk there, too? Are these companies going to be up to the task of quietly, reliably, and unobtrusively facilitating the conversation between a university and its own students?

Wessells: I will say that with the massive teams of developers those companies have available to them, this is absolutely a challenge they can meet. And frankly, there is no way that most university developer staffs could possibly do this on their own.

Of course, we don't want to give up the keys to the castle to companies like Google, Amazon, and Apple… We are always thinking about our students, our data, and our community.

And of course, any discussion of platforms for analytics just wouldn't be complete without a reference to the huge amount of data that is building around each and every one of us — from business relationships with our organizations, to data collection devices on the Internet of Things… our digital footprints are out there.

But think of it this way: It runs parallel to what many universities have done to outsource things like mail and calendaring and docs. In the foreseeable future, as natural language processing matures, we all will have moved to either Office 365 or to Google, or to a combination of both. Typically, IT operations at a university will not have enough resources to advance these major technologies on their own… and so, we outsource.

We may use software developer kits for voice and other applications — as we do already with Amazon's Alexa Skills Kit — to refine specific aspects of our applications and get more accurate results. But at the end of the day, we are definitely dependent on those larger companies for building and enhancing the technology overall.
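With the Alexa Skills Kit mentioned above, a campus skill ultimately receives an intent request as JSON and returns a speech response envelope. The sketch below follows the documented Alexa request/response JSON shape, but the intent name, slot, and hours data are hypothetical stand-ins, not USD Voice's actual skill:

```python
# Minimal Lambda-style handler sketch in the spirit of a campus "hours"
# skill built with the Alexa Skills Kit. Intent, slot, and data below
# are invented for illustration.

HOURS = {
    "copley library": "8 a.m. to midnight",
    "student center": "7 a.m. to 10 p.m.",
}

def handle_request(event: dict) -> dict:
    """Answer a hypothetical BuildingHoursIntent using its 'place' slot."""
    intent = event["request"]["intent"]
    place = intent["slots"]["place"]["value"].lower()
    hours = HOURS.get(place)
    speech = (f"{place.title()} is open {hours} today."
              if hours else f"Sorry, I don't have hours for {place}.")
    # Alexa expects this response envelope shape.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The kit handles the speech recognition and slot extraction; the university's own code, as Wessells notes, is mainly this thin layer that maps recognized intents onto campus data.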

Grush: What about the role of your developers? It seems like there might be a place for higher education IT department developers to work alongside these companies, to get involved in supporting or advocating for selected functionality. And in doing that, possibly they could be effective on behalf of the university, to protect its brand.

What roles are you finding for your developers?

Wessells: With my development team, I just have to say, first, they are extraordinary, and I really need to make sure that they are working on more advanced and interesting projects — to engage them, and to keep them here. If they were just working on operational "stuff", these super-talented people would get bored pretty quickly.

In terms of managing the developer side of the house, I really must offer them incubator projects — including things that are so forward-looking that we don't have all the answers: The projects, or some aspects of them, may or may not work out. And we don't know just exactly what path the development may take.

That said, USD Voice is one of those incubator projects, as I pointed out earlier. There is room for innovation, and there is room for risk.

So I guess to answer your question, the developers will be responsive to what's needed… But we don't really know right now, exactly what that will be.

Grush: What indicators do you have that you may be having some success with USD Voice at this point?

Wessells: Avi Badwal, our senior director of enterprise technologies, and the USD developer team are already working on Version 2.0. There is a sense of success from a development perspective, and what's more, a sense that we are working on a project that is really going to benefit the university.

Grush: What about development on the academic side… for example, pairing USD Voice with intelligent tutors, or doing some kind of productive integration with the learning management system?

Wessells: Learning management systems at this point have been more effective for mainstream courses — helping with online courses and making content readily available to students. I do hope eventually learning management systems will incorporate more advanced technologies like natural language processing. But, we are still early in terms of thinking about that kind of integration.

Of course, there is a service orientation to USD Voice. When we first went down the path of just conceptualizing what natural language processing might be at a university, we thought that this technology was going to be great for taking on services — maybe services at the level of a reception desk, for example.

That was an interesting idea, but we quickly realized that an information kiosk-type function was not going to be the ultimate outcome of advanced work with natural language processing.

Integration with academic services and learning management systems is a path this may ultimately follow. I absolutely think we are headed toward the integration of natural language processing with teaching and learning practice in higher education.

That said, with our USD Voice Version 1.0, we really haven't scratched the surface of how natural language processing can be applied within the academic side of the house.

We're really still at the initial stages with USD Voice, dealing with events — what's open, closed… where things are located, and so forth. But that's a great place to start, as it turns out.

Grush: Does USD Voice consult external data sources?

Wessells: Yes, one of the interesting moments was when one of our cafes on campus had not entered its hours of operation — Amazon Echo was smart enough to consult external data sources and provide some information from Yelp — as an interim step to solving the issue. It was a nice learning experience for us.
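The cafe anecdote above describes a fallback chain: consult the campus data source first, and go to an external provider only when the local record is missing. A minimal sketch of that pattern, with both lookup functions as invented stand-ins rather than real campus or Yelp APIs:

```python
# Sketch of the local-first, external-fallback pattern described in the
# anecdote. Place names, hours, and both data sources are hypothetical.
from typing import Optional

def campus_hours(place: str) -> Optional[str]:
    """Stand-in for the campus data source; a business unit that never
    entered its hours yields None, as in the cafe anecdote."""
    local = {"aromas cafe": None, "la paloma": "7 a.m. to 8 p.m."}
    return local.get(place)

def external_hours(place: str) -> str:
    """Stand-in for an external listing (a Yelp-style source)."""
    return "roughly 7 a.m. to 9 p.m., per an external listing"

def lookup_hours(place: str) -> str:
    """Prefer the campus record; fall back to the external source."""
    return campus_hours(place) or external_hours(place)
```

The fallback keeps the service useful while the missing local record gets fixed — which is exactly the interim step the Echo took with Yelp.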

The same could be said for your question about the intelligent tutor — for example, we might take some external resources like Khan Academy into account, after exploiting our local campus resources to the fullest. We could allow Alexa to make the proper offering of resources.

Grush: It sounds like you are already drawing on many existing internal and external resources. How far are we away from seeing all that you've just described, and similar scenarios, in production — especially on the academic side?

Wessells: My sense is that we are still a few years away from production work on the academic side. But USD Voice, applied in the academic space, could be very powerful for helping students with their learning experience and preparing for exams.

Grush: It sounds like your developers have a lot ahead of them as they build USD Voice into the future. What will be some of the highlights of your work towards Version 2.0?

Wessells: The domains that we've focused on so far with USD Voice have answered questions like what's open or closed… where to eat… what computer labs are open and exactly where they are… I think in Version 2.0 we'll be substantially expanding these domains: campus news, the availability of textbooks and course resources — just as a few examples.

As for a bit more advanced functionality, we'll be looking at how to incorporate Voice in authentication… not just the content of what's being said, but the intonation and inflection of the speaker. While this may not be implemented with Version 2.0, it's a direction we'll be experimenting with, along with other improvements.

Grush: What do you see as the force that will drive your development of USD Voice in the future?

Wessells: The whole concept of student success at the university is key. At USD, we simply try to find new and innovative ways to foster greater student engagement.

Schools like ours — private, liberal arts institutions — are focused acutely on student retention and completion. At USD we feel that certain technologies — like natural language processing — offer us new possibilities to develop for our educators' toolkit, to help the students succeed.

It's important to understand that perspective. Serving our students with better tools is the central reason we're looking into advanced technologies and investing in the development of incubator projects like USD Voice.

[Editor's note: Images courtesy of the University of San Diego.]
