Funding, Grants & Awards

Deep Learning Privacy Research Gets Google Go-Ahead

Two professors, one from Penn State and the other at Cornell University, have received grants from Google to research privacy issues connected to "deep learning." Deep learning is the name given to a machine learning approach that represents data in multiple layers of increasing abstraction. The method can discover complex patterns in large datasets that are useful for making "intelligent" decisions in the real world. For example, deep learning is behind search engine efforts to identify the contents of an untagged photograph so that the right kinds of images show up when a user searches for a particular word or phrase.
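
The "multiple layers of increasing abstraction" idea can be made concrete with a toy example. The sketch below is illustrative only — the array sizes, random weights and activation function are arbitrary assumptions, not details of any system mentioned in this article. Each stacked layer re-represents the previous layer's output in a new, typically more abstract, set of features:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

# Toy input: a batch of 4 examples, each with 8 raw features
# (think of a tiny flattened image).
x = rng.normal(size=(4, 8))

# Two stacked layers with random (untrained) weights: the first maps
# 8 raw features to 5 intermediate features, the second maps those
# to 3 higher-level features.
w1 = rng.normal(size=(8, 5))
w2 = rng.normal(size=(5, 3))

h1 = relu(x @ w1)   # first-layer representation of the batch
h2 = relu(h1 @ w2)  # second, more abstract representation

print(h2.shape)  # each example is now described by 3 learned features
```

In a real deep learning system the weights are learned from data rather than drawn at random, and many more layers are stacked, but the layer-on-layer structure is the same.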

The new grants will allow Adam Smith, an associate professor of computer science and engineering in Penn State's School of Electrical Engineering and Computer Science, and Vitaly Shmatikov, an associate professor of computer science at the University of Texas at Austin who is serving as a visiting faculty member at Cornell Tech, to pursue two tracks. One investigation will look into what deep learning systems can "leak" about "sensitive inputs"; the other will work on developing a system for preserving privacy in deep learning.

Google Faculty Research Awards, most recently announced in August 2015, support cutting-edge research in the areas of computer science and engineering and related fields. The latest list includes 113 teams of recipients from universities around the world exploring geomapping, human-computer interaction, natural language processing and 15 other broad categories.

When their funded work is completed, Smith and Shmatikov will share their results with relevant groups at Google and have the chance to collaborate with those groups, but all the output of the research will be made publicly available.

"Deep learning is already widely used, especially at Google, to recognize speech and images and to do lots of other things like drive cars," said Smith in a press release. "Deep learning is often applied to very sensitive data, however. We are interested in making sure that sensitive, private data remain private."

Smith will develop a system to enable many data holders to collaborate on increasingly accurate neural network models without sharing their training datasets or leaking sensitive information about their contents.
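
The kind of collaboration described above can be sketched in miniature. In the hypothetical example below, three data holders jointly fit a shared linear model by exchanging only clipped, noise-perturbed gradients, so raw records never leave their owners. The clipping threshold, noise scale, learning rate and linear model are arbitrary illustrative choices — a common pattern in privacy-preserving learning, not the design of Smith's actual system:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gradient(w, X, y):
    # Gradient of mean squared error for y = X @ w, computed
    # locally on one holder's private data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def noisy(grad, clip=1.0, sigma=0.1):
    # Clip the gradient's norm, then add Gaussian noise before
    # sharing, so the update reveals less about any single record.
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return grad + rng.normal(scale=sigma, size=grad.shape)

# Three data holders, each with a private dataset drawn from the
# same underlying relationship y = X @ true_w (plus a little noise).
true_w = np.array([2.0, -1.0])
holders = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    holders.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

# The shared model: only noisy gradients cross holder boundaries.
w = np.zeros(2)
for step in range(200):
    shared = [noisy(local_gradient(w, X, y)) for X, y in holders]
    w -= 0.05 * np.mean(shared, axis=0)

print(w)  # approaches true_w despite the added noise
```

The tension the research addresses is visible even here: the added noise protects individual records but slows and blurs convergence, so the interesting problem is getting strong privacy guarantees while keeping the shared model accurate.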

About the Author

Dian Schaffhauser is a senior contributing editor for 1105 Media's education publications THE Journal and Campus Technology. She can be reached at dian@dischaffhauser.com or on Twitter @schaffhauser.
