

MIT Machine Learning Model Learns from Audio Descriptions

[Image: abstract depiction of a brain with an audio signal]

Computer scientists at the Massachusetts Institute of Technology have developed a new machine learning model for object recognition that learns from spoken audio descriptions paired with images, rather than from text transcripts of that audio.

"The model doesn't require manual transcriptions and annotations of the example [speech] it's trained on," the official announcement explained of the new method. "Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another."

Most machine learning models that incorporate audio are trained on text transcriptions rather than on the audio itself. While the current system recognizes only "several hundred words and object types," the researchers who developed it have high hopes for its future.
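To make the idea concrete, here is a minimal sketch of one common way to learn word-object associations without transcripts: two encoders map raw images and speech spectrograms into a shared embedding space, and a contrastive loss pulls matched image/speech pairs together. This is an illustrative assumption, not MIT's published architecture; all layer sizes and names here are hypothetical.

```python
# Illustrative sketch only -- not the MIT team's exact model.
# Two encoders embed raw pixels and raw speech features into one
# space; no transcripts or object labels are used anywhere.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):  # x: (batch, 3, H, W) raw pixels
        return F.normalize(self.net(x), dim=-1)

class SpeechEncoder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(40, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(256, dim),
        )

    def forward(self, x):  # x: (batch, 40 mel bands, time frames)
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(img_emb, spc_emb, temperature=0.07):
    """Matched image/speech pairs (the diagonal of the similarity
    matrix) should score higher than every mismatched pair."""
    logits = img_emb @ spc_emb.t() / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random stand-in data:
images = torch.randn(8, 3, 224, 224)   # raw images
speech = torch.randn(8, 40, 300)       # paired spoken captions
loss = contrastive_loss(ImageEncoder()(images), SpeechEncoder()(speech))
```

Trained this way, the model has no word labels at all; it can only succeed by discovering which stretches of speech co-occur with which visual content, which is the intuition behind learning words "directly from recorded speech clips and objects in raw images."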

"We wanted to do speech recognition in a way that's more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don't typically have access to," commented David Harwath, a researcher in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. 

"There's potential there for a Babel Fish-type of mechanism," he continued.

This experiment builds on a 2016 project, adding more images and data and taking a new approach to training. Details on how the model was trained can be found in MIT's official announcement of the project.

About the Author

Becky Nagel is the vice president of Web & Digital Strategy for 1105's Converge360 Group, where she oversees the front-end web team and deals with all aspects of digital strategy. She also serves as executive editor of the group's media websites, and you'll even find her byline on PureAI.com, the group's newest site for enterprise developers working with AI.
