Research Project Mixes Humans and Machines for Better Crowdsourcing

How do you maintain the "big picture" in crowdsourcing when many different people are each handling separate pieces of the work? A team of researchers at Carnegie Mellon University and Bosch argues that computers can guide the process, and in many cases do a better job than a human overseer could. In a pair of papers presented this week, the team explored two ideas: using a computational system to manage oversight of a complex project through the small contributions of individuals, and using a hybrid approach that combines human judgment with machine algorithms.

"The Knowledge Accelerator: Big Picture Thinking in Small Pieces," and "Alloy: Clustering with Crowds and Computation," were both presented this week at an ACM conference in San Jose, CA.

The first paper describes a project to address a common problem with crowdsourcing: if the work is too complex, no single individual can maintain the big picture; when oversight rests with one person, that person tends to become a bottleneck; and the work can slow down or fall apart altogether if that individual leaves the effort.

To overcome those weaknesses in crowdsourcing, the researchers set up a prototype system that performed "information synthesis" and then made new assignments to participants based on what was uncovered as those individuals worked on their small pieces.
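In rough terms, that loop might look something like the Python sketch below, in which the system issues small crowd tasks, folds each answer back in, and decides what to ask next. The task types, queue and synthesis step here are illustrative assumptions, not the team's actual implementation.

```python
from collections import deque

def post_to_crowd(task):
    """Placeholder: post one microtask (e.g., to Mechanical Turk) and
    return the worker's answer as a dict."""
    raise NotImplementedError

def synthesize(findings):
    """Placeholder for the synthesis pass that merges small contributions
    into a single coherent article."""
    return "\n\n".join(f.get("text", "") for f in findings)

def answer_question(question, max_tasks=50):
    # Start with one small task; no individual ever sees the whole job.
    queue = deque([{"type": "find_source", "query": question}])
    findings = []
    issued = 0
    while queue and issued < max_tasks:
        task = queue.popleft()
        result = post_to_crowd(task)   # one worker handles one small piece
        findings.append(result)
        issued += 1
        # The system, not any single worker, maintains the big picture:
        # it generates follow-up tasks based on what has been uncovered.
        for follow_up in result.get("follow_up_tasks", []):
            queue.append(follow_up)
    return synthesize(findings)
```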

"In many cases, it's too much to expect any one person to maintain the big picture in his head," said co-author Aniket Kittur, associate professor in the university's Human-Computer Interaction Institute, in a press release. "And computers have trouble evaluating or making sense of unstructured data on the Internet that people readily understand. But the crowd and the machine can work together and learn something."

To test the prototype, the team fed 11 different questions into the system.

The resulting "articles" were then compared to the articles that cropped up as the top five Google results for each question. Crowdsourcing outsourced through Amazon's Mechanical Turk service was used throughout.

In almost all cases, the articles developed by the prototype were rated significantly higher by reviewers (also hired through Mechanical Turk) than the comparison web pages, including some written by experts with "well-established reputations." The two exceptions were travel-related questions; those pages, the researchers suggested, didn't fare as well because travel is a "strong Internet commodity," and websites in that segment put a lot of effort into curating "good travel resources."

Bosch is already adapting the Knowledge Accelerator to streamline the development of diagnostic and repair information for complex products.

Co-author Ji Eun Kim of the Bosch Research and Technology Center in Pittsburgh, said she considers the system "a powerful new approach to synthesizing knowledge" and expects to apply it in "a variety of domains" within the company "to unlock the potential of highly valuable but messy and unstructured information."

Alloy, the second and related project, uses machine learning to give structure and coherence to the information collected by workers. Humans provide judgments that help the machine learn how to categorize; the system then automatically recognizes patterns and clusters the information. What's particularly new, the researchers stated in their paper, is the "cast and gather" approach, in which machine learning provides the structure by which crowd judgment is organized.
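In spirit, such a hybrid loop might look something like the Python sketch below, in which crowd workers categorize a small sample of items and a simple text model then extends their categories to the rest. The seeding and assignment steps are illustrative assumptions rather than the paper's actual "cast and gather" algorithm, and crowd_label_sample() is a hypothetical stand-in for real crowd tasks.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def crowd_label_sample(items):
    """Placeholder: ask crowd workers to group a small sample of items,
    returning {cluster_name: [item, ...]}."""
    raise NotImplementedError

def cluster_with_crowd(items, sample_size=20):
    # "Cast": humans categorize a small sample, creating seed clusters.
    seeds = crowd_label_sample(items[:sample_size])

    # "Gather": the machine extends those human-defined clusters to the rest.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(items)

    # One centroid per human-named cluster, averaged over its seed items.
    centroids = {}
    for name, members in seeds.items():
        idx = [items.index(m) for m in members]
        centroids[name] = np.asarray(matrix[idx].mean(axis=0))

    # Assign every item to the closest human-defined cluster.
    assignments = {}
    for i, item in enumerate(items):
        vec = matrix[i].toarray()
        best = max(centroids,
                   key=lambda n: cosine_similarity(vec, centroids[n])[0, 0])
        assignments[item] = best
    return assignments
```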

The research was supported by the National Science Foundation, Bosch and Google.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
