

Cornell and Google Research How to Block Fake Social Engagement

If you've ever watched a crummy video on YouTube with thousands of views and wondered how it generated such positive attention, you may have been the victim of "fake engagement activities." According to a joint Cornell University and Google research team, these are ploys undertaken by "bad actors" posting fake content or artificially inflating the number of YouTube engagements through automated means or by paying people to "like" the content or add comments. The goal is to game the system by inflating engagement metrics in order to obtain better rankings for videos.

And the problem of fake engagement isn't limited to YouTube. It also surfaces on all of the major social sites — Twitter with fake followers, Amazon with fake reviews and Facebook with fake likes. As "In a World That Counts: Clustering and Detecting Fake Social Engagement at Scale," a paper recently presented at the 25th International World Wide Web Conference in Montreal, explained, this kind of spam activity is "all buyable by the thousand online."

The team set out to develop a way to distinguish fake activities from legitimate ones. The method the researchers developed, called "Local Expansion at Scale" (LEAS), analyzes the pattern of engagement behavior between users and YouTube videos. According to the paper, accounts posting fake hits or comments show a "stronger lockstep behavior pattern": groups of users act together, commenting on the same videos at around the same time.
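To make the lockstep idea concrete, here's a minimal sketch — to be clear, not Google's code: the engagement log, the field layout and the five-minute window are all assumptions for illustration. It buckets engagements by video and time window, then keeps the buckets where several accounts acted together:

```python
from collections import defaultdict

# Hypothetical engagement log: (user_id, video_id, unix_timestamp) triples.
# In the paper's setting these would be comments or "likes" on YouTube videos.
ENGAGEMENTS = [
    ("u1", "v9", 1000), ("u2", "v9", 1030), ("u3", "v9", 1055),
    ("u1", "v7", 2000), ("u2", "v7", 2010), ("u3", "v7", 2025),
    ("u4", "v9", 9000),  # a lone engagement, likely organic
]

WINDOW = 300  # seconds; accounts acting within the same window count as "together"

def lockstep_groups(engagements, window=WINDOW):
    """Group users who engage with the same video inside one time window."""
    groups = defaultdict(set)
    for user, video, ts in engagements:
        groups[(video, ts // window)].add(user)  # coarse time bucket
    # Keep only buckets where several accounts acted in concert.
    return {k: v for k, v in groups.items() if len(v) >= 3}

print(lockstep_groups(ENGAGEMENTS))
# e.g. {('v9', 3): {'u1', 'u2', 'u3'}, ('v7', 6): {'u1', 'u2', 'u3'}}
```

The same trio hitting two different videos inside tight windows is exactly the kind of coincidence that rarely happens organically.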

The work was begun by Cornell graduate student Yixuan Li while he was interning at Google and continued under the guidance of John Hopcroft, a professor of engineering and applied mathematics in Cornell's department of computer science, along with three Google researchers.

LEAS creates a map — an "engagement relationship graph" — that captures how frequently two individuals share common engagement activities within a short period of time. The engagement graph allows the researchers "to detect orchestrated actions by sets of users which have a very low likelihood of happening spontaneously or organically."
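The published algorithm expands clusters locally around seed users at YouTube scale; the sketch below is a much simpler stand-in, offered only to show the shape of the idea. It weights each pair of users by how many (video, time-window) buckets they share, then unions the heavy edges into suspicious groups; the threshold values are assumptions:

```python
from collections import defaultdict
from itertools import combinations

def engagement_graph(engagements, window=300):
    """Weight each user pair by how often both hit the same video
    within the same time window (a simplified engagement graph)."""
    buckets = defaultdict(set)
    for user, video, ts in engagements:
        buckets[(video, ts // window)].add(user)
    weights = defaultdict(int)
    for users in buckets.values():
        for a, b in combinations(sorted(users), 2):
            weights[(a, b)] += 1
    return weights

def suspicious_clusters(weights, min_weight=2):
    """Union users joined by heavy edges; co-engagement this frequent
    has a very low likelihood of happening organically."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for (a, b), w in weights.items():
        if w >= min_weight:
            parent[find(a)] = find(b)
    clusters = defaultdict(set)
    for u in parent:
        clusters[find(u)].add(u)
    return [c for c in clusters.values() if len(c) >= 3]

demo = [("u1", "v9", 1000), ("u2", "v9", 1030), ("u3", "v9", 1055),
        ("u1", "v7", 2000), ("u2", "v7", 2010), ("u3", "v7", 2025)]
print(suspicious_clusters(engagement_graph(demo)))  # [{'u1', 'u2', 'u3'}]
```

A graph like this is what lets orchestrated groups stand out: honest users rarely accumulate heavy edges to the same strangers over and over.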

To evaluate the accuracy of the system, humans manually reviewed postings from accounts LEAS had identified as spammers on YouTube. Even though some of those accounts had been created only recently, they had quickly run up long lists of postings. Their comments were often short snippets of text, such as "good videos," "very cool," "nice," "oh" or "lol." The researchers also found a few accounts posting comments under popular songs that were irrelevant to the given video but requested views and subscriptions. Several other "spammy" accounts posted comments containing malicious URLs and advertisements.
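The surface signals the reviewers described are simple enough to check mechanically. The sketch below is purely illustrative, not Google's filter; the solicitation patterns beyond the quoted examples are assumptions:

```python
import re

# Patterns drawn from the reviewers' observations: very short generic
# praise, view/subscribe solicitations and embedded URLs.
GENERIC = {"good videos", "very cool", "nice", "oh", "lol"}
URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)
SOLICIT_RE = re.compile(r"\b(subscribe|sub4sub|check out my channel)\b",
                        re.IGNORECASE)

def looks_spammy(comment: str) -> bool:
    """Flag a comment that matches any of the spam signals above."""
    text = comment.strip().lower()
    return (text in GENERIC
            or bool(URL_RE.search(text))
            or bool(SOLICIT_RE.search(text)))

for c in ["very cool", "Great breakdown of the chorus at 2:10",
          "subscribe to my channel!!", "win a prize http://spam.example"]:
    print(c, "->", looks_spammy(c))
```

Heuristics like these only catch the sloppiest accounts, which is why the graph-based clustering described above carries the real weight.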

LEAS now runs "regularly" at Google as one of multiple tools for detecting fake engagement activities. When fakes are discovered, the postings may simply be removed on the same day they're detected, or the offending accounts may be deleted altogether. According to the paper, LEAS has "greatly" expanded the take-down volume of fake engagement on YouTube.

The research work was supported by the United States Army Research Office.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
