2 CS Students Crack Case of Identifying Fake Twitter Accounts

Twitter parody accounts for Mike Pence and Steve Bannon. Source: Robhat Labs.

Two undergraduate computer science students at the University of California, Berkeley have undertaken a job Twitter has been struggling with: figuring out when incendiary tweets have come from a bot instead of a real person. Ash Bhat and Rohan Phadte recently released Botcheck.me, a Google Chrome browser extension that places a button onto every Twitter profile and tweet. By clicking the Botcheck.me button, a user can tell whether the account is likely run by a person or an automated program.

As the duo explained in a report published on Medium, they undertook the work specifically to address political propaganda bots, which are intended to weaken and subvert American political discourse. These bots are automated or semi-automated Twitter accounts that hide behind the façade of a real person and often retweet other content, especially fake news, rather than tweeting their own.

Fake photos showing President Obama awarding Anthony Weiner, Bill Cosby and Harvey Weinstein the Presidential Medal of Freedom. Source: Robhat Labs.

Bhat and Phadte's extension uses a model that identifies such accounts based on the tweeting patterns characteristic of bots. Among the characteristics of bot accounts:

  • Account creation dates align with the days just before elections;
  • They attempt to get followers to follow other accounts also classified as having bot-like behavior;
  • They tend to tweet much more frequently than average users — in some cases every few minutes;
  • They also tend to retweet frequently from parody accounts, which themselves may not exhibit bot-like behavior, but do draw the majority of retweets from accounts showing bot behavior;
  • They tweet fake news, fake photos and other forms of misinformation;
  • They represent both major political parties, using hashtags #impeachtrump and #maga ("Make America Great Again") disproportionately;
  • They may change usernames for the same Twitter account;
  • They may have real people behind them who sometimes create original tweets and respond to inquiries from other users (easily managed behind the scenes with TweetDeck);
  • They hack into Twitter accounts to use followers as a way to expand their own network; and
  • They buy Twitter accounts that have been compromised and then resold.
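Heuristics like those above can be combined into a simple weighted score. The sketch below is purely illustrative; the thresholds, weights, and account fields are assumptions for demonstration, not RoBhat Labs' actual model, which the duo have not published in detail.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative thresholds -- assumptions, not values from Botcheck.me.
HIGH_TWEET_RATE = 50                   # tweets per day
ELECTION_WINDOW_DAYS = 14              # account created shortly before an election
ELECTION_DATES = [date(2016, 11, 8)]   # example: 2016 U.S. general election

@dataclass
class Account:
    created: date
    tweets_per_day: float
    retweet_ratio: float            # fraction of activity that is retweets
    past_usernames: int             # username changes observed on the account
    political_hashtag_ratio: float  # fraction of tweets with e.g. #maga / #impeachtrump

def bot_score(acct: Account) -> float:
    """Combine simple heuristics into a 0..1 bot-likelihood score."""
    points = 0  # integer weights avoid float rounding issues
    if any(0 <= (e - acct.created).days <= ELECTION_WINDOW_DAYS for e in ELECTION_DATES):
        points += 25  # created just before an election
    if acct.tweets_per_day > HIGH_TWEET_RATE:
        points += 25  # tweets far more often than a typical user
    if acct.retweet_ratio > 0.8:
        points += 20  # mostly amplifies others' content
    if acct.past_usernames > 2:
        points += 15  # handle churn on the same account
    if acct.political_hashtag_ratio > 0.5:
        points += 15  # saturated with partisan hashtags
    return min(points, 100) / 100

suspect = Account(date(2016, 11, 1), 120.0, 0.95, 3, 0.7)
print(bot_score(suspect))  # 1.0 -- every heuristic fires
```

A production classifier would learn such weights from labeled accounts rather than hard-coding them, but the scoring structure is the same: each behavioral signal nudges the account toward or away from the "bot" label.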

The duo, who run RoBhat Labs out of a Berkeley apartment, are also the minds behind NewsBot, which identifies the political leaning of a given article posted to Facebook; and Bhat worked with another student to create an app that monitors White House website changes and alerts subscribers to new executive orders and memos.

Now the pair would like Twitter to take up the cause of helping people understand the basis for what they read. "Our hope is that the technology that we create can be helpful to individuals [to] take proactive action about the information they read, but we believe that the responsibility to moderate malicious automated content on Twitter falls on Twitter — not the users," they wrote in the explanation of their latest program. "We feel that this is a problem that has led to the recent political discourse threatening the peace and harmony within our nation. We want to extend our help in any way that is needed."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
