TaintDroid Research Exposes Risks of Mobile Apps

Research by two universities and a corporate R&D lab has uncovered a consistent pattern among many Android apps of transmitting private information without explicitly informing the smartphone user. The joint research by Penn State, Duke University, and Intel Labs resulted in the development of TaintDroid, a real-time monitoring service that works as an extension of Google's Android platform to analyze how the apps a user has downloaded and installed obtain and release private information.
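
TaintDroid itself performs this tracking inside Android's Dalvik virtual machine, but the underlying technique, dynamic taint analysis, can be illustrated in a few lines: data from a sensitive source is tagged, the tag travels with the data through computation, and an alert fires when tagged data reaches a network sink. The following Java sketch is purely illustrative; all class and method names here are hypothetical, and TaintDroid's real implementation works at the interpreter level rather than through wrapper objects like these.

```java
// Simplified illustration of dynamic taint tracking, the technique
// TaintDroid applies inside the Dalvik VM. All names are hypothetical.
import java.util.EnumSet;

enum TaintTag { LOCATION, IMEI, MICROPHONE }

final class TaintedValue {
    final String value;
    final EnumSet<TaintTag> tags;   // which sensitive sources touched this value

    TaintedValue(String value, EnumSet<TaintTag> tags) {
        this.value = value;
        this.tags = tags;
    }

    // Propagation rule: a value derived from two inputs carries
    // the union of their taint tags.
    TaintedValue concat(TaintedValue other) {
        EnumSet<TaintTag> merged = EnumSet.copyOf(tags);
        merged.addAll(other.tags);
        return new TaintedValue(value + other.value, merged);
    }
}

public class TaintSketch {
    // Source: reading the GPS produces a tagged value.
    static TaintedValue readLocation() {
        return new TaintedValue("40.79,-77.86", EnumSet.of(TaintTag.LOCATION));
    }

    // Sink: sending data over the network checks the tags first.
    static void sendToServer(String host, TaintedValue v) {
        if (!v.tags.isEmpty()) {
            System.out.println("ALERT: " + v.tags + " data sent to " + host);
        }
        // ... actual network write would happen here ...
    }

    public static void main(String[] args) {
        TaintedValue loc = readLocation();
        TaintedValue msg = loc.concat(
                new TaintedValue(" via app", EnumSet.noneOf(TaintTag.class)));
        sendToServer("ads.example.com", msg);   // fires the alert: the tag survived the concat
    }
}
```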

The researchers tested 30 popular Android applications that use location, camera, or microphone data and flagged 105 instances in which these applications transmitted tainted data. "Of the 105, we determined that 37 were clearly legitimate," the researchers reported in "TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones." TaintDroid also revealed that 15 of the 30 applications reported users' locations to remote advertising servers; seven applications collected the device ID and, in some cases, the phone number and the SIM card serial number. "In all, two-thirds of the applications in our study used sensitive data suspiciously," wrote the authors. "Our findings demonstrate that TaintDroid can help expose potential misbehavior by third-party applications."

Although the study focused on the Android platform, the researchers believe that other platforms should be examined for "tainted" apps as well.

A video on the TaintDroid research project's site demonstrates the application exposing the covert transmission of private data when a user downloads and runs an Android wallpaper app.

"We were surprised by how many of the studied applications shared our information without our knowledge or consent," said William Enck, a Penn State graduate student of computer science and engineering. "Often, Smartphone applications have obvious user interface changes when they use information like your physical location. These cases usually occur in response to the user pressing a button with clear implications. The cases we found were suspicious because there was no obvious way for the user to know what happened or why."

After an app is downloaded but before it is fully installed, users frequently see a screen asking whether they will allow certain information to be accessed, presumably to make the app work properly. For example, local sensors on the phone, such as the GPS receiver, may feed data to a remote Web service in order to deliver new forms of information to the user, such as the route a bike ride has followed. If the user declines to allow that data access during installation, the app won't be installed.
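
At the time of the study, an Android app declared such data needs in its manifest (for example, ACCESS_FINE_LOCATION and INTERNET), and the system presented them to the user as a single all-or-nothing install-time prompt. The sketch below, using the standard LocationManager API and a hypothetical upload endpoint, shows how a location-aware app of that era could read the GPS and post coordinates to a remote server with no further on-screen indication; error handling and threading are omitted for brevity.

```java
// Minimal sketch of the pattern described above. The manifest would declare:
//   <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
//   <uses-permission android:name="android.permission.INTERNET" />
// The upload URL is hypothetical; a real app should also do network I/O
// off the main thread.
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RouteUploader implements LocationListener {

    public void start(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Request a GPS fix every 5 seconds or 10 meters, whichever comes first.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, this);
    }

    @Override
    public void onLocationChanged(Location loc) {
        // Once the install-time grant is given, nothing on screen indicates
        // this upload is happening: coordinates go straight to the service.
        try {
            URL url = new URL("https://route-service.example.com/track"); // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write((loc.getLatitude() + "," + loc.getLongitude()).getBytes());
            }
            conn.getResponseCode(); // complete the request
        } catch (Exception e) {
            // ignored in this sketch
        }
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}
```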

Apps rarely provide privacy policies stating how a user's sensitive information will be used, and users have no way of knowing where applications send the information they are given. As a result, the report states, "users must blindly trust that applications will properly handle their private data."

"Many of these applications access users' personal data such as location, phone information and usage history to enhance their experience," said Patrick McDaniel, an associate professor at Penn State. "But users must trust that applications will only use their privacy-sensitive information in a desirable way."

The researchers say that TaintDroid eventually will be released as a downloadable program itself. "Our findings demonstrate the effectiveness and value of enhancing smartphone platforms with monitoring tools such as TaintDroid," they concluded. The project Web site is currently collecting names of people interested in working with an open source version of the TaintDroid code.

The National Science Foundation is a financial supporter of the project.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
