Carnegie Mellon Gives Privacy Grade to Android Apps

Google Maps gets an A. The free version of Angry Birds gets a C. And My ABCs by BabyBus gets a D. The letters assigned to each of these Android apps are grades, and while A is great, D means failure — in privacy, that is.

Those grades and roughly a million others were assigned by a scanning application that combines automated analysis with crowdsourcing to capture an app's behavior and measure the gap between how people expect the app to behave and how it actually behaves. For example, while people expect an app such as Google Maps to use location data from the smartphone, there's little reason for a game like Angry Birds or an educational app such as My ABCs to read phone status and location or gain network access — other than to identify users for market and customer analysis and deliver targeted advertising.

That's why a research team at Carnegie Mellon University has launched PrivacyGrade, a Web site that publishes privacy summaries highlighting each app's most unexpected behaviors. The goal is to help smartphone users manage their privacy more thoughtfully.

"These apps access information about a user that can be highly sensitive, such as location, contact lists and call logs, yet it often is difficult for the average user to understand how that information is being used or who it might be shared with," said Jason Hong, associate professor in the Human-Computer Interaction Institute and principal investigator for the project in the Computer Human Interaction: Mobility Privacy Security (CHIMPS) Lab. "Our privacy model measures the gap between people's expectations of an app's behavior and the app's actual behavior."

PrivacyGrade also examines which third-party code libraries make use of the data collected by an app. If an app accesses location data, the program checks whether that data is used by a library such as Google Maps, suggesting it is simply being used for mapping, or by an advertising library, an indication that it will be used for targeted ads.

The site doesn't currently cover paid apps, on the presumption that because their developers earn income from sales, they're less likely to sell user data to other companies. Eventually, funding permitting, the CHIMPS team may expand the site to include additional apps, including those for iOS, Windows Mobile and Blackberry.

The work was funded through a National Science Foundation grant, as well as by the Army Research Office, NQ Mobile and Google, through its faculty award program.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
