Research Finds that Anonymized Mobile Data Still Leads to Privacy Risks


When you allow an app to identify your current location through your mobile device, is the result used to optimize your experience, or does it put your private data at risk? That's the question behind a study by researchers at MIT and Imperial College London, who recently published their findings in IEEE Transactions on Big Data.

According to MIT's Daniel Kondor, Behrooz Hashemian, and Carlo Ratti and Imperial College's Yves-Alexandre de Montjoye, massive anonymized datasets detailing people's movement patterns through their location stamps can be put to "nefarious purposes." Given just a few randomly selected points from a mobility dataset, someone could identify the individuals behind the data and learn sensitive information about them. The attack works by merging an anonymized dataset with a non-anonymized one, using the overlapping records to reveal what the anonymization was meant to hide.
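To make the linkage idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration (the toy records and the pseudo_id, time, and cell column names); the study's actual datasets and matching procedure are far larger and more sophisticated.

```python
import pandas as pd

# Anonymized mobility records: pseudonymous IDs with location stamps.
anon = pd.DataFrame({
    "pseudo_id": ["u1", "u1", "u2"],
    "time": ["2011-03-01 08:00", "2011-03-01 18:30", "2011-03-01 09:15"],
    "cell": ["A", "B", "C"],
})

# Identified records from another source (say, a transit card registered
# to a real name) covering some of the same times and places.
known = pd.DataFrame({
    "name": ["Alice", "Alice", "Bob"],
    "time": ["2011-03-01 08:00", "2011-03-01 18:30", "2011-03-01 09:15"],
    "cell": ["A", "B", "C"],
})

# Join on co-occurring (time, location) points and count matches per pair.
links = anon.merge(known, on=["time", "cell"])
counts = (links.groupby(["pseudo_id", "name"])
               .size()
               .reset_index(name="co_occurrences"))

# Pairs with many co-occurring records are likely the same person,
# re-identifying the "anonymous" pseudonym.
print(counts.sort_values("co_occurrences", ascending=False))
```

In this toy example, pseudonym u1 lines up with Alice on two distinct location stamps, which is exactly the kind of overlap that makes re-identification possible at scale.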

The researchers demonstrated the risk by estimating user "matchability" across two large datasets generated in Singapore in 2011: one containing 485 million records with timestamps and geographic coordinates from 2 million users of a mobile network operator, and one containing 70 million timestamped records of individuals moving through the local transportation system.

The researchers applied statistical modeling to the location stamps of users in both datasets to estimate the probability that data points in the two sets originated from the same individual. With one week of data, the model could expect to match about 17 percent of individuals; after four weeks, it could match more than 55 percent; and with data compiled over 11 weeks, the estimate rose to about 95 percent. The main determinant of matchability was the expected number of co-occurring records in the two datasets.
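Why the length of the observation window matters so much can be seen with a back-of-the-envelope calculation. The sketch below is a deliberate simplification of my own (independent days and a single assumed daily co-occurrence rate p), not the researchers' statistical model, but it shows the same qualitative effect: the longer two datasets overlap in time, the more co-occurring records accumulate, and the easier matching becomes.

```python
# Toy model (not the paper's): assume that on any given day a user's two
# traces produce a co-occurring record with probability p. The chance of
# seeing at least one co-occurrence -- the raw material for a match --
# grows quickly with the observation window.
def match_chance(days: int, p: float = 0.025) -> float:
    """P(at least one co-occurring record) under independent days."""
    return 1.0 - (1.0 - p) ** days

for weeks in (1, 4, 11):
    print(f"{weeks:2d} weeks: {match_chance(7 * weeks):.0%}")
```

With these made-up parameters, the chance climbs from roughly 16 percent at one week to over 85 percent at 11 weeks, echoing the rising matchability the researchers observed.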

"As researchers, we believe that working with large-scale datasets can allow discovering unprecedented insights about human society and mobility, allowing us to plan cities better," said Kondor, a postdoc in the Future Urban Mobility Group at the Singapore-MIT Alliance for Research and Technology (SMART). "Nevertheless, it is important to show if identification is possible, so people can be aware of potential risks of sharing mobility data."

Ratti, a professor of the practice in MIT's Department of Urban Studies and Planning and director of MIT's Senseable City Lab, offered an example: "I was at Sentosa Island in Singapore two days ago, came to the Dubai airport yesterday and am on Jumeirah Beach in Dubai today. It's highly unlikely another person's trajectory looks exactly the same. In short, if someone has my anonymized credit card information and perhaps my open location data from Twitter, they could then deanonymize my credit card data."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
