Research Finds that Anonymized Mobile Data Still Leads to Privacy Risks


When you allow an app to identify your current location through your mobile device, is the result used to optimize your experience, or is it putting your private data at risk? That's the question behind a study undertaken by researchers at MIT and Imperial College London, who recently published their findings in IEEE Transactions on Big Data.

According to MIT's Daniel Kondor, Behrooz Hashemian and Carlo Ratti and Imperial College's Yves-Alexandre de Montjoye, massive, anonymized datasets detailing people's movement patterns through their location stamps can be used for "nefarious purposes." Given a few randomly selected points in a mobility dataset, someone could identify the individuals behind the data and learn sensitive information about them. This is done by merging an anonymized dataset with a non-anonymized one to reveal the identities being hidden.

The researchers demonstrated the risk by estimating user "matchability" across two large datasets from Singapore generated in 2011: one containing 485 million records with timestamps and geographic coordinates from 2 million users of a mobile network operator, and one containing 70 million timestamped records of individuals moving around the city on a local transportation system.

The researchers applied statistical modeling to the location stamps of users in both datasets to estimate the probability that data points in the two sets originated from the same individual. With one week's worth of data, the model could expect to match about 17 percent of individuals; after four weeks, that figure rose to more than 55 percent, and with data compiled over 11 weeks it reached about 95 percent. The main determinant of matchability was the expected number of co-occurring records in the two datasets.
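The basic idea behind this kind of linkage can be illustrated with a small sketch. This is not the authors' model; the record layout, the co-occurrence rule (same location, timestamps within a tolerance), and the function names are all assumptions made for illustration:

```python
from collections import defaultdict

def count_cooccurrences(records_a, records_b, time_tol=3600):
    """Count co-occurring records between users of two datasets.

    Records are (user_id, timestamp, location_id) tuples; two records
    co-occur if they share a location and their timestamps differ by
    at most time_tol seconds. (Illustrative rule, not the paper's.)
    """
    # Index dataset B by location for fast lookup.
    by_location = defaultdict(list)
    for user_b, ts_b, loc_b in records_b:
        by_location[loc_b].append((user_b, ts_b))

    cooccur = defaultdict(int)  # (user_a, user_b) -> count
    for user_a, ts_a, loc_a in records_a:
        for user_b, ts_b in by_location.get(loc_a, []):
            if abs(ts_a - ts_b) <= time_tol:
                cooccur[(user_a, user_b)] += 1
    return cooccur

def best_matches(cooccur):
    """For each user in A, the user in B with the most co-occurrences."""
    best = {}
    for (ua, ub), n in cooccur.items():
        if n > best.get(ua, (None, 0))[1]:
            best[ua] = (ub, n)
    return best
```

With more weeks of data, each user accumulates more co-occurring records, and the highest-scoring candidate in the second dataset is increasingly likely to be the same person, mirroring the study's finding that matchability grows with observation time.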

"As researchers, we believe that working with large-scale datasets can allow discovering unprecedented insights about human society and mobility, allowing us to plan cities better," said Kondor, a postdoc in the Future Urban Mobility Group at the Singapore-MIT Alliance for Research and Technology (SMART). "Nevertheless, it is important to show if identification is possible, so people can be aware of potential risks of sharing mobility data."

Ratti, a professor of the practice in MIT's Department of Urban Studies and Planning and director of MIT's Senseable City Lab, offered an example: "I was at Sentosa Island in Singapore two days ago, came to the Dubai airport yesterday and am on Jumeirah Beach in Dubai today. It's highly unlikely another person's trajectory looks exactly the same. In short, if someone has my anonymized credit card information and perhaps my open location data from Twitter, they could then deanonymize my credit card data."

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
