Penn State Researchers Tackle Social Network Privacy Gaps

Researchers at Pennsylvania State University's College of Information Sciences and Technology (IST) and the University of Kansas have partnered in an effort to narrow the gap between perceived and actual privacy for users of social networks.

That gap arises when what users intend to share differs from the information that is actually made available to others.
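
In code terms, that gap can be pictured as a set difference between the audience a user intends and the audience a platform's settings actually expose. The sketch below is purely illustrative and is not drawn from the researchers' model; every audience name in it is hypothetical.

    # Illustrative only: the perception gap as a set difference between the
    # audience a user intends and the audience actually granted access.
    intended_audience = {"close_friends"}
    actual_audience = {"close_friends", "friends_of_friends", "advertisers"}

    # Anyone who can see the post but was never intended to is the "gap".
    perception_gap = actual_audience - intended_audience
    print(perception_gap)  # {'friends_of_friends', 'advertisers'}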

"People don't clearly understand the boundaries of personal information versus sharing boundaries," said Dongwon Lee, associate professor at IST and principal investigator for the project, in a prepared statement.

Dubbed "Privacy Protection in Social Networks: Bridging the Gap Between User Perception and Privacy Enforcement," the project seeks to develop methods to identify those discrepancies, "design a user-centered and computationally efficient formal model of user privacy in social networks" and develop a mechanism for enforcing privacy policies, according to information released by Penn State.

In addition to infiltrating social networks to steal personal information, "hackers can connect an identity-revealing clue from [a] medical site with a publicly known identity in social media accounts, enabling them to access information that was intended to be private," according to a news release about the project.
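
The attack described there is a form of record linkage: joining a nominally anonymous dataset to a public one on shared quasi-identifiers. The following sketch shows the basic mechanics using invented data; none of it comes from the project, and all fields, names, and values are hypothetical.

    # Illustrative linkage attack: matching "anonymized" medical records to
    # public social media profiles on quasi-identifiers (ZIP and birth year).
    medical_records = [
        {"zip": "16801", "birth_year": 1985, "diagnosis": "asthma"},
        {"zip": "66045", "birth_year": 1990, "diagnosis": "diabetes"},
    ]
    social_profiles = [
        {"name": "Alex Doe", "zip": "16801", "birth_year": 1985},
        {"name": "Sam Roe", "zip": "66044", "birth_year": 1992},
    ]

    def link_records(medical, profiles):
        """Pair records whose quasi-identifiers agree exactly."""
        return [
            {"name": p["name"], "diagnosis": m["diagnosis"]}
            for m in medical
            for p in profiles
            if m["zip"] == p["zip"] and m["birth_year"] == p["birth_year"]
        ]

    print(link_records(medical_records, social_profiles))
    # [{'name': 'Alex Doe', 'diagnosis': 'asthma'}]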

Additionally, even users who are concerned about privacy and aware of the possible consequences often fail to take protective measures because they don't believe the risk justifies the extra vigilance, according to Lee.

Previous efforts to address the problem have relied on either technological solutions or human-oriented fixes. Lee said his project will combine the two approaches.

"We feel that if we take advantage of both frameworks, we'll be able to come up with a better solution," Lee said, in a Penn State news release.

Once the project is complete, the researchers hope to implement their tools in a way that lets users more easily control their privacy, such as through an app that works with various social media accounts.

"Hopefully, we will develop better, very vigorous underpinnings of the privacy model and a slew of technological tools to enforce this newly developed model," added Lee.

The research is funded by the National Science Foundation through a $279,154 grant to IST and a $220,162 grant to the University of Kansas.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
