Penn State Researchers Tackle Social Network Privacy Gaps

Researchers at Pennsylvania State University's College of Information Science and Technology (IST) and the University of Kansas have partnered in an effort to reduce the gap between perceived and actual privacy for users of social networks.

That gap arises when what users intend to share differs from the information that is actually made available to others.

"People don't clearly understand the boundaries of personal information versus sharing boundaries," said Dongwon Lee, associate professor at IST and principal investigator for the project, in a prepared statement.

Dubbed "Privacy Protection in Social Networks: Bridging the Gap Between User Perception and Privacy Enforcement," the project seeks to develop methods to identify those discrepancies, "design a user-centered and computationally efficient formal model of user privacy in social networks" and develop a mechanism for enforcing privacy policies, according to information released by Penn State.

In addition to infiltrating social networks to steal personal information, "hackers can connect an identity-revealing clue from [a] medical site with a publicly known identity in social media accounts, enabling them to access information that was intended to be private," according to a news release about the project.

Additionally, even users who are concerned about privacy and aware of the possible consequences often fail to take protective measures because they don't believe the risk justifies the extra vigilance, according to Lee.

Previous efforts to address the problem have relied either on technological solutions or human-oriented fixes. Lee said his project will work to combine the two approaches.

"We feel that if we take advantage of both frameworks, we'll be able to come up with a better solution," Lee said, in a Penn State news release.

Once the project is complete, the researchers hope to implement their tools in a way that will allow users to more easily control their privacy, such as through an app that works across various social media accounts.

"Hopefully, we will develop better, very vigorous underpinnings of the privacy model and a slew of technological tools to enforce this newly developed model," added Lee.

The research is being funded through a $279,154 grant to IST and a $220,162 grant to the University of Kansas, both from the National Science Foundation.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
