UCSB Security Researchers To Help Too-Trusting Smartphone App Users

There's little assurance right now that the app you've just downloaded to your Android phone is safe. It could be the gateway through which a cybercriminal siphons off important pieces of data about you and your contacts to build profiles worth selling to other criminals. Little is known about the "trust relationships" that exist among users, the smartphone platform and the surrounding ecosystem, including smartphone apps and the app markets. But a research team at the University of California, Santa Barbara has received a $1.1 million grant from the National Science Foundation to research the topic.

"The victims of these types of malware and scams could be counted in the hundreds of millions," said Giovanni Vigna, a professor of computer science who will be the principal investigator on the project. "The thing we'll be seeing more and more are attempts to violate trust assumptions."

Vigna, who is also the director of the Center for CyberSecurity in the College of Engineering, will be working with Computer Science Professor Christopher Kruegel to develop a framework for understanding trust relationships in the smartphone ecosystem and identifying its weaknesses. Those include situations in which trust is misplaced as well as points where trust vulnerabilities exist.

For example, an app page may use icons to suggest the authenticity of the site or the security of the app file; or recognizable logos from trusted organizations may appear on the site or app without an actual connection to the trusted brand.

"People use their phones to click on the Facebook icon, for instance, and the Facebook application starts, and they inherently assume that it's Facebook running on their phone," Vigna said. He and his team have discovered that users will also click on an icon that feels familiar but leads to a faux application intended to do harm.

The relationships the researchers expect to examine include those between the malware writer and the app store that publishes his or her app; between the user and the app store the user trusts enough to download from; and between the developer and the ad framework the developer relies on to display ads through the app, which then begins including links to additional malware. "Where's the trust there? How do you control this trust? How can you be assured that the ad network is going to perform as stated?" said Vigna.

The researchers also hope to develop techniques to prevent, or to detect and mitigate, trust violations. Initially, the group will focus on Android apps in particular, but they expect the results to be general and applicable to other smartphone platforms as well.

"Android is a wonderful open platform that allows anybody to do anything--including hacking the cellphones of unsuspecting Android users," said Vigna. He added that Apple iOS is less vulnerable.

The team may also develop an app that users could run to analyze the behavior of other apps and report their flaws or potential untrustworthiness.
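To give a rough sense of what such an analysis app might look at, the Kotlin sketch below enumerates installed packages and flags any that request a combination of permissions often abused by data-harvesting malware. It assumes it runs inside an Android app with access to PackageManager; the particular permission set and the flag-on-combination heuristic are illustrative assumptions, not details of the UCSB project.

    import android.content.Context
    import android.content.pm.PackageManager

    // Illustrative set of permissions that, together, are often requested by
    // apps that harvest contact or message data and send it off the device.
    val riskyPermissions = setOf(
        "android.permission.READ_CONTACTS",
        "android.permission.READ_SMS",
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.INTERNET"
    )

    // Returns the package names of installed apps that request the full set
    // of risky permissions above.
    fun flagSuspiciousApps(context: Context): List<String> {
        val pm = context.packageManager
        return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
            .filter { pkg ->
                val requested = pkg.requestedPermissions?.toSet() ?: emptySet()
                requested.containsAll(riskyPermissions)
            }
            .map { it.packageName }
    }

A real analyzer would go much further, of course, looking at runtime behavior rather than declared permissions alone, but permission patterns are a common first-pass signal.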

Until the research is done, Vigna offers several recommendations:

  • Stick to the "better known app markets," and stay away from other third-party sites;
  • Before downloading an app, consider the number of downloads it has; millions is a more trustworthy count than hundreds or a few thousand;
  • If the app doesn't work when you've downloaded it, it could turn out to be a bit of malicious code sucking up user information. Uninstall apps that don't work;
  • Carefully check that you're getting what you want. "Angry Bords" isn't from Rovio, and the results from installing it may be far more harmful than egg-stealing pigs.
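For developers or power users who want to go beyond eyeballing the app page, one way to act on that last recommendation is to check an installed app's signing certificate against a digest the legitimate publisher has documented. The Kotlin sketch below shows the general Android technique; EXPECTED_SHA256 is a placeholder, not Rovio's actual certificate hash, and nothing here is specific to the UCSB research.

    import android.content.Context
    import android.content.pm.PackageManager
    import java.security.MessageDigest

    // Placeholder: replace with the SHA-256 certificate digest published by
    // the app's legitimate developer.
    const val EXPECTED_SHA256 = "replace-with-publisher-documented-digest"

    // Returns true if the named installed package is signed with a certificate
    // whose SHA-256 digest matches the expected value.
    fun isSignedByExpectedPublisher(context: Context, packageName: String): Boolean {
        val pm = context.packageManager
        val info = try {
            pm.getPackageInfo(packageName, PackageManager.GET_SIGNING_CERTIFICATES)
        } catch (e: PackageManager.NameNotFoundException) {
            return false  // app isn't installed
        }
        val signers = info.signingInfo?.apkContentsSigners ?: return false
        val digest = MessageDigest.getInstance("SHA-256")
        return signers.any { sig ->
            digest.digest(sig.toByteArray())
                .joinToString("") { "%02x".format(it) } == EXPECTED_SHA256
        }
    }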

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
