For Mobile Users, Positive Safety Messages More Effective Than Security Warnings

Ratings of the security risks associated with smartphone apps affect users' decisions about whether to install those apps, but framing that information positively, as safety, is more effective than framing it negatively, as risk, according to researchers from Purdue University.

The report, "Effective Risk Communication for Android Apps," was published in the May-June issue of IEEE Transactions on Dependable and Secure Computing. The researchers examined how including information about app permissions affects users' decisions to install apps. They tested the effectiveness of including summary risk information and compared various methods of conveying it to determine which approach was most effective.

Although most mobile systems have strong security measures in place, they often rely on users to make decisions that affect the security of the device, according to the authors. When users install apps, they may unwittingly give permission for malicious or intrusive apps to track their location and monitor their phone calls and text messages, including authentication messages used by secure sites. According to the researchers, users install these malicious apps without realizing the risks because they don't understand the permissions the app is requesting.
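The permission mechanism the authors describe works through an app's manifest. The fragment below is a hypothetical illustration (the package name and permission choices are invented for this example) of how an Android app declares the kinds of sensitive permissions the researchers warn about, such as access to location and text messages:

```xml
<!-- Illustrative AndroidManifest.xml fragment; package name is invented. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.flashlight">

    <!-- Each uses-permission line is a request the user implicitly grants
         at install time (pre-Android 6.0) or at runtime (6.0 and later). -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.READ_SMS" />

</manifest>
```

A flashlight app requesting location and SMS access is exactly the kind of mismatch the researchers argue most users fail to notice when permission names alone are presented.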

The researchers focused on the Android operating system, which includes more than 200 app permissions, many of which "do not make sense to the average user or at best require time and considerable mental effort to comprehend," according to information on the National Science Foundation site, which funded the project. While users pay some attention to permissions, they also consider average ratings, number of downloads and user comments. Higher quality apps tend to get higher ratings, and users tend to submit comments about the security and privacy of an app.

Current app permissions are designed for app developers rather than for users, Ninghui Li, one of the researchers, told NSF. Based on the results of their experiments, the researchers believe it would be more effective to display a risk score for each app, because a score would make the risk more obvious to users and give developers an incentive to reduce their use of personal information. They also believe that including risk scores could increase users' curiosity about security information and lead them to pay more attention to warnings.
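To make the idea of a per-app score concrete, here is a minimal sketch of one naive way such a score could be computed. This is an invented illustration, not the researchers' actual model: the set of "dangerous" permissions and the 10-point scaling are assumptions made for this example. It also inverts the risk score into a safety score, reflecting the study's finding that positive framing is more effective:

```python
# Hypothetical per-app scoring sketch; NOT the researchers' actual model.

# Assumed, illustrative set of sensitive Android permissions.
DANGEROUS = {
    "READ_SMS",
    "ACCESS_FINE_LOCATION",
    "READ_CALL_LOG",
    "RECORD_AUDIO",
}

def risk_score(requested_permissions):
    """Return a 0-10 risk score: the share of requested permissions
    that fall in the DANGEROUS set, scaled to a 10-point scale."""
    if not requested_permissions:
        return 0.0
    flagged = sum(1 for p in requested_permissions if p in DANGEROUS)
    return round(10 * flagged / len(requested_permissions), 1)

def safety_score(requested_permissions):
    """Positive framing of the same information: invert the risk
    score so a safer app gets a higher number."""
    return round(10 - risk_score(requested_permissions), 1)
```

For example, a flashlight app requesting `INTERNET` and `ACCESS_FINE_LOCATION` would score 5.0 on risk and 5.0 on safety under this sketch, while one requesting only `INTERNET` would score 0.0 and 10.0. A real scoring model would weight permissions unevenly and account for what the app plausibly needs.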

However, the researchers also found that people tend to pay more attention to safety information than to risk information. The reason may be that users base their decision to install an app on other positive information about it, such as user ratings, number of downloads, and user comments; a positive safety rating is therefore more compatible with that decision-making process than a negative risk rating.


About the Author

Leila Meyer is a technology writer based in British Columbia. She can be reached at [email protected].
