Research: Slight Changes to the Appearance of Privacy Warnings Significantly Improve Attention


New research from a team at Brigham Young University finds that people tend to tune out security warnings as they see them more often. Conducted by information systems professors Anthony Vance, Bonnie Anderson and Jeff Jenkins, the study was funded by the National Science Foundation and follows on previous work the researchers have conducted with Brock Kirwan, a neuroscience professor at BYU.

"The problem — and something everyone has experienced — is that warnings just fade away and disappear over time in our consciousness because we're exposed to them so often," said Vance, lead author on the study, in a prepared statement.

Previously, the team had looked at snapshots of user attention and neural response. This time, the researchers combined a five-day lab experiment that tracked neural and visual responses to security warnings with a three-week field experiment that observed users interacting with their devices naturally and tracked their responses to privacy permissions warnings.

The field study required participants to install and evaluate three apps from the Google Play store each day for 15 days. Warnings popped up for each app listing any permissions the app requested related to accessing or modifying data, with some, such as "Sell your web-browsing data" or "Record microphone audio any time," representing significant risk. Some participants received warnings that looked the same every time, while others received warnings that changed in appearance each time.

Users who received the same warnings each time adhered to the warnings 55 percent of the time at the end of the study, while those who received the shifting warnings adhered to them 76 percent of the time.

"Even using a few variations can have a substantial effect over time," said Anderson, chair of the BYU Department of Information Systems, in a prepared statement. "The trick is to get the variations to the point where people pay attention without being annoyed."

The lab component of the study backs up those findings: it showed reduced neural activity and eye movement with repeated static-appearance warnings, and a significant increase in sustained attention for the polymorphic warnings.

"System designers need to understand this is how the brain works, and they need to be as judicious as possible with the number of warnings they present," Vance said. "Secondly, if they can add some visual novelty to the warning, that really helps the brain recapture attention."

The study, "Tuning out Security Warnings: A Longitudinal Examination of Habituation through fMRI, Eye Tracking and Field Experiments," is published in the June issue of MIS Quarterly.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
