Research: Slight Changes to Appearance of Privacy Warnings Significantly Improve Attention

New research from a team at Brigham Young University finds that people tend to tune out security warnings as they see them more often. Conducted by information systems professors Anthony Vance, Bonnie Anderson and Jeff Jenkins, the study was funded by the National Science Foundation and follows on previous work the researchers have conducted with Brock Kirwan, a neuroscience professor at BYU.

"The problem — and something everyone has experienced — is that warnings just fade away and disappear over time in our consciousness because we're exposed to them so often," said Vance, lead author on the study, in a prepared statement.

Previously, the team had looked at snapshots of user attention and neural response. This time, the researchers combined a five-day lab experiment that tracked neural and visual responses to security warnings with a three-week field experiment that observed users interacting with their devices naturally and tracked their responses to privacy permissions warnings.

The field study required participants to install and evaluate three apps from the Google Play store each day for 15 days. Warnings popped up for each app listing any permissions the app requested related to accessing or modifying data, with some, such as "Sell your web-browsing data" or "Record microphone audio any time," representing significant risk. Some participants received warnings that looked the same every time, while others received warnings that changed in appearance each time.

Users who received the same warnings each time adhered to the warnings 55 percent of the time at the end of the study, while those who received the shifting warnings adhered to them 76 percent of the time.

"Even using a few variations can have a substantial effect over time," said Anderson, chair of the BYU Department of Information Systems, in a prepared statement. "The trick is to get the variations to the point where people pay attention without being annoyed."

The lab component of the study seems to back up those findings, as it showed reduced neural activity and eye movement with repeated static-appearance warnings and a significant increase in sustained attention for the polymorphic warnings.

"System designers need to understand this is how the brain works, and they need to be as judicious as possible with the number of warnings they present," Vance said. "Secondly, if they can add some visual novelty to the warning, that really helps the brain recapture attention."
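As an illustration of the "visual novelty" idea (this sketch is not from the study itself), a warning system might rotate through a small set of presentation variants so that repeated warnings never look identical. A minimal Python sketch, with hypothetical style names and a made-up `PolymorphicWarning` helper:

```python
import itertools

# Hypothetical presentation variants. The study's finding is that even a few
# variations help sustain attention across repeated exposures.
STYLE_VARIANTS = [
    {"border": "solid", "icon": "triangle", "accent": "#d9534f"},
    {"border": "dashed", "icon": "octagon", "accent": "#f0ad4e"},
    {"border": "double", "icon": "circle", "accent": "#c9302c"},
]


class PolymorphicWarning:
    """Cycles through visual variants so consecutive warnings differ in look."""

    def __init__(self, variants):
        self._cycle = itertools.cycle(variants)

    def render(self, message):
        # Pick the next variant in round-robin order; in a real UI this
        # would drive the widget's border, icon, and color rather than text.
        style = next(self._cycle)
        return f"[{style['icon']}|{style['border']}|{style['accent']}] {message}"


warner = PolymorphicWarning(STYLE_VARIANTS)
first = warner.render("This app requests microphone access.")
second = warner.render("This app requests microphone access.")
# The same message renders with a different appearance on each exposure.
```

A real implementation would also follow Vance's first point, rate-limiting how often warnings appear at all, since novelty only delays habituation rather than eliminating it.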

The study, "Tuning out Security Warnings: A Longitudinal Examination of Habituation through fMRI, Eye Tracking and Field Experiments," is published in the June issue of MIS Quarterly.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
