Multimodal Biometrics Strengthen Mobile Security

As smartphones, tablets, wearables and other mobile devices proliferate, mobile security remains a central concern, leaving consumers to wonder: What is a reliable way to protect the personal information stored on these devices?

Passwords and PINs provide weak security for smartphones. Certain biometrics (such as face, fingerprint and voice recognition) can boost mobile security, but those authentication schemes can also be broken. When users combine multiple modalities, however, security improves significantly, according to researchers at California State University, Fullerton (CSUF).
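The article does not describe how the CSUF system combines modalities, but one common approach is score-level fusion: each biometric produces a match score, and a weighted combination of those scores is compared against a single threshold. The sketch below is purely illustrative, with hypothetical weights and threshold values.

```python
# Hypothetical sketch of score-level fusion, a common way to combine
# biometric modalities. It does not represent the CSUF researchers'
# actual algorithm; weights and threshold are illustrative only.

def fuse_scores(scores: dict, weights: dict) -> float:
    """Combine per-modality match scores (each in [0, 1]) into one score."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

def authenticate(scores: dict, threshold: float = 0.8) -> bool:
    """Accept the user if the weighted fused score clears the threshold."""
    # Equal weights for face, fingerprint and ear in this illustration.
    weights = {"face": 1.0, "fingerprint": 1.0, "ear": 1.0}
    return fuse_scores(scores, weights) >= threshold

# No single modality here is decisive on its own, but agreement across
# all three pushes the fused score past the threshold.
print(authenticate({"face": 0.75, "fingerprint": 0.90, "ear": 0.85}))
```

The intuition is that an attacker who spoofs one modality (say, a photograph to fool face recognition) still fails unless the other modalities also produce high match scores, which is why combining modalities raises the bar for an attack.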

In a recent interview with the Orange County Register, CSUF students Yu Li, Jacob Biloki, Karthik Karunanithi and Daniel Kim explained their research in mobile security, identifying multimodal biometrics as the “best-suited solution for any mobile device where high accuracy and security is required.”

The researchers took a new approach, examining ear modalities in addition to face and fingerprint, and developed a system that is both user-friendly and fast. They presented their findings and won third place at the Institute of Electrical and Electronics Engineers Conference on Technologies for Sustainability, an international conference held in early October in Phoenix, AZ.

Finally, the researchers predicted that the mobile biometric market will continue to grow as more sensors capable of scanning biometrics (e.g., vein patterns, retinas and 3D face images) are added to mobile devices and become standard.

About the Author

Sri Ravipati is Web producer for THE Journal and Campus Technology. She can be reached at [email protected].
