Indiana/Purdue Prof Dedicated to Making Helper Droids

A professor at Indiana University-Purdue University Indianapolis (IUPUI) is working on creating life-like androids to study human behavior and social interaction, with an eye toward using them as social workers and companions for the elderly.

Karl MacDorman is dedicated to making an android realistic enough to cross the "uncanny valley," he said, the point at which near-human androids no longer unsettle people who interact with them up close.

MacDorman said Asians, particularly the Japanese, seem more open than almost any other group to android applications such as receptionists, museum guides, patient care, and companionship for the elderly. Japanese nursing homes are already using realistic-looking robotic pets, which respond to the sound of a person's voice, as companions for residents.

MacDorman held several positions at Osaka University when android science was just starting to move forward. IUPUI is the only university in the United States to offer instruction in android science.

MacDorman told the Indianapolis Star he envisions androids accomplishing much more than robot-like menial labor. Instead, he is working to make them more humanly sensitive, able to recognize a joke, for example. "I really don't see androids doing things like mowing the lawn, washing dishes, or fighting fires and defusing bombs," he told the Star. "When Americans think about robots, they typically think about tasks they can do."

MacDorman is an associate professor of human-computer interaction in IU's School of Informatics, and also an adjunct professor with Purdue University's School of Engineering and Technology.

About the Author

Paul McCloskey is contributing editor of Syllabus.
