Pew Research: Americans Feel More Concerned Than Excited About AI

According to 2023 research from the Pew Research Center, 52% of Americans feel "more concerned than excited" about the use of AI in daily life, up from 37% in 2021, an increase of 15 percentage points.

The research team surveyed 11,201 U.S. adults from July 31 to Aug. 6, 2023. Participants were recruited at random through the American Trends Panel, Pew's national online survey panel.

In 2021, 45% of participants were equally excited and concerned about the use of AI, 18% were more excited than concerned, and 37% were more concerned than excited. Those numbers did not change significantly in 2022 (46% equally concerned/excited, 15% more excited, and 38% more concerned), but this year the balance shifted sharply: 36% equally concerned/excited, 10% more excited, and 52% more concerned.

Pew said concern about AI outweighs excitement across the major demographic groups polled: gender, race, ethnicity, partisan affiliation, education, and others. Perhaps predictably, 61% of older adults (65+) are more concerned than excited. The gap is smaller among 18- to 29-year-olds, but still significant: 42% more concerned, and 17% more excited.

The research also shows that growing public awareness about AI is keeping pace with rising concerns.

"Those who have heard a lot about AI are 16 points more likely now than they were in December 2022 to express greater concern than excitement about it," the report noted. "Among this most aware group, concern now outweighs excitement by 47% to 15%. In December, this margin was 31% to 23%." Levels of concern seem to be about equal whether people have heard a lot about AI (16%) or not much (19%).

Americans' concerns center on maintaining control of AI, doubts about whether it will improve their lives, and its use in certain fields, such as medicine, the report said. Data privacy and safety are also a major concern, with 53% of survey respondents saying they do not feel their information is being kept safe and private.

Visit the report page to read more about the results and follow links to the methodology.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
