University of Phoenix Survey Explores Adults' Perceptions of AI for Use in Work and Education

The University of Phoenix commissioned a survey in July 2023 on how adults understand and feel about the use of generative AI. The survey of 2,045 adults ages 18 and older, conducted online by Harris Poll, found that while most feel AI should be used in the workplace and classroom (59% and 57%, respectively), a significant number don't understand the different forms of AI and aren't completely comfortable with its accuracy.

Results showed that a majority of adults don't understand the different forms of AI models. Only about a quarter (25%) are very or somewhat familiar with generative AI, while 49% said they have never heard of it. Other types of AI models they have never heard of include Naïve Bayes (86%), K-nearest neighbors (84%), logistic regression (75%), and decision trees (74%). Demographic breakdowns show that familiarity decreases with age: the older the respondents, the less familiar they are with these models.

Respondents do want to learn more, however: 63% said they would like to read, hear, or see more information about AI. As for where they get that information, 23% said they use social media, 18% technology websites, 14% television, and 12% news websites. Half of all respondents said they were getting the right amount of information about AI.

Some other significant findings include:

  • On average, adults believe 36% of jobs could be replaced by AI tools, and 19% believe more than 50% of jobs could be replaced;
  • Forty-seven percent said they think AI could help them in their jobs, while 38% said they won't be impacted, and only 15% see AI replacing them;
  • Using AI to help complete school assignments or work projects would be acceptable to 43%;
  • Among students and workers seeking information to learn more about a subject, 45% said they would talk to an AI chatbot;
  • Nearly three-quarters (74%) believe AI-generated information is accurate, but only 14% believe it is "very accurate."

Would it be accurate enough to use in their school or work projects? About a third (35%) said yes, about a third (30%) said no, and about a third (35%) were not sure.

"At University of Phoenix, we see AI like any other new tool that has entered the arena of advancing knowledge acquisition with the ability to enhance a student's access to data and information to gain comprehension and competency more quickly. AI tools have the potential to do great good in the realm of distance education and online learning to create engaging learning experiences," said Marc Booker, vice provost of strategy at the university. "We activated this survey so that we could better understand perceptions of AI and consider how to develop a deeper understanding and provide guidance on responsible use in higher education and in the workplace."

Visit this page to download and read the full survey report.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
