Who Do We Trust to Develop and Manage AI?


An intriguing survey of American attitudes toward artificial intelligence found that more people in this country support the development of AI (41 percent) than oppose it (22 percent). But there's no consensus on who should handle its governance: Americans place the greatest amount of trust in university researchers to build AI (50 percent), followed by the U.S. military (49 percent).

The research was led by Baobao Zhang and Allan Dafoe, both with the Center for the Governance of AI at the Future of Humanity Institute, which is part of the University of Oxford. The survey project interviewed 2,387 respondents, who were then matched down to a sample of 2,000 based on gender, age, race, education and other factors, such as religion and political affiliation, to reflect the makeup of the country.

According to "Artificial Intelligence: American Attitudes and Trends," university researchers and the U.S. military are the most trusted groups to develop AI; about half of Americans expressed a "great deal" or "fair amount" of confidence in them. Americans showed slightly less confidence in tech companies, nonprofit organizations and American intelligence organizations. Still, opinions toward individual entities within each of these groups varied. For example, while 44 percent of Americans indicated a "great deal" or "fair amount" of confidence in the most well-known tech companies to develop AI, they rated Facebook as the least trustworthy of all: More than four in 10 indicated they have no confidence in the company.

Support for developing AI also varied greatly among demographic subgroups. It was highest among those who are younger, male, white, more educated, employed and higher-income, as well as among Democrats and those with education or experience in technology.

In spite of those distinctions, the "overwhelming majority" of Americans (more than eight in 10) agreed that AI and/or robots should be carefully managed, while only 6 percent disagreed. But who should manage them? Once again, university researchers and the U.S. military came out on top: 50 percent and 49 percent of respondents, respectively, expressed a "great deal" or "fair amount" of confidence in those two groups to handle the job, compared to just 30 percent showing comparable confidence in the U.S. civilian government. Among international entities, intergovernmental research organizations such as CERN beat out NATO, 41 percent to 29 percent. Among tech companies, Microsoft scored highest at 44 percent, beating every other contender, including Amazon (41 percent), Google (39 percent) and Apple (36 percent). One notable company again came out on the bottom: Just 18 percent of respondents said they were greatly or fairly confident that Facebook could manage AI.

These findings echoed surveys undertaken by other organizations. For example, the researchers noted, an "overwhelming majority" of Americans told Pew Research Center surveyors they had "a great deal" or "a fair amount" of confidence in the U.S. military and scientists to act in the best interest of the public. For elected officials, not so much; 73 percent of respondents indicated that they had "not too much" or "no confidence" in them.

The research project also tried to understand which AI governance challenges Americans think will most likely affect "large numbers of people" over the next decade and therefore are the ones that tech companies and government will need to tackle. The issues receiving the most attention were surveillance, digital manipulation, data privacy and cyber attacks. Data privacy also was at the top of the stack as being the most important area of risk.

In an interesting side note, the researchers discovered that Americans believed that all of the governance challenges other than data privacy and ensuring the safety of autonomous vehicles were more likely to impact people around the world than to affect people in the United States.

The survey also found that respondents underestimated how widespread AI, machine learning and robotics already are in everyday technology. A majority correctly identified virtual assistants (63 percent), social robots (64 percent), driverless cars (56 percent), smart speakers (55 percent) and autonomous drones (54 percent) as applications that use AI. But a majority of respondents also assumed that Facebook photo tagging, Google Search, Netflix and Amazon recommendations, and Google Translate don't use AI.

The results of the survey are openly available on the Center for the Governance of AI website in PDF format and as an HTML edition.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
