Researchers Land Nearly $600,000 to Study Ethics of Self-Driving Cars

Researchers at the University of Massachusetts Lowell, Manhattan College, and California Polytechnic State University have won $556,000 from the National Science Foundation to study the ethics of self-driving cars.

"You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counterintuitive happens: When there's a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road," said Nicholas Evans, assistant professor of philosophy at UMass Lowell and principle investigator on the grant, according to a news report from UMass Lowell. "People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose and be programmed to do so."

Dubbed "Ethical Algorithms in Autonomous Vehicles," the project has two main goals:

  • The development of ethical algorithms for use in self-driving cars, based on the existing literature on decision-making in autonomous vehicles; and
  • The development of a model of the health outcomes projected to result from implementing those algorithms.

"Both aims support training of scholars and practitioners sensitive to the ethical implications of autonomous vehicles," according to the project's abstract. "These pedagogical aspects have been designed to promote diverse interactions between STEM students and practitioners, and they will serve to improve STEM education and educator development."

Other issues the team plans to explore include the role of insurance companies in the design of algorithms for autonomous vehicles; what proportion of vehicles on the road need to be autonomous to reduce accidents; and potential cybersecurity issues.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
