Researchers Land Nearly $600,000 to Study Ethics of Self-Driving Cars

Researchers at the University of Massachusetts Lowell, Manhattan College and California Polytechnic State University have won $556,000 from the National Science Foundation to study the ethics of self-driving cars.

"You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counterintuitive happens: When there's a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road," said Nicholas Evans, assistant professor of philosophy at UMass Lowell and principle investigator on the grant, according to a news report from UMass Lowell. "People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose and be programmed to do so."

Dubbed "Ethical Algorithms in Autonomous Vehicles," the project has two main goals:

  • Development of ethical algorithms for use in self-driving cars, based on the existing literature on decision-making in autonomous vehicles; and
  • Development of a model of the projected health outcomes of implementing those algorithms.

"Both aims support training of scholars and practitioners sensitive to the ethical implications of autonomous vehicles," according to the project's abstract. "These pedagogical aspects have been designed to promote diverse interactions between STEM students and practitioners, and they will serve to improve STEM education and educator development."

Other issues the team plans to explore include the role of insurance companies in the design of algorithms for autonomous vehicles; what proportion of vehicles on the road need to be autonomous to reduce accidents; and potential cybersecurity issues.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
