News

April 28, 2022

Center for Advancing Safety of Machine Intelligence (CASMI) Panelists Point to Human Complexity to Sort Out AI Ethics Issues


The “low-hanging fruit” of beneficial artificial intelligence systems should be rapidly developed but closely monitored and tested once in use, experts said during a recent panel discussion sponsored by the Center for Advancing Safety of Machine Intelligence (CASMI).

“Find solutions that you can build that will help people as quickly as possible and work from there,” urged Brent Hecht, an associate professor of computer science at Northwestern University and director of applied science in Microsoft’s Experiences and Devices division. “There’s nothing wrong with a highly iterative low-hanging fruit-based strategy at this stage of the game.”

Hecht spoke during a virtual panel titled “Ethics, Safety, and AI: Can we have it all?” that also featured Yejin Choi, a professor in the University of Washington’s School of Computer Science and Engineering and senior research manager at the Allen Institute for Artificial Intelligence; Cara LaPointe, co-director of the Johns Hopkins Institute for Assured Autonomy; and panel moderator David Danks, a professor of data science and philosophy at the University of California, San Diego. The panel was part of an April 8 ribbon-cutting for CASMI, an artificial intelligence (AI) research hub formed in a partnership between the Digital Safety Research Institute at UL Research Institutes and Northwestern University.

The panel, which drew about 145 attendees from around the world, focused on an array of ethical issues raised by the explosive growth of AI systems and their potential impact on human health and safety. Each panelist highlighted the importance of interdisciplinary research and a pluralistic approach to AI development to reflect the complexity of humanity and its intersection with technology.

LaPointe also advocated pursuing rapid solutions as a means of ensuring AI development responds to that complexity.

“You go after the low-hanging fruit because it’s not just about learning about the problem, it’s also about training up people and building that expertise so that you have kind of a broad group of folks who can engage,” LaPointe said.

Real-life outcomes also must be measured and considered as development occurs, Choi said. Making AI both ethics- and safety-aware would require expertise in philosophy, law, and other disciplines, she said, and she cautioned that laboratory tests do not reflect real-life scenarios, which are more adversarial and diverse than testing assumes.

“I think we somehow need to reduce the gap between the laboratory setting and what happens when we deploy something,” she said. “It might be that there needs to be more public-facing demo systems that can be stress-tested by the public at large before formal deployment, because I don’t know how else we can really test some of these problems.”