News

August 18, 2023

DSRI Sponsors Workshop on Measuring Safety in AI

As artificial intelligence (AI) rapidly evolves and touches more aspects of our everyday lives, we are only in the early stages of understanding its ethical implications and determining how to safely assess and use it. With that in mind, leaders from the Digital Safety Research Institute (DSRI) recently met with experts from academia, industry, and government for a workshop on how to meaningfully operationalize safe, functional AI systems.

Hosted by the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University, the “Sociotechnical Approaches to Measurement and Validation for Safety in AI” workshop included presentations, group discussions, and breakout sessions. Participants emphasized that building an AI safety culture should include input from everyone, including those impacted by AI systems.

“To help provide safety at the human-digital interface, we desperately need automated measures of digital safety to include cybersecurity, privacy, biases, and human manipulation," said Dr. Jill Crisman, vice president and executive director of DSRI. "The workshop brought together experts in AI research, AI deployment, and AI policy to discuss ways forward to measure the safety of human-AI systems.”

DSRI, part of UL Research Institutes, partners with CASMI to develop best practices for the evaluation, design, and development of machine intelligence that is safe, equitable, and beneficial.

Visit CASMI’s website to learn more and read an in-depth recap of the workshop:
Read the Article