Keeping Pace with Rapid Advances in Generative Artificial Intelligence

The threat of manipulated media — audio, images, and video — continues to grow as automated manipulation technologies become more accessible and social media provides a ripe environment for viral content sharing.
Recognizing this challenge, the Digital Safety Research Institute (DSRI), a unit of UL Research Institutes, has entered into a cooperative research and development agreement with the Defense Advanced Research Projects Agency (DARPA) to further advance deepfake detection and forensics.
“It is important for the public to know the provenance of content presented in our digital information ecosystem – much in the same way physical libraries cite the provenance of their materials,” said Dr. Jill Crisman, executive director of DSRI of UL Research Institutes. “DSRI aims to enable digital provenance testing tools to keep pace with the rapid advances of generative AI.”
Detecting deepfakes requires collaboration
DARPA researches emerging technologies for defense applications. Its collaboration with DSRI builds on DARPA’s Semantic Forensics (SemaFor) program, launched in 2020 to develop cutting-edge forensic tools for identifying and analyzing AI-generated media.
With SemaFor’s conclusion in 2024, DARPA is transitioning its research efforts to DSRI. DSRI will play a pivotal role in sustaining a media forensics research ecosystem. A key component of that ecosystem is sponsoring forensics research challenges, which encourage global participation and the reporting of results at academic conferences. By supporting ongoing forensic research, DSRI aims to enhance the tools available for detecting, attributing, and characterizing AI-generated content across multiple media formats — including images, video, and audio.
Fact versus fabrication: Society must learn the difference
By fostering collaboration among government, industry, and academia, this initiative aims to ensure that AI forensic tools evolve as quickly as generative AI itself. As AI-generated content becomes more sophisticated, so too must society’s ability to distinguish fact from fabrication.
“Innovation does not occur in a vacuum, so it’s important for us to communicate about the work we’re doing to engage with industry, academia, and potential transition partners to develop the technology for practical applications,” said Wil Corvey, DARPA’s SemaFor program manager. “DSRI’s mission of product testing and evaluation, specifically with respect to the complex and evolving sociotechnical environment in which products will be deployed, makes them an ideal fit for this area of transition.”