NIST seeks collaborators for consortium supporting AI safety

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is calling for participants in a new consortium supporting development of innovative methods for evaluating artificial intelligence (AI) systems to improve the rapidly growing technology’s safety and trustworthiness. This consortium is a core element of the new NIST-led U.S. AI Safety Institute announced yesterday at the U.K.’s AI Safety Summit 2023, in which U.S. Secretary of Commerce Gina Raimondo participated.  

The institute and its consortium are part of NIST’s response to the recently released Executive Order on Safe, Secure, and Trustworthy Development and Use of AI. The EO tasks NIST with a number of responsibilities, including development of a companion resource to the AI Risk Management Framework (AI RMF) focused on generative AI, guidance on authenticating content created by humans and watermarking AI-generated content, a new initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, and creation of test environments for AI systems. NIST will rely heavily on engagement with industry and relevant stakeholders in carrying out these assignments. The new institute and consortium are central to those efforts.

“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Together we can develop ways to test and evaluate AI systems so that we can benefit from AI’s potential while also protecting safety and privacy.”

The U.S. AI Safety Institute will harness work already underway by NIST and others to build the foundation for trustworthy AI systems, supporting use of the AI RMF, which NIST released in January 2023. The framework offers a voluntary resource to help organizations manage the risks of their AI systems and make them more trustworthy and responsible. The institute aims to measurably improve organizations’ ability to evaluate and validate AI systems, as detailed in the AI RMF Roadmap.