AI Safety Institute 'Cultivating Environment of Safety'

The AI Safety Institute (AISI) plans to test frontier AI models before deployment, Director Elizabeth Kelly said in an interview at the Center for Strategic and International Studies (CSIS) Wednesday (see 2402070069). “We’re in a good position to begin that testing in the months ahead because of the commitments we’ve gotten from the leading companies," Kelly told the CSIS Wadhwani Center for AI and Advanced Technologies. In developing safety standards for AI, the institute will rely on companies showing “what’s under the hood” in their next-generation work, she said. However, because it's not a regulatory body, the institute can only encourage companies to make such information available. Apple, Amazon, Google, Meta, Microsoft, OpenAI, Adobe, IBM, Nvidia and several other companies have agreed to voluntary testing (see 2407260027). AI safety regulation falls under the Commerce Department’s Bureau of Industry and Security, and reporting rules “have not been finalized," so questions remain, Kelly said. The Commerce Department’s website said BIS “will invoke the Defense Production Act to institute measures to enhance safety as next-generation frontier AI models are developed, including measures requiring developers to report the steps they are taking to test their models and protect them from theft." Kelly also spoke about the importance of international collaboration on developing safety standards for frontier AI through the International Network of AI Safety Institutes. International AI safety groups and other stakeholders plan to meet in the San Francisco area in November, she said.