AI Reporting Rule Should Be Tied to Performance, Not Computing Power, Think Tank Says
The Bureau of Industry and Security’s proposed reporting rule for AI developers should shift from a compute-based threshold to a performance-based threshold to better measure risk, the Information Technology and Innovation Foundation’s Center for Data Innovation told BIS this month.
“The proposed rule focuses on computing power as a trigger for reporting, but that is a poor measure of AI model risk,” the center wrote in a letter. “Some high-compute models could pose minimal threats, while lower-compute models with superior performance could present far greater risks. A shift to performance-based thresholds would provide a more accurate assessment of capability and risks, better identifying the most advanced AI models. These thresholds should be dynamic, evolving as AI capabilities and risks change, rather than relying on static compute-based measures.”
The center also said compliance with the rule could be difficult for open-source projects because it’s often unclear who owns or possesses the model. It recommended adjusting reporting requirements for open-source models to “avoid disadvantaging and stifling open-source AI.”
The rule, which BIS unveiled last month, would require developers of advanced AI models and computing clusters to submit information about their activities to the agency (see 2409090012). If implemented properly, the rule could help the U.S. government understand the capabilities and vulnerabilities of advanced AI systems, the center said.
Some experts believe the rule could lead to AI export controls (see 2410250035).