‘Little Bit of Humility’

Google, CTA, US Chamber: Evidence Lacking on AI Risks

There’s a lot of speculation and little evidence about the risks associated with generative AI, so lawmakers and enforcers should show “humility” when regulating the technology, executives from Google, CTA and the U.S. Chamber of Commerce said Monday.

They appeared during an FCBA event alongside officials from NTIA and DOJ, who said their agencies are monitoring states' passage of AI-related bills. The federal government has yet to make a comprehensive proposal on AI regulation.

There’s a lot of speculation when it comes to the risks associated with larger AI language models, said Alice Friend, Google global head-AI and emerging tech policy. “We are thinking through all these possibilities before they’ve been realized,” she said. “That is healthy,” but there needs to be “some healthy humility about what kinds of thresholds and rule sets are possible to invent right now because we can’t base them on empirical evidence.”

Everything Friend said is “absolutely right,” said Michael Richards, director of the Chamber's Technology Engagement Center. Industry continues working on voluntary best practices because there’s “no definitive science” surrounding the technology, he said. It’s important to have a “little bit of humility as we’re working through this.”

States have proposed more than 600 AI-related bills in the first six months of the year, according to FCBA. Ben Winters, an attorney adviser in DOJ’s Civil Rights Division, said his office has spoken with civil rights and human rights offices at the state and local levels. DOJ wants to communicate what’s happening at the federal level and how enforcers can approach the issue at the state and local levels, he said. He likened the states' AI proposals to a “fire hose”: They help agencies see a “range of ideas” and determine what’s politically feasible and possible to implement. NTIA’s forthcoming report on AI regulation will be helpful on that front, he said.

NTIA is working through several AI-related inquiries, including a study of the benefits and risks of open and closed AI models (see 2402210041, 2403270067 and 2404010067). Travis Hall, NTIA’s acting associate administrator-Office of Policy Analysis and Development, said the agency is “absolutely tracking” the states with “keen interest,” but NTIA is generally focused on the federal level, where it has more influence.

Hall noted July 26 is a major deadline because it marks 270 days from when President Joe Biden signed his executive order on AI technology (see 2310300056). Federal agencies, including DOJ and NTIA, must meet key objectives by that date, he said.

CTA wants NTIA to be careful about applying broad standards to all AI language models, said Doug Johnson, vice president-emerging technology. CTA tried to get this point across in comments to NTIA on AI models, he said: “There are gradients of openness in these foundation models. Many open foundation models don’t present the kinds of risks that were identified in the executive order. Nor do all the open foundation models offer dual use capabilities.” CTA members have concerns about all open foundation models being treated the same, he said.

This “fractured” state approach to regulating AI technology is causing small companies to spend more and more resources on compliance, said Johnson: That’s “money and time that they aren’t spending innovating, so there’s a penalty on the innovation side.” The U.S. needs to take a “lighter touch regulatory approach in contrast” to the EU AI Act (see 2405290007), Johnson said.

Friend agreed, noting the risk of creating a “very fragmented regulatory ecosystem” in the U.S., which is hardest on the smallest companies. It’s hard on Google, but it’s “existential” for startups, she said. States haven’t seen the federal government propose anything comprehensive, so they feel a need to fill the gap, she said.