CTA VP: AI Is Regulatory, Competitive Opportunity for US
The U.S. has an opportunity to set AI rules that give American companies a competitive advantage over EU counterparts facing an overly prescriptive regulatory scheme, CTA Vice President-Emerging Technology Policy Doug Johnson said in a recent interview.
The European Parliament approved a draft AI Act in June (see 2306140039), drawing backlash from the tech industry, as did the EU’s General Data Protection Regulation when it passed (see 2306270054). The AI Act is potentially emerging as a global standard like the GDPR, but it’s not necessarily responsible regulation for protecting innovation, said Johnson. If the EU isn’t “getting it quite right,” then the U.S. “has an opportunity to set rules that put us in a better position competitively and protect innovation,” he said.
Microsoft President Brad Smith is among several tech executives asking Congress to regulate AI. He spoke recently in Brussels, where he said the AI Act should be considered for creating a joint, global approach to AI regulation. There’s an opportunity for the EU, the U.S. and other G7 members, plus India and Indonesia, to move forward with shared values and principles, he said. Indian Prime Minister Narendra Modi met with President Joe Biden, senior officials and tech executives at the White House the day before Smith’s speech (see 2306230060). OpenAI CEO Sam Altman, whose company developed ChatGPT, a major catalyst for AI regulatory discussions, attended the White House meeting. Smith said there has been a “rollercoaster” discussion since ChatGPT was released in November 2022.
Johnson noted there was discussion in Brussels about establishing a new AI authority and talk in the U.S. about a new government agency. “We have existing agencies already, and that's important to recognize,” he said: Agencies can contemplate the role of AI with existing authorities.
The Anti-Defamation League is glad the EU acted “swiftly” and wants the U.S. to “move in the same direction,” said Policy and Impact Director Lauren Krapf: There are major concerns about the influence of AI systems on hate speech, harassment and extremism online. Automated decision tools, algorithmic recommendations and the increased popularity of generative AI are fueling questions about accountability for AI service providers, she said. She credited California for enacting AB-587, which requires social media platforms to publicly disclose their policies and enforcement practices for online hate, racism, disinformation, extremism, harassment and foreign political interference. Regulations like AB-587 will help shed light on the inner workings of AI tools, she said. Whether or not the U.S. forms a new federal agency, it’s clear Congress needs to increase resources and authorities to account for the explosion in popularity of AI, she said.
It’s not necessary to establish a new federal agency, said Jessica Melugin, director-Competitive Enterprise Institute's Center for Technology and Innovation: There are a “hundred regulatory government agencies, and I think a lot of them have the authority to deal with some of the predictive problems.”
ADL is glad Congress is taking the issue seriously, said Krapf, noting the Senate briefings led by Senate Majority Leader Chuck Schumer, D-N.Y. (see 2306210065). Melugin said it’s important for Congress to take a pro-innovation approach to regulating AI: Policymakers should be responding to “actual problems,” not preempting technological advances through a heavy-handed regulatory scheme in response to “theoretical problems.”
The White House had a listening session Friday with labor leaders to discuss AI implications for workers, unions and employment. Senior officials met with representatives from the Communications Workers of America, AFL-CIO, Screen Actors Guild-American Federation of Television and Radio Artists, American Federation of Teachers and others. Administration attendees included Office of Science and Technology Policy Director Arati Prabhakar and Principal Deputy U.S. Chief Technology Officer Deirdre Mulligan.