‘Dangerously Abused’

Hawley, Blumenthal Seek Meta Answers; AI Regulation Debated

Meta exposed its artificial intelligence technology to risks of spam, fraud, malware and privacy abuse by allowing the unrestrained release of its Large Language Model Meta AI (LLaMA) program, Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., wrote Tuesday.

Generative AI tools have been “dangerously abused” through open source models, they wrote. They cited Stability AI’s launch of Stable Diffusion, an open source art generator, which was used to “create violent and sexual images, including pornographic deep fakes of real people.” ChatGPT, OpenAI’s closed model, has been “misused to create malware and phishing campaigns, financial fraud, and obscene content involving children,” they wrote. They asked what Meta is doing to protect the technology from abuse, noting the company originally planned to release the program to approved researchers only. Meta’s “vetting and safeguards appear to have been minimal and the full model appeared online within days,” they wrote. Meta didn’t comment.

Hawley and Blumenthal are exploring ways to regulate AI technology (see 2305160074 and 2305250037). Sen. Elizabeth Warren, D-Mass., and Senate Judiciary Committee ranking member Lindsey Graham, R-S.C., are preparing to file legislation to form a new tech regulator, potentially focused on AI. It’s clear legislators are “chomping at the bit” to pass new AI regulation, and the proposal to create a new regulatory agency is gaining steam, said Center for Data Innovation Director Daniel Castro during a Tuesday webinar.

Current statutes governing FTC and DOJ enforcement have gaps that don’t account for AI technology and large language models that might “exceed human intelligence,” said Duke University faculty fellow Lee Tiedrich. University of Maryland computer science professor Ben Shneiderman agreed with American Enterprise Institute nonresident senior fellow Shane Tews that legislators should be looking to regulate the outcomes of the technology, not the internal workings of the models. “It’s not the AI that’s the problem,” he said, noting he’s opposed to a new agency regulating models and algorithms. “It’s the users of AI, and it’s the uses of AI that we want to regulate.” Existing agencies have the ability to apply the current legal framework to AI technology, he said.

It’s “crazy to think” officials at the hundreds of existing federal agencies aren’t paying attention to AI technology, said R Street Institute Senior Fellow Adam Thierer. Every single agency is exploring how AI, machine learning and robotics affect its authority, he said. He noted proceedings at the FTC, FDA, FAA, the Consumer Product Safety Commission, the National Highway Traffic Safety Administration and the Equal Employment Opportunity Commission, and said NTIA and the National Institute of Standards and Technology have done a “good job” leading multi-stakeholder processes to coalesce on AI guidance and best practices for ethical AI use.

Agencies are under-resourced on this issue because the technology is so new and fast-evolving, said Tiedrich. A good place to start is an agency review of existing activities and where resources are lacking, she said. Tiedrich and Tews raised the issue of content moderation and how automated tools relate to Communications Decency Act Section 230. There are questions about how insulated platforms should be from liability when bad actors use these automated tools to harm users, said Tiedrich. “The tools that they are using in AI are the ones that are creating the malicious information,” said Tews. “So trying to stamp that out is a big challenge.”