Durbin Draws NTIA Attention to Section 230 in AI Comments
Lawmakers must hold companies liable when their artificial intelligence systems cause harm and should consider major updates to Section 230, Senate Judiciary Committee Chairman Dick Durbin, D-Ill., told NTIA in comments released last week.
NTIA publicly released nearly 1,500 comments on its query into how policies should be designed to ensure AI can be trusted (see 2304120036). Durbin and several others discussed why companies using AI shouldn’t be able to use Communications Decency Act Section 230 as a universal defense against liability.
Despite the societal harms from social media the U.S. surgeon general and others have identified, tech companies have “largely escaped accountability because of Section 230,” Durbin said. Enforcers at DOJ, the FTC, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission already “have made clear that civil rights, nondiscrimination, fair competition, consumer protection, and other existing legal protections apply to AI,” he said. It’s a good first step, but legislators and regulators need to explore updates to the law to “ensure the mere adoption of automated AI systems does not allow users to skirt otherwise applicable laws. ... We also should identify gaps in the law where it may be appropriate to impose strict liability for harms caused by AI, similar to the approach used in traditional products liability.”
Section 230 was again a topic during the Senate Judiciary Committee’s markup Thursday (see 2306150053), in which ranking member Lindsey Graham, R-S.C., and other members raised the prospect of eliminating the tech industry’s liability shield. Durbin said he looks forward to holding a hearing on the 1996 statute.
“It may not be realistic, but if you’re asking me if [Section 230] should be repealed, yes,” Sen. Chuck Grassley, R-Iowa, told us Thursday. Repealing the statute isn’t a “serious” option, Sen. Brian Schatz, D-Hawaii, told us. “It couldn’t pass, and the internet would collapse, which I’m sure some people wouldn’t be terribly sad about. But it’s not a practical proposal. Not realistic politically, and it’s also bad policy. Section 230 absolutely needs to be reformed, and there are instances in which these platforms abuse the rights conferred to them by the law, but eliminating it would be explosively bad.”
Sen. Josh Hawley, R-Mo., told us it’s only “unrealistic if senators refuse to stand up to Big Tech,” saying “Google and Big Tech pay so much money to people in this Capitol.” He denied eliminating Section 230 would result in “chaos,” saying, “There's a whole background body of law about publisher liability, about distributor liability, and courts would immediately start applying that law. It’s not as if we would just be like, ‘Oh we have no idea what’s happening.’ There’s a huge body of law that would immediately kick in.” If companies know or should know they’re carrying illegal content, they should be held liable for distribution, he said.
“I support getting rid of 230,” said Sen. Rick Scott, R-Fla. “Everything is realistic. It just takes time sometimes. You’ve got to understand a lot of these social media sites have decided to be publishers. They’re not aggregators of information.” Sen. John Cornyn, R-Texas, told us Congress has been talking about a repeal for years: “I hope we’ll revisit it because I don’t think the Supreme Court’s going to save us, but it’s complicated.” He stopped short of saying he would support an outright repeal.
There should be some legal accountability for harms caused by generative AI, the Anti-Defamation League told NTIA. Section 230 has insulated companies from liability even when “their own tools amplify hate and harassment,” said ADL. The organization “maintains that while tech companies should not necessarily be accountable for user-generated hate content, they should not be granted automatic immunity for their own behavior that results in legally actionable harm.”
Section 230 can’t be used as an “absolute” defense against promotion of illegal activity in the virtual world, Senate Intelligence Committee Chairman Mark Warner, D-Va., told us: “Even some of the biggest advocates of Section 230 realize it should not apply in the AI world on these large language models.”