Existing law needs updating to protect artists and individuals from fake AI-generated content, House Intellectual Property Subcommittee Chairman Darrell Issa, R-Calif., said Friday during a hearing in Los Angeles.
Section 230
The Senate Judiciary Committee will seek support from Meta, X, TikTok and Discord for kids’ privacy legislation at Wednesday's hearing, where their CEOs are scheduled to appear, Sen. Richard Blumenthal, D-Conn., told reporters Tuesday.
Consumer and industry advocates sounded alarms late last week over a proposed California ballot initiative that would make social media companies liable for up to $1 million in damages for each child their platform injures. Courts would likely find that Common Sense CEO James Steyer’s December proposal violates the First Amendment and Section 230 of the Communications Decency Act, said comments California DOJ forwarded to us Friday. For example, “Initiative 23-0035 is a misguided and unconstitutional proposal that will restrict all Californians’ access to online information,” the Electronic Frontier Foundation (EFF) said.
First Amendment questions linger over censoring online chatbots, even when they encourage users to kill themselves, a tech industry executive told the Senate Homeland Security & Governmental Affairs Committee Wednesday. Sen. Josh Hawley, R-Mo., pressed Information Technology Industry Council General Counsel John Miller about an online user who took his life after interacting with a chatbot that encouraged him to do so. Hawley argued individuals and their families, including parents of young users, should be able to sue tech companies for such incidents. Miller said companies don’t want chatbots “doing those sorts of things,” but AI is responsible for a lot of good; the technology has been useful in cancer research, for example, he said. Asked if companies should be sued for AI's negative impacts, Miller said, “Under the current law, that’s probably not allowable,” alluding to Communications Decency Act Section 230, which he said has enabled technological innovation and which the U.S. Supreme Court has upheld. Hawley asked Miller if he would support legislation the senator sponsored with Sen. Richard Blumenthal, D-Conn., the No Section 230 Immunity for AI Act. The proposed law would clarify that Section 230 doesn’t apply to claims based on generative AI activity (see 2306150059). Miller said he hasn’t reviewed the bill but argued there are “other equities at play in this discussion,” including the First Amendment. Hawley asked Miller whether a chatbot encouraging a teenager to kill himself is First Amendment-protected speech: “Is that your position?” Miller said it’s not his position, but “I don’t think the question’s been resolved.”
Generative AI is expanding Big Tech’s data monopoly and worsening news outlets' financial crisis, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., agreed Wednesday while hearing testimony about The New York Times Co. (NYT) lawsuit against Microsoft and OpenAI.
Future Section 230 cases before the 5th U.S. Circuit Court of Appeals face "an extreme risk of judicial activism to overturn the existing 5th Circuit precedent and disrupt decades of Section 230 jurisprudence," Santa Clara University law professor Eric Goldman blogged Tuesday. Pointing to the dissent issued Monday in a 5th Circuit denial of an en banc rehearing motion involving a lawsuit against messaging app Snapchat (see 2312180055), Goldman said the seven dissenting judges' goal "seems to be to urge the Supreme Court to take this case." Written by Circuit Judge Jennifer Walker Elrod, the dissent criticized the 5th Circuit's previous "atextual interpretation" of Section 230, "leaving in place sweeping immunity for social media companies that the text cannot possibly bear." "Declining to reconsider this atextual immunity was a mistake," Elrod wrote.
Sen. Josh Hawley, R-Mo., may object to FTC nominee Andrew Ferguson's candidacy, potentially blocking him from expedited confirmation.
NTIA Administrator Alan Davidson announced Wednesday the launch of the agency's public consultation process related to its forthcoming report to President Joe Biden on the risks, benefits and regulatory approaches to AI foundation models, as directed in a Biden AI executive order (see 2310300056). Speaking at an event hosted by the Center for Democracy and Technology, Davidson said the report will focus on pragmatic AI policies rooted in the technical, economic and legal realities of the technology. The Biden order gave the Commerce Department 270 days to gather public input and deliver the AI recommendations, Davidson said.
Removing liability protections for generative AI tools would have significant, negative impacts on online free expression, content moderation and innovation, advocates and industry groups wrote Senate leaders Monday. Sen. Josh Hawley, R-Mo., announced plans to seek unanimous consent through a hotline process for his No Section 230 Immunity for AI Act (see 2306150059). A coalition of groups, including the Computer & Communications Industry Association, American Civil Liberties Union, Americans for Prosperity, Center for Democracy and Technology, Chamber of Progress, Electronic Frontier Foundation, R Street Institute and TechFreedom, sent the opposition letter to Senate Majority Leader Chuck Schumer, D-N.Y., and Minority Leader Mitch McConnell, R-Ky. Generative AI “is a complex issue that deserves careful thought and nuanced, precise legislation -- not a rigid, heavy-handed overreaction that threatens to undermine free speech, user safety, and American competitiveness in the AI marketplace,” they wrote. “We urge Congress to consider a more thoughtful approach.”
Snap CEO Evan Spiegel will appear before the Senate Judiciary Committee, the company said Monday, after the committee announced subpoenas seeking testimony from Snap, X and Discord about children's online safety. The committee also said it’s in discussions about potential voluntary testimony from Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew.