The district court properly dismissed with prejudice five counts of Hadona Diep and Ryumei Nagao's complaint against Apple for injuries caused by a malicious app called Toast Plus that they downloaded from the App Store, said a 9th U.S. Circuit Court of Appeals panel memorandum Wednesday (docket 22-16514). Those counts were barred by Section 230 of the Communications Decency Act, said the memorandum. But the U.S. District Court for Northern California erred by dismissing the plaintiffs’ three consumer protection claims with prejudice and without leave to amend, it said. Because the plaintiffs could conceivably cure the “pleading deficiencies” in the consumer protection claims, they “should have been afforded the opportunity to amend their complaint,” it said. And because the district court’s denial of leave to amend those claims “was premised on legal error,” the panel vacated the district court's judgment as to those claims and remanded with instructions to grant the plaintiffs “leave to amend their complaint as to those claims,” it said. Circuit Judges Sidney Thomas and Morgan Christen and 7th Circuit Judge David Hamilton, sitting by designation, made up the panel.
Amazon is responsible for the decimation of legitimate companies like Planet Green that supply genuinely remanufactured and recycled printer ink cartridges, said Planet Green’s opening brief Friday (docket 23-4434) in the 9th U.S. Circuit Court of Appeals. It's seeking reversal of the district court’s dismissal of its fraud complaint against Amazon, which held that Section 230 of the Communications Decency Act shields Amazon from liability (see 2312290030). Amazon is the dominant source of foreign-made “clone” printer ink cartridges -- newly manufactured products that are misrepresented to consumers as remanufactured and recycled, when they aren’t, said Planet Green. Amazon imports the falsely labeled clone cartridges from overseas, stores them in its warehouses and distributes them to consumers throughout the U.S., said the brief. It takes “title” to them and “itself sells them directly to consumers in packaging and bearing labels that falsely identifies the clone cartridges as remanufactured or recycled,” it said. Amazon also promotes them through its own statements on the Amazon website, via email and on third-party internet platforms, it said. It also “participates extensively” in the promotion and sale of the clone cartridges by third-party sellers on its website, and “profits handsomely from those sales,” it said. On Section 230, the district court wrongly found that Amazon was entitled to “complete immunity” from Planet Green’s action, “even though its claims arise in significant part from statements, sales, and conduct by Amazon itself that do not constitute the publication of third-party statements over Amazon’s website,” it said. The court also held that Amazon couldn’t be held liable for false and misleading product listings, it said. The district court ultimately gave Amazon a “get-out-of-jail-free card” that would allow it to disregard “any legal obligation to avoid deceiving consumers about printer ink cartridges,” it said.
That result “twists” Section 230, which is a statute “focused on limiting liability for the publication of third party statements on the internet, beyond recognition,” it said: “It must be reversed.”
The X platform thinks the district court “improperly applied” the U.S. Supreme Court’s 1985 decision in Zauderer v. Office of Disciplinary Counsel of the Supreme Court of Ohio when it denied X’s motion for a preliminary injunction to block California from enforcing the state’s social media transparency law (AB-587) that took effect Jan. 1 (see 2401020002), said X’s mediation questionnaire Friday (docket 24-271) at the 9th U.S. Circuit Court of Appeals. Zauderer widened protection for commercial speech by striking down most of Ohio’s restrictions on advertising by attorneys. But Zauderer doesn’t “apply here” because the compelled speech at issue “is content-based, not commercial, not purely factual, and not uncontroversial,” said the questionnaire. X thinks AB-587 violates the First Amendment because it compels X “to engage in speech against its will,” it said. AB-587 also interferes with X’s “constitutionally protected editorial judgments,” it said. The statute also “has both the purpose and likely effect” of pressuring X to “remove, demonetize, or deprioritize” constitutionally protected speech that the state “deems undesirable or harmful,” it said. Because the California legislature passed AB-587, and because the parties disagree about its constitutional and legal validity, X doesn’t believe “this action is appropriate for mediation,” said the questionnaire. In denying the preliminary injunction motion, the district court held that X “failed to establish a likelihood of success on the merits” of its First Amendment and Section 230 preemption challenges, it said.
U.S. District Judge David Barlow for Utah in Salt Lake City granted in part the parties’ stipulated motion for an amended briefing schedule on NetChoice’s Dec. 20 motion for a preliminary injunction to block Utah Attorney General Sean Reyes (R) from enforcing the Social Media Regulation Act when it takes effect March 1 (see 2312230004), said the judge’s text-only docket order Tuesday (docket 2:23-cv-00911). Reyes’ deadline for filing an opposition to the injunction motion is Jan. 23, and NetChoice’s reply brief is due Feb. 6, said Barlow’s order. The parties were seeking deadlines of Jan. 31 and Feb. 12, respectively. Reyes’ response to the NetChoice complaint is due 21 days after Barlow rules on the motion for an injunction, as the parties requested, said the order. Due to the “issues being litigated,” more extended deadlines “are unlikely to be workable” in light of the statute's fast-approaching March 1 effective date, said the order. NetChoice contends that the Utah statute is unconstitutional because it uses content-, viewpoint- and speaker-based definitions to restrict minors’ and adults’ ability to access and engage in protected speech. NetChoice also contends that the statute uses those definitions to restrict how certain websites organize, display and disseminate protected speech. NetChoice argues that the entire statute violates the First Amendment and the due process clause, and that Section 230 of the Communications Decency Act preempts parts of it.
U.S. District Judge Otis Wright for Central California in Los Angeles granted Grindr's motion to dismiss with prejudice and without leave to amend plaintiff John Doe’s child sex trafficking complaint against the operator of the dating app for LGBTQ+ people, said the judge's signed Dec. 28 order (docket 2:23-cv-02093). “The facts of this case are indisputably alarming and tragic,” it said. “No one should endure” what plaintiff Doe has. But “after careful review and consideration of the facts and applicable law,” the court "ultimately determines" that Doe’s claims are "precluded" by Section 230 of the Communications Decency Act, it said. In spring 2019, Doe was 15 and lived in a small town in Nova Scotia, where he “knew he was gay but was too ashamed to tell his parents,” said the order. “Seeking queer community,” Doe installed the Grindr app, misrepresented that he was older than 18 and created a user profile, it said. Grindr didn’t verify Doe’s age, it said. Over a four-day period, the app matched Doe with four “geographically proximate adult men,” it said. Doe and the men exchanged direct messages, personal information and sexually explicit photos, it added. Doe met each man and was sexually assaulted and raped, it said. After Doe’s mother confronted him, Doe told her he was on Grindr, that the app matched him with adult men and that they had raped him, it said. Three of the men are in prison for sex crimes, while the fourth remains at large, it said.
Plaintiff-appellant Planet Green Cartridges faces a Tuesday deadline for filing its mediation questionnaire in its 9th U.S. Circuit Court of Appeals appeal seeking to reverse the district court’s Oct. 5 dismissal of its false advertising claims against Amazon, according to a time schedule order Thursday (docket 23-4434). Planet Green’s opening brief is due Feb. 6, and Amazon’s answering brief March 6, said the order. Planet Green alleges Amazon failed to deactivate the accounts of third-party sellers that falsely advertised multiple brands of cheap Chinese-made “clone” ink cartridges to consumers as legitimate recycled OEM cartridges, though the clones have none of the quality or green benefits of the authentic recycled product. Planet Green describes itself as one of the last U.S.-based recycled cartridge manufacturers and says Amazon's tolerance of the falsely advertised products is hurting its business. The district court’s dismissal in Amazon's favor found that Section 230 of the Communications Decency Act provided Amazon immunity from all of Planet Green’s claims. The court also held that Planet Green failed to identify any false statement of fact Amazon had made, and that the negligence claim failed to allege a legal duty that Amazon owed to Planet Green.
Future Section 230 cases before the 5th U.S. Circuit Court of Appeals face "an extreme risk of judicial activism to overturn the existing 5th Circuit precedent and disrupt decades of Section 230 jurisprudence," Santa Clara University law professor Eric Goldman blogged Tuesday. Pointing to the dissent issued Monday in a 5th Circuit denial of an en banc rehearing motion involving a lawsuit against messaging app Snapchat (see 2312180055), Goldman said the seven dissenting judges' goal "seems to be to urge the Supreme Court to take this case." Written by Circuit Judge Jennifer Walker Elrod, the dissent criticized the 5th Circuit's previous "atextual interpretation" of Section 230, "leaving in place sweeping immunity for social media companies that the text cannot possibly bear." "Declining to reconsider this atextual immunity was a mistake," Elrod wrote.
The 5th U.S. Circuit Court of Appeals, by an 8-7 vote, denied plaintiff-appellant John Doe’s motion for rehearing en banc of the court’s affirmation of the district court’s Section 230 dismissal of his claims against Snap, said its order Monday (docket 22-20543). Doe was sexually assaulted by his high school teacher when he was 15 years old. The teacher, Bonnie Guess-Mazock, who pleaded guilty to sexual assault, used Snapchat to send him sexually explicit material. Doe sought to hold Snap accountable for its alleged encouragement of that abuse. Bound by the 5th Circuit’s “atextual interpretation” of Section 230, the U.S. District Court for Southern Texas and a panel of the 5th Circuit “rejected his claims at the motion to dismiss stage,” wrote the seven dissenting judges. The en banc court, by a margin of one, voted against revisiting the circuit’s “erroneous interpretation” of Section 230, “leaving in place sweeping immunity for social media companies that the text cannot possibly bear,” they said. That “expansive immunity” is the result of adopting the too-common practice of reading extra immunity into statutes where it doesn’t belong, and of relying on policy and purpose arguments to grant sweeping protection to internet platforms, they said. “Declining to reconsider this atextual immunity was a mistake,” they said. Section 230 “closes off one avenue of liability” for social media companies by preventing courts from treating platforms as the “publishers or speakers” of third-party content, they said. “In fact, Section 502 of the Communications Decency Act expressly authorizes distributor liability for knowingly displaying obscene material to minors,” they said. “It strains credulity to imagine that Congress would simultaneously impose distributor liability on platforms in one context, and in the same statute immunize them from that very liability,” they said.
“That our interpretation of Section 230 is unmoored from the text is reason enough to reconsider it,” said the dissenting judges. “But it is unmoored also from the background legal principles against which it was enacted,” they said. Doe urged the 5th Circuit “to treat Snap as a distributor and not as a publisher,” they said. Doe states, correctly, that Section 230 was enacted to provide immunity for creators and publishers of information, not distributors, they said. The Communications Decency Act itself authorizes liability for platforms as distributors, they said. But “our overbroad reading of Section 230 renders Doe’s claim dead in the water,” they said.
The court should dismiss a privacy complaint against X, formerly Twitter, because the plaintiff has no viable claim that the social media platform violated the Illinois Biometric Information Privacy Act (BIPA), said X's Nov. 20 reply (docket 1:23-cv-05449) in support of its motion to dismiss. Plaintiff Mark Martell’s August complaint alleged X implemented software to police pornographic and other “not-safe-for-work” images uploaded to Twitter without adequately informing individuals who interacted with the platform “that it collects and/or stores their biometric identifiers in every photograph containing a face that is uploaded to Twitter” (see 2308160021). In his opposition to X’s motion to dismiss, Martell failed to explain how PhotoDNA collects scans of facial geometry when the PhotoDNA webpage “directly refutes that allegation,” said the reply. Martell concedes that PhotoDNA doesn’t enable X to identify individuals depicted in images uploaded to Twitter, but he argued that biometric identifiers don’t have to be capable of identifying an individual, a position X called an “oxymoron.” By definition, a biometric identifier must be capable of identifying an individual, X said. Because BIPA applies only to data that can be used to identify an individual, and because Martell “fails to allege that PhotoDNA enables X to identify anyone,” his claims should be dismissed, it said. The plaintiff also fails to explain why Section 230(c)(2)(A) of the Communications Decency Act doesn’t immunize X from BIPA liability, said the reply.
Though Martell said he didn’t seek to post obscene images on X and isn’t suing X for its actions as a publisher or speaker, “both arguments are irrelevant,” said X's reply. All that matters under the CDA, it said, is whether X’s alleged conduct is an “action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” That is what X does with PhotoDNA: It uses the tool to “identify and remove known images of child exploitation,” the reply said. If PhotoDNA can’t be used to identify individuals depicted in images, “how can X possibly seek those individuals’ consent” under BIPA, it asked. If the court determines that Martell’s understanding of BIPA is correct, and that Section 230 doesn’t immunize X from liability, X will face an “impossible choice”: “Subject itself to recurring and outsized liability under BIPA by continuing to use PhotoDNA to remove child-exploitation images, or halt that use and allow child-exploitation images to circulate on its platform,” it said.
Few courts have explored Section 230’s application to websites’ “algorithmic recommendations” in depth, reported the Congressional Research Service Thursday. But Congress “may consider” whether the broad Section 230 immunity currently recognized by courts “should apply to algorithmically sorted content or, alternatively, whether certain behavior or content should warrant different treatment under Section 230,” it said. Members of the 117th Congress introduced several bills that would have addressed Section 230’s relationship with algorithmically sorted or recommended content, it said. Those bills “generally would have restricted the availability of Section 230’s protections” for platforms that recommend or promote certain content, it said. One of those bills, the Discourse Act, was reintroduced in the 118th Congress as S-921, it said. The free speech clause of the First Amendment “limits the government's ability to regulate speech,” said the report. Proposals that make Section 230’s protections unavailable for certain algorithmic operations “raise at least three questions,” it said. One is whether, if Section 230 is unavailable, hosting or promoting others’ speech on the internet “is itself protected under the First Amendment,” it said. If it is, the First Amendment “might restrict liability,” it said. A second question is whether modifying an existing liability regime “raises the same First Amendment concerns as enacting a law that directly prohibits or restricts speech,” said the report. A third question is, if such a proposal does raise First Amendment concerns, whether withholding Section 230’s protections for certain algorithmic operations affects speech based on its content, it said.