DOJ and the FTC on Thursday will co-host the first public meeting of President Joe Biden’s Strike Force, a multiagency effort to crack down on unfair and illegal pricing. Launched in March, the Strike Force includes the FTC, DOJ and the FCC. The agencies are focused on issues such as high internet costs, junk fees and competition. FTC Chair Lina Khan and DOJ Antitrust Division Chief Jonathan Kanter are scheduled to speak at the meeting.
Bing will use AI technology to aggregate online information and generate machine learning-driven summaries for search queries, Microsoft announced Wednesday. Google already uses machine learning tools to generate similar search summaries. Bing’s generative search tool will be available for a “small percentage of user queries” initially, Microsoft said. The new tool “understands the search query, reviews millions of sources of information, dynamically matches content, and generates search results in a new AI-generated layout to fulfill the intent of the user’s query more effectively.”
Allowing Mississippi to enforce its new age-verification law would cause irreparable harm in violation of the First Amendment, a federal judge ruled Monday in a victory for NetChoice (see 2407030076) (docket 1:24-cv-170-HSO-BWR). The tech association is suing to block HB-1126, which requires that platforms obtain parental consent for social media users younger than 18. The U.S. District Court for the Southern District of Mississippi on July 1 granted NetChoice’s request for a preliminary injunction against HB-1126, finding the association is likely to succeed on the merits of its First Amendment challenge. District Judge Halil Suleyman Ozerden on Monday denied a request from Mississippi Attorney General Lynn Fitch (R) to stay the preliminary injunction. Ozerden cited previous findings stating that the plaintiff’s “loss of First Amendment freedoms, for even minimal periods of time, unquestionably constitutes irreparable injury. ... For the same reasons it granted preliminary injunctive relief, the Court finds that the Attorney General is not likely to succeed on the merits of the appeal.”
X is violating the Digital Services Act (DSA) in areas linked to dark patterns, advertising transparency and data access for researchers, the European Commission said Friday. These are the first preliminary findings issued under the DSA. They follow a separate pending investigation launched in December on different issues, EC officials said at a briefing. X didn't immediately comment. Officials voiced concerns about three aspects of X's setup. One is the interface for "verified" accounts with a "blue checkmark." The EC believes the checkmarks mislead users into thinking the accounts and content they're seeing are trustworthy and reliable. In addition, when EC researchers looked at reply feeds on particular posts, they found that X prioritizes content from blue checkmark accounts. Because anyone can obtain "verified" status simply by paying for it, this breaches DSA rules against dark patterns, defined as interfaces and user experiences on social media platforms that cause users to make unintended, unwilling and potentially harmful decisions about processing of their personal data. The practice prevents users from making informed decisions about the authenticity of the accounts and content they're seeing. The EC's second "grievance" arises from X's failure to maintain a searchable, publicly available advertisement repository that would allow researchers to inspect and supervise tweets to track emerging risks from online ads, officials said. X formerly gave researchers such access, but Elon Musk rescinded it. The repository is a key obligation under the DSA because it allows anyone to search for an ad on the platform to find out who placed it and what its targeting criteria are, officials said. The third item concerns researcher access to X's public data: The platform lacks a process for allowing researchers to scrape its public data, and its procedure for giving qualified researchers access to its application programming interfaces is too slow and complex, officials said.
This falls well below DSA requirements that third parties be able to inspect what's happening on the platform, they said. If the findings are confirmed, X could be fined up to 6% of its total worldwide annual revenue and ordered to remedy the breaches, the EC added. The EC designated X a very large online platform under the DSA in April 2023 after the platform declared that it had more than 45 million monthly active users in the EU, the commission noted.
The Cybersecurity and Infrastructure Security Agency could serve as a one-stop “clearinghouse” for industry stakeholders to report cyber incidents, Paul Eisler, USTelecom's vice president-cybersecurity and innovation, said Thursday. Eisler discussed CISA’s proposed cyber incident reporting rules during a USTelecom webinar. He noted the telecom sector reports cyber incidents to a long list of agencies, including the FCC, FTC, DOJ, SEC and state government entities. Having cyber officials fill out “five different” reports for one incident distracts them from fending off future attacks, he said. There need to be “concrete, tangible” steps toward solutions after an incident, he said. USTelecom, NCTA and Microsoft filed comments in CISA’s latest round of public comments on the proposed regulations (see 2407030059).
The 2024 Paris Olympics will see the largest number of cyber threats of any Games yet, including “the most complex threat landscape, the largest ecosystem of threat actors, and the highest degree of ease for threat actors to execute attacks,” IDC said in a report Thursday. Accordingly, revenue from cybersecurity services in France will grow by $94 million in 2024 as a result of the Games, which start July 26, IDC estimates. “Paris 2024 will be the most connected Games ever, including but not limited to back-of-house systems, financial systems, critical national infrastructure, city infrastructure, sport technology, broadcast technology, and merchandising and ticketing,” IDC said. While the risk is highest for venues and other assets used in the Games, “it permeates outward and seemingly unrelated assets can come under attack, including critical national infrastructure and many French businesses.”
NGL Labs is banned from offering its popular anonymous messaging app to children and teens, the FTC announced Tuesday in a $5 million settlement secured jointly with the Los Angeles District Attorney’s Office. NGL Labs and co-founders Raj Vir and Joao Figueiredo unfairly marketed the app to minors by sending fake messages, falsely claimed AI-related protections against cyberbullying, and violated parental consent requirements under the Children’s Online Privacy Protection Act, the agency alleged. The commission voted 5-0 to file the complaint and proposed order, which a federal court must approve. The defendants “sent fake messages that appeared to come from real people and tricked users into signing up for their paid subscription by falsely promising that doing so would reveal the identity of the senders of messages,” the FTC said. NGL “marketed its app to kids and teens despite knowing that it was exposing them to cyberbullying and harassment,” Chair Lina Khan said. Commissioners Andrew Ferguson and Melissa Holyoak issued a concurring statement. They disagreed with language in the complaint suggesting FTC Act Section 5 prohibits marketing any anonymous messaging app to minors. Anonymous speech is protected under the First Amendment's free speech clause, and online anonymity for children and teens has benefits, they said. Those benefits include access to mental health resources and insulation from the “maw of cancel culture.” Figueiredo said in a statement the company has cooperated with the agency for two years and considers the settlement an “opportunity to make NGL better than ever for our users.” While NGL believes “many of the allegations around the youth of our user base are factually incorrect, we anticipate that the agreed upon age-gating and other procedures will now provide direction for others in our space, and hopefully improve policies generally.”
Regulators should establish an AI safety model with a supervised process for developing standards and a market that rewards companies exceeding those standards, former FCC Chairman Tom Wheeler said Monday in a Brookings Institution column written with telecom and tech policy analyst Blair Levin. The supervised process should convene “affected companies and civil society to develop standards,” they wrote. “Just as the standard for mobile phones has been agile enough to evolve from 1G through 5G as technology has evolved, so can a standard for AI behavior evolve as the technology evolves,” they said. Wheeler and Levin recommended ongoing oversight, which would require transparency and collaboration between public and private partners.
The U.S. Supreme Court on Tuesday agreed to hear a case on the constitutionality of a Texas age-verification law. The high court granted certiorari in Free Speech Coalition v. Paxton (docket 23A925). Introduced by state Rep. Matthew Shaheen (R) in 2023, Texas’ HB-1181 requires websites publishing a certain amount of “sexual material harmful for minors” to verify the age of every site visitor. The law applies to sites if one-third or more of their content is in this category. Sites face up to $3 million in penalties for violations. Similar laws are set to take effect or have taken effect in Arkansas, Louisiana, Mississippi, Montana, North Carolina, Utah and Virginia. The Free Speech Coalition, a pornography industry trade association, sued to block the law on First Amendment grounds. The U.S. District Court in Austin agreed in August to block the law, one day before its Sept. 1 effective date. U.S. District Judge David Ezra found the law likely violates the First Amendment rights of adults trying to access constitutionally protected speech. The Fifth U.S. Circuit Court of Appeals partially vacated the injunction, finding the age-verification requirements to be constitutional. The provision “is rationally related to the government’s legitimate interest in preventing minors’ access to pornography,” the court said.
California’s Senate Judiciary Committee on Tuesday passed legislation that would set stricter limits on sharing children’s personal data. The committee unanimously approved the California Children’s Data Privacy Act (AB-1949). Assemblymember Buffy Wicks (D) introduced the bill, which would ban websites and platforms from collecting and sharing personal data of users younger than 18 without their informed consent. For users younger than 13, companies would need to obtain parental consent. AB-1949 would amend the California Consumer Privacy Act. The Computer & Communications Industry Association, TechNet and CalChamber voiced opposition to the bill Tuesday. Common Sense Media and California Attorney General Rob Bonta (D) support the measure. Wicks noted that Bonta’s office investigated Meta and alleged the company knows children younger than 13 access its platforms and that their data is collected in violation of federal law. Bonta’s probe showed about 30% of residents between ages 10 and 12 access Meta platforms. On Tuesday, Bonta's office called AB-1949 a “simple, graceful” solution strengthening children’s privacy protections. In addition, the committee passed AB-2839, a deepfake-related measure that would prohibit people and companies from sharing election campaign content that contains “materially deceptive and digitally altered or digitally created images or audio or video files” with the intent of influencing elections. Entertainment companies voiced opposition to the legislation Tuesday. The Motion Picture Association, meanwhile, said it wants sponsors to exempt streaming services from the bill. Warner Bros. Discovery, Sony and NBCUniversal said they are aligned with MPA in opposition. Assemblymember Gail Pellerin (D), the bill's author, said getting accurate information to state voters is “crucial to a functioning democracy.”