Advocates Seek AI Ban for Federal Law Enforcement
Federal law enforcement agencies should be banned from using “racially discriminatory technologies” like facial recognition and predictive policing, consumer advocates told DOJ in comments due last week. The Center for American Progress, the Electronic Privacy Information Center, Fight for the Future, Free Press, the Lawyers’ Committee for Civil Rights Under Law and the National Association for the Advancement of Colored People signed joint comments responding to a National Institute of Justice query. NIJ is preparing a report on AI use in the criminal justice system, as directed by President Joe Biden’s AI executive order. DOJ said it doesn’t plan to publish the comments.

Research shows facial recognition technology (FRT) and predictive policing tools are “racially discriminatory,” the groups said. Accordingly, authorities should ban these technologies “by default as [they are] presumptively discriminatory and in violation” of the Civil Rights Act, they argued. The advocates recommended limited, case-by-case waivers under which police could use the technology when there’s clear evidence it isn’t discriminatory, along with audits before and after the technology is deployed.

Algorithmic surveillance tools, including drones and FRT, should be banned in public places or any setting where First Amendment rights could be chilled, the groups said. Police often use these technologies when targeting racial justice protesters and activists, the advocates said: “Because of the acute threat that these technologies pose to First Amendment rights, their use for public surveillance should be prohibited.” They also recommended law enforcement disclose the use of AI technology to defendants in criminal cases.