New Jersey Attorney General Gurbir Grewal's decision to ban facial recognition app Clearview AI may lead others to take a closer look at the controversial technology. (AP)
The New Jersey attorney general's recent decision to ban law enforcement in the state from using a controversial facial recognition technology should encourage other governments to pump the brakes and take a harder look at police use of such software, some lawyers say.
New Jersey Attorney General Gurbir Grewal told county prosecutors in the state on Jan. 24 to immediately stop using the facial recognition app produced by Clearview AI, a secretive New York-based company that claims on its website to have helped law enforcement "catch the most dangerous criminals, solve the toughest cold cases and make communities safer."
"We are not foreclosing the use of facial recognition technology or of any particular facial recognition service or product in the future," the attorney general's office told Law360 in an email. "However, we need to have a sound understanding of the practices of any company whose technology we use, as well as any privacy issues associated with their technology."
That the attorney general "put a moratorium on Clearview AI's chilling, unregulated facial recognition software" is "unequivocally good news," the American Civil Liberties Union of New Jersey said on Twitter after the ban.
Law enforcement's use of any facial recognition technology to track down suspects is controversial, in part, because of its tendency to misidentify women, nonbinary people and people of color, according to the ACLU-NJ.
But while law enforcement agencies around the country do use facial recognition technologies produced by various companies, the ban on Clearview is due in part to the controversial way in which the company has assembled its massive database, leading to "serious questions about the company's practices regarding data privacy and cybersecurity," the New Jersey attorney general said. Unlike other companies, Clearview claims to have compiled a database of 3 billion "publicly available" images from all over the internet, all of which police can compare to uploaded images of suspects.
"Clearview is just the tip of the iceberg — a window into what the future looks like if lawmakers don't draw a line in the sand now by banning police and government agencies from using this dangerous software," said Evan Greer, director of the advocacy group Fight for the Future, which is working to ban facial recognition nationwide.
Other law enforcement agencies may or may not follow the New Jersey attorney general's lead and ban Clearview's use, but the Garden State ban is "a productive first step in a conversation," said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University School of Law.
"What I took away from the New Jersey attorney general's order is that we ought to put the brakes on government uses of facial recognition technology until we've really had this heart-to-heart with ourselves — how much do we trust the government will use facial recognition technology only for good purposes for the rest of time?" Goldman said.
The answer to that question could prompt steps to control or even ban government use of the technology, including any outsourcing to third-party vendors, Goldman added.
Clearview AI, its attorney and Kirenaga Partners, one of its primary investors, did not respond to requests for comment. But on Jan. 27, the company posted a new code of conduct on its otherwise sparse website stating that "Clearview AI's search engine is available only for law enforcement agencies and select security professionals to use as an investigative tool, and its results contain only public information."
The company added that the app "has built-in safeguards to ensure these trained professionals only use it for its intended purpose."
Most facial recognition programs currently in use allow law enforcement to compare images of suspects to databases composed of mug shots, driver's license photographs and other government-issued or -owned photos and are usually confined to the state they operate in, according to Chief Inspector Jorge Campos of the Gainesville, Florida, Police Department, which uses Clearview.
But Clearview's database, which is national in scope, includes photographs culled from Facebook, Twitter and other social media sites and apps, according to the company's website, which claims to have built that database using "proprietary methods to collect publicly available images from various sources on the internet."
Those "proprietary methods" appear to include "scraping" images from websites like Facebook, according to Goldman.
"One of the main questions we have about Clearview is how do they assemble the database they claim to have," Goldman told Law360. "It's possible but improbable that someone who scraped millions of photos off the internet did so without breaking the law. Possible, but improbable."
Some websites like Facebook do allow third-party developers to collect data with permission, Goldman pointed out. "So, one possibility is that Clearview is going through the front door as opposed to some side or back door," he said.
The company doesn't appear to have such permission from Twitter, though, which sent Clearview a cease-and-desist letter demanding that it stop collecting images from the social media app.
Twitter declined to comment beyond pointing to the company's terms of service, which state that "scraping the services without the prior consent of Twitter is expressly prohibited."
But a Ninth Circuit ruling in September may make it more difficult for Twitter and others to keep Clearview off their platforms, Goldman warned. That ruling in hiQ v. LinkedIn said LinkedIn could not keep data analytics startup hiQ from scraping publicly available member profiles for information.
While some worry about the implications the size and scope of Clearview's database have for privacy, that scope is its attraction for law enforcement.
Clearview aggressively markets itself to police departments and law enforcement agencies, claiming it has contracted with as many as 600 such agencies and that its "technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers," according to its website.
It's unclear how many New Jersey law enforcement agencies have used the app, and several New Jersey county prosecutors' offices and police departments either declined to comment or did not return requests for comment. The New Jersey governor's office also did not respond to a request for comment.
But the police department in Gainesville, Florida, has successfully closed "many" cases using Clearview, Campos said. The GPD has used the app for over 2,000 searches since inking a $10,000, one-year contract with Clearview in September, according to Campos.
"In our testing ... Clearview AI has given us more possibilities to investigate than the other software that we demoed has done," Campos said.
And Clearview isn't breaking the law by scraping the internet for images, at least in Florida, Campos claimed, since under Florida case law, anything a person uploads to a website or app falls in the public domain. So, people have no right to privacy regarding those images, he said.
Campos insisted that only 5% to 8% of the searches GPD has conducted using the app have returned matches from social media sources, with most of those matches coming from booking photographs and other government-produced images.
There are also checks and balances in place, he added. Only 13 individual GPD officers have access to the system, which logs all searches, the devices making the inquiries, and the reasons for the searches. Each quarter, the department audits those records and submits the information to the state.
"A lot of people are jumping all over this and they're coming to, I think, knee-jerk reactions as to how nefarious it is," Campos said. But Clearview is doing "overall good ... in making these identifications. And it's not just suspect identifications, it's victim identifications as well."
Others besides the New Jersey attorney general have serious qualms about Clearview, though.
A class action filed in Illinois federal court in January accuses the company of violating the privacy and constitutional rights of Americans, as well as Illinois' Biometric Information Privacy Act, by illegally scraping their images from the internet without their consent.
"Anyone who is taking someone's biometric identifiers and biometric information without that person's consent is very problematic for the basic reason that a person can't change their biometric identifier," said Scott Drury of Loevy & Loevy, one of the attorneys who filed the suit. "You can get a new Social Security number, you can get a new credit card number, but this is your face."
The controversy over the use of facial recognition technology on social media was at the fore of another case on Wednesday, as Facebook agreed to a record $550 million settlement to resolve a different class action brought by Illinois users who claimed the social media giant breached BIPA by using facial recognition without their consent as part of its tag suggestion feature.
Drury said that while the New Jersey attorney general's ban on Clearview likely won't have an impact on his case, it might prove important in other ways.
"It indicates that governments are becoming aware of the problem, the real big problem, of working with a company like Clearview to surveil Americans," he said.
Have a story idea for Access to Justice? Reach us at firstname.lastname@example.org.
--Editing by Katherine Rautenberg.