Addressing Facial Recognition Tech's Discriminatory Potential

By Laura Jehl and Kari Prochaska
Law360 (June 29, 2020, 6:02 PM EDT) --
Facial recognition systems, or FRS, are automated or semiautomated technologies that analyze an individual's features by extracting facial patterns from video or still images. FRS convert the attributes of an individual's face into data that can be used to uniquely identify that individual.
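
As a simplified illustration of that matching step, the sketch below compares a hypothetical facial template (an "embedding") against enrolled templates using a distance threshold. The vectors, names and the 0.1 cutoff are invented for illustration only and do not describe any particular vendor's system.

```python
import numpy as np

# Hypothetical face templates ("embeddings"). In a real FRS these vectors are
# produced by a trained model from a photo or video frame; the numbers below
# are fabricated purely to illustrate the comparison step.
enrolled = {
    "person_a": np.array([0.12, 0.80, 0.33, 0.45]),
    "person_b": np.array([0.90, 0.10, 0.62, 0.05]),
}
probe = np.array([0.14, 0.78, 0.30, 0.47])  # template extracted from a new image

THRESHOLD = 0.1  # illustrative cutoff: smaller distance = more similar faces


def verify(probe, enrolled_template, threshold=THRESHOLD):
    """One-to-one check: is the probe close enough to a single enrolled template?"""
    return np.linalg.norm(probe - enrolled_template) <= threshold


def identify(probe, gallery, threshold=THRESHOLD):
    """One-to-many search: which enrolled identities fall within the threshold?"""
    return [name for name, template in gallery.items()
            if np.linalg.norm(probe - template) <= threshold]


print(verify(probe, enrolled["person_a"]))  # True (distance is roughly 0.05)
print(identify(probe, enrolled))            # ['person_a']
```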

FRS use has grown exponentially in recent years. In addition to widespread adoption by law enforcement agencies, FRS are also frequently used in the retail, banking and security sectors, including for airport screening.

Particularly in recent weeks and months, legal and technical issues associated with FRS have come to the forefront, including concerns that the technology lacks accuracy in identifying nonwhite individuals and that its widespread use by police departments may play a role in racially discriminatory policing.

In response to the global COVID-19 pandemic, public health agencies and private sector companies have considered ways that FRS might be used in conjunction with proximity and geolocation tracking data to control the disease's spread. Some foreign governments have implemented extensive biometric and behavioral monitoring to track and contain the spread of the virus and have used FRS to identify persons who have been in contact with COVID-19-positive individuals and to enforce quarantine or stay-at-home orders.

By contrast, the use of FRS in the U.S. already faced opposition over data privacy concerns before COVID-19, and it has encountered increased backlash following the civil rights protests of the past month because of concerns over the technology's accuracy and questions about its use by law enforcement agencies.

Accuracy Concerns

There are currently no industry standards for the development of FRS, and as a result, FRS algorithms differ significantly in accuracy. A December 2019 National Institute of Standards and Technology study, the third in a series conducted through its face recognition vendor test program, evaluated the effects of factors such as race and sex on facial recognition software.

The study analyzed 189 facial recognition algorithms from 99 developers, using collections of photographs with approximately 18 million images of eight million people pulled from databases provided by the U.S. Department of State, the U.S. Department of Homeland Security and the Federal Bureau of Investigation.

The study found disproportionately higher false positive rates for African American, Asian and Native American faces in one-to-one matching, and higher false positive rates for African American females in one-to-many matching, putting that group at the greatest risk of misidentification.
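
For readers unfamiliar with the metric, the short sketch below shows how a false positive rate is calculated from non-mated comparisons, meaning pairs of images of different people. The group names and counts are fabricated for illustration and are not drawn from the NIST data.

```python
# Fabricated example: false positive rate per demographic group, computed as the
# share of non-mated comparisons (images of different people) that the system
# wrongly declares a match. These counts are illustrative, not NIST results.
non_mated_trials = {
    "group_x": {"comparisons": 100_000, "false_matches": 35},
    "group_y": {"comparisons": 100_000, "false_matches": 310},
}

for group, counts in non_mated_trials.items():
    fpr = counts["false_matches"] / counts["comparisons"]
    print(f"{group}: false positive rate = {fpr:.3%}")

# At the same threshold, a system with uneven error rates like these would
# misidentify members of group_y far more often than members of group_x.
```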

While law enforcement agencies are encouraged to adopt a high match-confidence threshold, often 99%, when using FRS, in practice police departments exercise broad discretion over whether to adhere to that standard or to apply a lower threshold.
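
To show why that discretion matters, the toy example below applies two different confidence cutoffs to the same one-to-many search results; the candidate names and scores are invented, and the 99% figure simply mirrors the threshold mentioned above.

```python
# Invented candidate list from a hypothetical one-to-many FRS search, with
# match-confidence scores (higher = more similar to the probe image).
candidates = [("candidate_1", 0.994), ("candidate_2", 0.981), ("candidate_3", 0.912)]


def flagged(candidates, confidence_threshold):
    """Return only the candidates at or above the chosen confidence cutoff."""
    return [name for name, score in candidates if score >= confidence_threshold]


print(flagged(candidates, 0.99))  # ['candidate_1'] under the strict 99% standard
print(flagged(candidates, 0.90))  # all three candidates under a lower, discretionary cutoff
```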

Adoption by Law Enforcement

In addition to federal law enforcement agencies, local police departments throughout the country have deployed FRS. Concerns about use — and possible abuse — by law enforcement agencies were highlighted by early 2020 media reports about Clearview AI, an FRS company that purports to have amassed more than three billion images scraped from publicly available social media websites, and which counts many law enforcement agencies as clients. This publicity brought regulatory scrutiny as well.

The Vermont Attorney General sued Clearview AI in March, alleging that the company collects data without Vermont residents' notice or consent and violates Vermont's data broker law through the fraudulent collection of screen-scraped data. In late May, the American Civil Liberties Union sued Clearview AI in Illinois state court, alleging violation of Illinois' Biometric Information Privacy Act.

In addition, Sen. Edward Markey, D-Mass., sent Clearview AI letters requesting additional information in January and April regarding his concerns about the technology. He followed up most recently on June 8 to highlight concerns in response to civil rights protests.

Federal, State and Local Regulation of FRS

There is currently no federal law regulating the use of FRS. In February, Sens. Jeff Merkley, D-Ore., and Cory Booker, D-N.J., introduced Senate Bill 3284, the Ethical Use of Facial Recognition Act, which provides for a moratorium on the use of FRS. The accompanying press release cited the 2019 NIST study and the disproportionate effect the technology might have on persons of color.

One of the bill's purposes is to combat overpolicing in neighborhoods already targeted by law enforcement. The bill would limit access to and use of information obtained from FRS, as well as the use of FRS to identify an individual in the U.S. without a warrant, until Congress implements guidelines governing the technology's use. The bill also would prohibit state and local governments from using federal funds to purchase or use images acquired by FRS.

At the state level, Washington passed comprehensive legislation governing the use of FRS in early 2020. Under the new law, the following requirements will take effect on July 1, 2021:

  • State and local government agencies must submit accountability reports on FRS detailing the rate of false matches, data security measures, procedures for testing and feedback;

  • Any decisions that produce legal effects must be subject to meaningful review;

  • Service providers must make their application programming interfaces available for independent accuracy testing; and

  • Appropriate training must be provided for individuals who work with FRS technology.

Following passage of the Washington law, in February the California Legislature proposed the first bill in the U.S. to regulate the use of FRS in the private as well as the public sector. Proponents argued that the legislation would address civil liberties concerns by creating consent and accountability requirements.

The ACLU opposed the bill, stating that in contrast to protecting privacy, the bill "invite[d] tech companies and law enforcement to self-regulate their use of face recognition and place[d] no meaningful restriction on their ability to deploy the invasive technology against the people of California."

In late April, a diverse coalition of civil rights organizations contacted the bill's author to denounce it as undermining community safety. Opponents of the bill were successful, and on June 3, the bill was blocked and held in committee.

Other states are taking a measured approach to analyzing the issues surrounding FRS. The Ohio attorney general empaneled a task force to study the use of FRS, including issues of gender and racial bias. In late February, the task force released a report concluding that the attorney general should:

  • Limit the facial recognition database to trained professionals at the Bureau of Criminal Investigation;

  • Declare a moratorium on the use of live facial recognition;

  • Promulgate a specific standard for when law enforcement may utilize FRS and define the investigative purpose for its use; and

  • Follow guidance from the Facial Identification Scientific Working Group (FISWG) and NIST.

The absence of federal or state regulation of FRS has encouraged local lawmakers to step into the void. In May 2019, San Francisco became the first major city to ban police use of FRS, and other cities in California and Massachusetts have since banned the use of FRS by government entities, agencies and law enforcement. Portland, Oregon, considered a similar ban this year that would extend to private businesses as well.

Scrutiny Spurred by the Black Lives Matter Movement

As mass protests took place across the U.S. in response to the killing of George Floyd and a spate of other police-involved deaths of Black men and women, the national dialogue focused on racial inequities, and law enforcement's use of surveillance technologies swiftly became part of this discourse. The prevalence of FRS, combined with increasing societal awareness of and focus on policing practices, brought significant accuracy and privacy concerns into the spotlight.

To address these concerns, on June 8, Democratic lawmakers in the U.S. House of Representatives introduced the Justice in Policing Act of 2020, a comprehensive police reform bill. In addition to reforms such as banning no-knock warrants and racial profiling, the legislation would prohibit the real-time use of FRS on police body cameras and would mandate study of issues relating to the constitutional rights of individuals on whom FRS is used.

Senate Republicans responded by proposing their version of policing reform, the Just and Unifying Solutions To Invigorate Communities Everywhere, or JUSTICE, Act, on June 17, although that legislation did not specifically address FRS.

On June 18, Sen. Sherrod Brown, D-Ohio, introduced the Data Accountability and Transparency Act of 2020. In addition to focusing on the collection, use and sharing of individuals' personal information, the Brown bill includes a provision banning the use of technology for discriminatory purposes by making it an unlawful practice to use FRS and to collect, use or share personal data obtained from FRS.

Private Sector and Civil Liberties Community Response

The private sector, including leading FRS technology companies, has also weighed in on FRS. On June 8, International Business Machines Corp. released a statement in support of the proposed Justice in Policing Act. IBM indicated that it would no longer offer its general purpose FRS or analysis software, stating: "now is the time to begin a national dialogue on whether and how FRS should be employed by domestic law enforcement."

IBM noted that "vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement." A few days later, Amazon.com Inc. announced a one-year moratorium on police use of its facial recognition technology, advocating that "governments should put in place stronger regulations to govern the ethical use of facial recognition." Microsoft Corp. followed suit a few days later with a similar ban on sales to law enforcement.

Several civil liberties groups have raised constitutional objections to the secret mass surveillance aspects of FRS, as well as concerns that bias is built into the technology. These groups also have argued that law enforcement's ability to use FRS against certain segments of the community, such as individuals with criminal records or those who are undocumented, has a chilling effect on the public's cooperation with the police.

Increased publicity this year regarding some countries' use of FRS to monitor pro-democracy protests and crack down on ethnic minority groups also has caused alarm among civil rights groups, as well as state and federal legislators, at the prospect that such tactics might be used in the U.S.

Against the backdrop of dual national crises and the absence of federal FRS regulation, the debate over FRS and whether the technology should be used at all has gained urgency. In response to the civil unrest and protests, certain government and private sector entities have pressed pause on the future of FRS for now.

While the country engages in difficult conversations about race, discrimination and policing, significant privacy considerations will also factor into the national dialogue.

Correction: A previous version of this article misidentified the second author. The error has been corrected.



Laura Jehl is a partner at McDermott Will & Emery LLP and head of the firm's privacy and cybersecurity practice.

Kari Prochaska is an associate at the firm.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

For a reprint of this article, please contact reprints@law360.com.
