Steps To Reduce Title VII Risks When Hiring With AI

By Kevin White and Daniel Butler

Law360 (July 21, 2020, 3:07 PM EDT) -- The use of artificial intelligence in hiring has been growing steadily for years, and the onset of COVID-19 will undoubtedly accelerate that growth. But employers must be mindful that reliance on AI raises unique issues under Title VII of the Civil Rights Act, as amended.

Automation in the workplace has expanded well beyond the factory floor. Businesses across the country increasingly rely on AI to source and hire candidates, supplanting functions previously performed by human resources professionals.

With the arrival of COVID-19, traditional candidate sourcing activities such as job fairs, college campus visits and networking events have yielded to more digital means, such as targeted job advertisements. And given the staggering number of reductions in force that COVID-19 brought about, employers will have to hire en masse as they recover financially.

AI tools could be well suited for this situation. They allow employers to gather and process large amounts of data much faster than humans, all the while observing social distancing to protect against the spread of COVID-19.

While instinct may suggest that illegal bias and prejudice are uniquely human and not an issue with AI, the reality is that computerized decision-making processes can also involve illegal discrimination — intentional or unintentional. If left unchecked, use of AI can lead to decisions that discriminate on the basis of race, color, national origin, sex and religion in violation of Title VII.

These risks are not theoretical. In October 2016, the U.S. Equal Employment Opportunity Commission, the agency charged with enforcing Title VII, convened a public hearing on the use of big data in the workplace. The panel of employment law, information technology and human resources management experts made it clear that employers should proceed cautiously with the use of AI. In addition, state legislatures, Illinois in particular, have enacted or are considering legislation aimed at regulating the use of AI.

Below we address in more detail some of the tools employers are using and the risks those tools can raise under Title VII. We conclude by listing several steps employers can take to mitigate those risks.

What common AI tools are available to employers?

Employers have numerous technological tools at their disposal for sourcing and hiring. Two of the most commonly used are automated candidate sourcing and intelligent resume screening.

Automated candidate sourcing technology is designed to identify candidates for employers and then encourage those candidates, through targeted advertising, to apply for specific jobs. The goal is to help employers identify the best pipelines to source talent.

This technology can be a win-win for employers and job seekers: job seekers receive advertisements for jobs that align with their interests and capabilities, and employers receive a more relevant pool of candidates.

Similarly, intelligent resume screening can save companies hundreds, if not thousands, of hours reviewing resumes of job applicants. This technology will search through volumes of resumes and identify the best candidates for open roles.

What are the risks under Title VII?

Understanding the risks with AI requires a brief refresher on disparate treatment and disparate impact discrimination under Title VII. Simply put, disparate treatment focuses on the employer's intent while disparate impact focuses on the effect of the employer's facially neutral practice.

To prevail on a disparate impact claim, the employee must prove that the employer's neutral practice has a disproportionately negative effect on employees in a protected class. The employer can in turn challenge the employee's analysis or establish that the practice is job-related and consistent with business necessity. Even if the employer succeeds on that showing, however, it may still lose the claim if the employee can show that the employer refuses to adopt an available alternative practice with less discriminatory effect.

It is less likely that an AI tool would be designed to intentionally discriminate — disparate treatment — against individuals based on their protected Title VII status. Rather, the more likely risk lies in the tool unintentionally discriminating — disparate impact. For example, one of the biggest hurdles with automated candidate sourcing technology is that it is only as unbiased as the training data upon which its algorithms rely.

Most algorithms are designed to learn and evolve over time, a concept sometimes referred to as machine learning. As a starting point in the process, data, such as resumes of a company's high-performing employees, is fed into the system.

The machine then searches for candidates who mimic the traits of the existing high-performing employees. Naturally then, if those employees are overwhelmingly a certain race or gender, the automated candidate sourcing technology could favor individuals with those characteristics. The technology may, therefore, reinforce existing institutional homogeneity if the employer is not careful.
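
The dynamic is easy to demonstrate. The Python sketch below is a deliberately simplified, hypothetical screener, not any vendor's actual algorithm; the feature names and the similarity scoring are illustrative assumptions. It scores candidates by how closely their resumes resemble those of past hires, so a proxy trait shared by a homogeneous training set inflates the scores of lookalike candidates:

    # Hypothetical lookalike screener, for illustration only. The feature
    # names and Jaccard-similarity scoring are assumptions, not any
    # vendor's actual method.

    def score(candidate, past_hires):
        """Average Jaccard similarity between a candidate's resume
        features and those of each prior high performer."""
        similarities = [
            len(candidate & hire) / len(candidate | hire)
            for hire in past_hires
        ]
        return sum(similarities) / len(similarities)

    # Training data: resumes of current high performers. Because the
    # workforce is homogeneous, a proxy trait appears in every example.
    past_hires = [
        {"python", "sql", "attended_mens_college"},
        {"java", "sql", "attended_mens_college"},
    ]

    lookalike = {"python", "sql", "attended_mens_college"}
    equally_qualified = {"python", "sql", "statistics"}

    print(score(lookalike, past_hires))          # 0.75
    print(score(equally_qualified, past_hires))  # 0.35

The equally skilled candidate scores lower solely because of the proxy trait, which is exactly the kind of unintended disparate impact described above.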

The same is true of intelligent resume screening. If the employer's workforce lacks diversity, and the tool is set to find applicants who are like employees in the homogeneous workforce, then the tool could perpetuate that lack of diversity.

That is to say, if the workforce is composed largely of one race or one gender, having the tool rely on past hiring decisions could negatively impact applicants of another gender or race. And given the ability of these tools to quickly assess large amounts of data, the number of individuals impacted in a short period could be significant.

What should employers do?

AI can be a powerful instrument for employers when hiring, and the Title VII risks do not mean employers should shy away from the technology. But they do have to proceed with caution and take steps to protect the company. We suggest employers consider the following steps.

First, employers considering an AI tool should learn as much as possible about how the tool works. This is easier said than done.

A significant challenge stems from the black-box nature of the tools and the algorithms upon which they operate. Vendors often do not want to disclose proprietary information relating to how their tools function and interpret data. Nevertheless, employers using such tools could be held liable for their results, so it is important for employers to understand how candidates are selected.

Second, employers should audit the AI tool. Before fully implementing the tool, employers should consider testing it to see if the results it yields have a negative impact on individuals in protected classes.

Along those same lines, employers should ask the vendor what type of testing has been done on the tool to mitigate adverse impact. Further, employers should ensure that the input or training data upon which the tool relies (e.g., resumes of model employees) does not itself reflect bias. If the training data reflects a diverse workforce, a properly functioning algorithm should, in theory, replicate that diversity.
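
One widely used benchmark for that testing is the four-fifths rule from the EEOC's Uniform Guidelines on Employee Selection Procedures, 29 C.F.R. § 1607.4(D): a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. The minimal Python sketch below applies that rule; it assumes the employer can export the tool's pass/fail counts by group, and the group labels and counts are hypothetical:

    # Minimal four-fifths rule check. Assumes selection counts can be
    # exported from the screening tool by group; the group names and
    # numbers below are hypothetical.

    def four_fifths_check(results):
        """results maps group -> (selected_count, applicant_count).
        Returns each group's selection rate and whether that rate is
        at least 80% of the highest group's rate."""
        rates = {g: sel / total for g, (sel, total) in results.items()}
        highest = max(rates.values())
        return {g: (r, r / highest >= 0.8) for g, r in rates.items()}

    audit = four_fifths_check({
        "group_a": (48, 100),  # 48% selection rate
        "group_b": (30, 100),  # 30% selection rate; 0.30/0.48 = 62.5%
    })
    for group, (rate, clears) in audit.items():
        status = "ok" if clears else "possible adverse impact"
        print(f"{group}: {rate:.0%} ({status})")

Keep in mind that the four-fifths rule is a rule of thumb, not a safe harbor; smaller disparities that are statistically significant can still support a claim, so employers with large applicant pools often pair this check with significance testing.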

Third, employers should understand as best they can where the data resides and how long it is retained. To defend against a Title VII disparate impact claim, an employer will need access to data. If such a claim is filed, it may become necessary to preserve data beyond the normal retention period.

Moreover, some tools may not retain all data used in the selection process, making it difficult for employers to explain how one candidate was selected over another. Thus, we recommend delineating clearly in the vendor contract the circumstances under which the employer may access data and those under which data is retained.

Fourth, ensure the tool is job-related and consistent with business necessity. In particular, ensure that the tool's underlying algorithm focuses on skills and traits that relate to the open position. Keep in mind that the workplace has undergone a radical transformation since COVID-19, and a trait or skill that was important in the past may not be necessary now.

Fifth, employers interested in using AI in hiring must stay abreast of developments in the law. The EEOC has demonstrated its interest in AI, as have state regulators.

AI is a powerful tool for employers. If implemented carefully and with adequate resources, these tools can save employers countless hours and produce a more robust workforce. If, however, employers rush to implement such tools, they may find themselves embroiled in expensive administrative proceedings or litigation involving a large class of applicants alleging disparate impact discrimination.



Kevin White is a partner and co-chair of the labor and employment team at Hunton Andrews Kurth LLP.

Daniel Butler is an associate at the firm.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
