Adding to the growing commentary on artificial intelligence (AI) in the employment context, the Equal Employment Opportunity Commission (EEOC) recently issued a technical assistance document.
Titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” the document focuses on assessing whether an employer’s “selection procedures” (the procedures it uses to make employment decisions such as hiring, promotion, and firing) have a disproportionately large negative effect on a basis prohibited by Title VII, often referred to as “disparate impact” or “adverse impact.”
In 1978, the EEOC adopted the Uniform Guidelines on Employee Selection Procedures (Guidelines), which provide employers with a road map for how to determine if their tests and selection procedures are lawful for purposes of Title VII disparate impact analysis.
The Guidelines would apply to algorithmic decision-making tools when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions with respect to applicants or current employees, the agency said.
Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is “substantially” lower than the selection rate for individuals in another group.
The Guidelines reference the “four-fifths rule,” a general rule of thumb for determining whether the selection rate for one group is “substantially” different from the selection rate for another. Under the rule, one rate is substantially different from another if the ratio of the lower rate to the higher rate is less than four-fifths, or 80 percent.
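The four-fifths comparison described above is simple arithmetic: compute each group’s selection rate, then compare the ratio of the lower rate to the higher rate against 0.8. A minimal sketch follows; the function name and the example numbers are illustrative only and do not come from the EEOC document.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare two groups' selection rates under the four-fifths rule of thumb.

    Returns the ratio of the lower selection rate to the higher one, and
    whether that ratio falls below 0.8 (the rule's threshold for a
    "substantially" different rate).
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    lower, higher = sorted([rate_a, rate_b])
    ratio = lower / higher
    return ratio, ratio < 0.8

# Hypothetical example: 48 of 80 applicants selected in one group,
# 12 of 40 in another (rates of 0.60 and 0.30; ratio 0.50).
ratio, flagged = four_fifths_check(48, 80, 12, 40)
print(round(ratio, 2), flagged)  # 0.5 True
```

As the agency cautions in the document, satisfying this rule of thumb is not the same as complying with Title VII; courts often look instead to measures such as statistical significance.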
However, the EEOC cautioned that compliance with the four-fifths rule in the context of AI may not equate to compliance with Title VII.
“[T]he EEOC might not consider compliance with the rule sufficient to show that a particular selection procedure is lawful under Title VII when the procedure is challenged in a charge of discrimination,” according to the technical assistance.
And because employers can be responsible under Title VII for the use of algorithmic decision-making tools even if the tools are designed or administered by a third party, employers considering working with a vendor may want to ask specifically whether it relied on the four-fifths rule or on a standard such as statistical significance that is often used by courts, the agency suggested.
If an employer discovers that the use of an AI decision-making tool would have an adverse impact, the EEOC urged the employer to take steps to reduce the impact, or to select a different tool, in order to avoid engaging in a practice that violates Title VII.
“One advantage of algorithmic decision-making tools is that the process of developing the tool may itself produce a variety of comparably effective alternative algorithms,” the EEOC wrote. “Failure to adopt a less discriminatory algorithm that was considered during the development process therefore may give rise to liability.”
The agency also encouraged employers to conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on a basis prohibited under Title VII or treat protected groups differently.
Why it matters
Employers should use caution when considering the use of AI for employment decisions, as multiple federal authorities have raised concerns about AI and employment decision-making. In addition to the EEOC’s technical assistance document, which focuses on the potential for liability under Title VII based on an employer’s selection procedures, the EEOC and the Department of Justice (DOJ) released guidance on avoiding running afoul of the Americans with Disabilities Act, and the EEOC, the DOJ, the Federal Trade Commission and the Consumer Financial Protection Bureau issued a joint statement expressing concern about the potential for unlawful discrimination by AI systems.