In this first post in a series about the use of AI and algorithmic decision-making technologies in the hiring process, we discuss the risks involved when such technologies "screen out" qualified candidates with disabilities in violation of the Americans with Disabilities Act (ADA) and the Maryland Human Relations Law (MHRL).
Every business owner or hiring manager knows how intimidating and overwhelming it can be to take a deluge of resumes, cover letters, and job applications and reduce it to a trickle of qualified candidates worthy of further consideration. Given the time, effort, and resources involved in screening applicants, employers may welcome any tools that can streamline the process and efficiently deliver a manageable pool of candidates worth a second look. Often, those tools take the form of artificial intelligence (AI), software, and algorithms specifically designed to identify applicants most suitable for a position.
The widespread use of AI and other technology by employers and third-party staffing services has unquestionably increased the efficiency of the hiring process. However, it also raises serious concerns that these tools can produce discriminatory outcomes, excluding qualified candidates through algorithmic decision-making that incorporates and reinforces prejudicial biases.
For years, the Equal Employment Opportunity Commission (EEOC) has been closely scrutinizing the use of algorithmic decision-making tools in the hiring process for precisely that reason. In May 2022, the EEOC issued extensive guidance and lengthy Q&As discussing the discriminatory risks associated with the use of hiring technologies, stating, "Even where an employer does not mean to discriminate, its use of a hiring technology may still lead to unlawful discrimination," and "An employer who chooses to use a hiring technology must ensure that its use does not cause unlawful discrimination on the basis of disability."
How AI and Other Technologies Are Used in the Hiring Process
Employers and staffing agencies may use AI and other algorithmic decision-making technology at multiple points throughout the hiring process, from initial screening through final evaluation. As the EEOC notes, this can include:
- Resume scanners that prioritize candidates who use specific keywords.
- "Virtual assistants" or "chatbots" that ask candidates about their qualifications and reject those who do not meet pre-defined requirements.
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
- Testing software that provides "job fit" scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived "cultural fit" based on their performance on a game or a more traditional test.
Hiring Technology Can Exclude Qualified Candidates With Disabilities
As it relates to employment, one of the fundamental mandates of the ADA is to ensure that qualified candidates with disabilities who can perform a job's essential functions are afforded the same opportunity to obtain the position as other applicants. This includes making "reasonable accommodations" for disabled candidates during the hiring process. A reasonable accommodation, in short, is a change in how things are done that gives a job applicant or employee with a disability an equal opportunity to apply for or perform a job, so long as the change does not impose an undue hardship on the employer.
One of the problems with algorithm-based decision-making tools is that they often "screen out" candidates with disabilities, taking them out of the running before they can request, or the employer can provide, a reasonable accommodation that would put them on an even footing in whatever assessments, tests, or evaluations are being used.
According to the EEOC,
"'Screen out' occurs when a disability prevents a job applicant or employee from meeting—or lowers their performance on—a selection criterion, and the applicant or employee loses a job opportunity as a result. A disability could have this effect by, for example, reducing the accuracy of the assessment, creating special circumstances that have not been taken into account, or preventing the individual from participating in the assessment altogether."
Under the ADA and MHRL, such screening out is unlawful if the individual who is screened out could perform the job's essential functions with a reasonable accommodation, where one is legally required.
Examples of Unlawful "Screening Out" Candidates With Disabilities
The EEOC identified several examples of how AI can unlawfully screen out qualified candidates with disabilities:
- A chatbot that communicates with candidates through texts and emails might be programmed with a simple algorithm that rejects all applicants who, during the course of their "conversation" with the chatbot, indicate they have significant gaps in their employment history. If a particular applicant had a gap in employment, and if the gap was caused by a disability (for example, if the individual needed to stop working to undergo treatment), then the chatbot may function to screen out that person because of the disability. (A simplified sketch of how such a rule operates appears after these examples.)
- A person's disability prevents the algorithmic decision-making tool from measuring what it is intended to measure. For example, video interviewing software that analyzes applicants' speech patterns to reach conclusions about their ability to solve problems is not likely to score an applicant fairly if the applicant has a speech impediment that causes significant differences in speech patterns. If such an applicant is rejected because the applicant's speech impediment resulted in a low or unacceptable rating, the software may effectively screen out the applicant because of the speech impediment.
- An employer uses "gamified" tests to measure abilities, personality traits, and other qualities to assess applicants and employees. If a business requires a 90 percent score on a gamified assessment of memory, an applicant who is blind and, therefore, cannot play these particular games would not be able to score 90 percent on the assessment and would be rejected. However, the applicant still might have an excellent memory and be perfectly able to perform the essential functions of a job that requires a good memory.
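To make the mechanics of the first example concrete, here is a purely hypothetical sketch in Python of a gap-based screening rule. The data fields, threshold, and function names (`Applicant`, `employment_gap_months`, `MAX_GAP_MONTHS`, `naive_screen`) are invented for illustration and are not drawn from any actual vendor's product or from the EEOC guidance. The point is only that a rule keyed to the length of an employment gap rejects a disability-related gap exactly as it rejects any other, with no step for requesting an accommodation.

```python
from dataclasses import dataclass

# Hypothetical applicant record; the field names are invented for illustration.
@dataclass
class Applicant:
    name: str
    employment_gap_months: int  # longest gap the candidate reported to the chatbot

# A naive screening rule of the kind the EEOC describes: reject anyone whose
# employment gap exceeds a fixed threshold, with no step that asks why the gap
# occurred or offers a reasonable accommodation.
MAX_GAP_MONTHS = 6  # arbitrary threshold, chosen only for this example

def naive_screen(applicant: Applicant) -> bool:
    """Return True if the applicant advances; False if screened out."""
    return applicant.employment_gap_months <= MAX_GAP_MONTHS

# An applicant who stopped working for 12 months to undergo treatment for a
# disability is rejected by the same rule as anyone else with a long gap, even
# though the gap says nothing about their ability to perform the job.
candidate = Applicant(name="A. Example", employment_gap_months=12)
print(naive_screen(candidate))  # False -- screened out before any human review
```

As the sketch shows, the rule never learns the reason for the gap; the rejection happens automatically, which is precisely why the EEOC treats such designs as a screening-out risk.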
Employers' Liability for Discriminatory Third-Party Software and Technology
The AI, software, and other technologies employers use in the hiring process are most often provided by third-party developers and vendors. Rare is the employer who has the knowledge or time to examine the technology they've bought to ensure the vendor developed it to prevent biased or discriminatory outcomes for candidates with disabilities.
However, the fact that the unlawful discrimination was the product of an algorithm the employer played no role in creating will not spare the employer from liability. The EEOC has made it clear that "if an employer administers a pre-employment test, it may be responsible for ADA discrimination if the test discriminates against individuals with disabilities, even if the test was developed by an outside vendor."
Similarly, an employer can be liable for a third-party vendor's failure to provide reasonable accommodation for a disabled applicant when administering and scoring a pre-employment test. According to the EEOC, "if an applicant were to tell the vendor that a medical condition was making it difficult to take the test (which qualifies as a request for reasonable accommodation), and the vendor did not provide an accommodation required under the ADA, the employer likely would be responsible even if it was unaware that the applicant reported a problem to the vendor."
How Employers Can Minimize "Screening Out" Risks in Hiring Technology
Given the foregoing risks, employers should carefully evaluate the hiring technology they develop or purchase to ensure that it does not unlawfully screen out candidates with disabilities and that it informs candidates of their right to request an accommodation and the procedure for doing so. For employers considering AI and other tools developed by a third party, the EEOC suggests asking the vendor questions designed to aid in that evaluation, including the following:
- If the tool requires applicants or employees to engage with a user interface, did you make the interface accessible to as many individuals with disabilities as possible?
- Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there any disabilities for which you cannot provide accessible formats? If so, the employer may have to provide them itself (absent undue hardship).
- Did you attempt to determine whether the use of the algorithm disadvantages individuals with disabilities? For example, did you determine whether any of the traits or characteristics measured by the tool are correlated with certain disabilities?
In our next post, we will examine in more detail how an employer's obligation to provide reasonable accommodation to disabled job candidates interacts with the use of AI and other algorithm-based hiring technologies and the steps employers can take to ensure their practices comply with the ADA and MHRL. If you have questions about the ADA or MHRL implications arising from the use of AI in hiring, please contact Melissa Jones at Tydings.