The federal government is warning employers that artificial intelligence (AI) programs used to recruit and onboard new hires must comply with civil rights laws.
In October, the U.S. Equal Employment Opportunity Commission (EEOC) announced a new initiative to ensure that AI programs, which recruiters and HR departments use to streamline hiring, comply with federal anti-discrimination laws.
The intersection of AI and HR is a hot industry: a recent study from the Sage Group found that 24% of businesses have started using AI for talent acquisition, with 56% of managers planning to integrate AI over the next year.
In dollar terms, the market for AI is expected to reach $52 billion by 2024, an 80% increase from 2019.
Human resources departments use AI in a bevy of ways: from recruiting, selecting, and onboarding to training, performance management, and retention. And while AI can theoretically reduce decision-making bias and make recruiting a more efficient process, concerns regarding the actual fairness of these algorithms are well documented.
In 2015, machine-learning specialists at Amazon learned that an AI program they had developed to review job applicants’ resumes had “taught itself” that male candidates were preferable, and penalized resumes that included the word “women.”
And in 2019, the digital recruiting company HireVue became the subject of a complaint filed with the Federal Trade Commission (FTC) over its use of AI facial analysis during interviews. While the company claimed that its AI tools could measure the “cognitive abilities,” “psychological traits,” and “emotional intelligence” of job applicants, the complaint argued that these assessments were invasive, unproven, and prone to bias.
EEOC chair Charlotte Burrows said that while AI and algorithmic decision-making tools have great potential to improve our lives, they pose a real risk of perpetuating discriminatory hiring practices.
“The EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs,” Burrows said. “Bias in employment arising from the use of algorithms and AI falls squarely within the Commission’s priority to address systemic discrimination.”
The benefits of using AI for hiring
The most obvious benefit of using AI for hiring is its ability to quickly and thoroughly sort through massive numbers of resumes.
AI is immune to the mental fatigue that might strike a hiring manager faced with evaluating hundreds of applications, and automating these processes allows employers to spend their time and energy on other matters. One study found that recruiters lose an average of 14 hours manually completing tasks that could be automated, which could explain why 96% of recruiters believe AI could enhance their talent acquisition strategy.
AI programs can attract and seek out job applicants from all corners of the internet — and some programs can even predict if an employee is likely to leave their old job to take a new one. Above all else, AI speeds up the pace of the hiring process, which ultimately makes it less expensive.
The concerns about using AI for hiring
Regulators, legislators, and job applicants alike are increasingly concerned about the lack of transparency in how AI tools work, and many worry that the technology could cause more problems than it solves, particularly when it comes to perpetuating unconscious bias in the hiring process.
While algorithmic tools appear to be entirely evidence-based and immune to human partiality, there’s increasing evidence that this may not be the case.
Algorithms work by learning from past data, which means that the decisions made by AI programs may reflect and repeat past bias, as seen in the 2015 Amazon case study.
For example, if an employer has never hired a candidate from a historically Black college, an algorithm might learn to prefer candidates from other schools.
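To make that mechanism concrete, here is a minimal, hypothetical sketch in Python, using scikit-learn and invented data rather than any real vendor’s tool or dataset: a model trained on past hiring decisions that excluded one group of candidates learns to score an otherwise identical candidate from that group lower.

```python
# A minimal, hypothetical illustration (invented data, not any vendor's system)
# of how a screening model trained on biased hiring history reproduces that bias.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, attended_school_B]; label: 1 = hired, 0 = rejected.
# In this invented history, School B graduates were never hired, regardless of experience.
X = [
    [2, 0], [4, 0], [6, 0], [8, 0],   # School A candidates
    [2, 1], [4, 1], [6, 1], [8, 1],   # School B candidates
]
y = [0, 1, 1, 1,   # School A: hired once they had a few years of experience
     0, 0, 0, 0]   # School B: never hired

model = LogisticRegression().fit(X, y)

# Two new candidates with identical experience, differing only in which school they attended.
school_a_candidate = [[6, 0]]
school_b_candidate = [[6, 1]]
print("School A hire probability:", model.predict_proba(school_a_candidate)[0][1])
print("School B hire probability:", model.predict_proba(school_b_candidate)[0][1])
# The model scores the School B candidate lower solely because past decisions did.
```

The point of the sketch is not the particular model: any algorithm optimized to find candidates who “look like the people we hired before” will inherit whatever patterns, fair or unfair, shaped those earlier decisions.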
Moving towards transparency and fairness
In recent years, state leaders have taken the matter of policing AI into their own hands.
Illinois has a law regulating the use of AI in video interviews: companies must notify candidates that AI will be used to analyze their interviews, obtain consent from interviewees before the interview, and explain to them how the AI works. New Jersey, New York City, and Washington state have introduced similar legislation.
Now, the EEOC will “examine more closely” how technology is fundamentally changing the way employment decisions are made, and it will work to guide applicants, employees, employers, and technology vendors in ensuring that these technologies are used fairly, consistent with federal equal employment opportunity laws.
As part of the new initiative, the EEOC plans to:
- Establish an internal working group to coordinate the agency’s work on the initiative
- Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications
- Gather information about the adoption, design, and impact of hiring and other employment-related technologies
- Identify promising practices
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions
“While the technology may be evolving, anti-discrimination laws still apply. The EEOC will address workplace bias that violates federal civil rights laws regardless of the form it takes, and the agency is committed to helping employers understand how to benefit from these new technologies while also complying with employment laws,” Burrows said.
ABOUT THE AUTHOR
Lia Tabackman is a freelance journalist, copywriter, and social media strategist based in Richmond, Virginia. Her writing has appeared in the Washington Post, CBS 6 News, the Los Angeles Times, and Arlington Magazine, among others. She writes weekly nonprofit-specific content for 501c.com.
Image by Gerd Altmann from Pixabay.