Candidate screening and hiring are two of the many business processes that are rapidly incorporating artificial intelligence (AI) tools. On a very basic level, this makes a lot of sense: Job seekers typically have to apply to between 20 and 80 jobs to get an offer somewhere, and the average job listing receives 250 applications, which means a huge volume of applications, cover letters, and resumes. Automation tools have become an indispensable time-saver for Human Resources (HR) teams because they streamline the screening process and help identify the candidates best suited for a given role.
A major part of meeting your goals is ensuring that your organization’s resources are used for the best possible outcomes, and that includes hiring. Freeing recruiting professionals from a very time-consuming screening process so they can focus on their more in-depth responsibilities is one way to allocate resources more efficiently. However, it is also critical that you understand and assess the work these AI tools are doing to ensure they are not replicating discriminatory or ineffective recruitment practices.
What is recruitment AI?
To be clear, automation has been part of the hiring process for a lot longer than the recent chat-based AI boom. Tools that remove candidates who do not provide adequate documentation of experience, that schedule interviews, and that pull public data about candidates are not new — and not without their own problems. Because the internet has made it easier to apply to large numbers of jobs quickly, recruiters have turned to automation to assist them.
The form of AI that is seeing unprecedented levels of investment is highly sophisticated and can do more than simply pass or fail candidates. The proliferation of AI language modeling has led to chat programs that can “interview” candidates, generate assessments, rank the candidates, and remove any who don’t meet a certain standard. These tools are trained on huge datasets, in effect “learning” from past practices and taking a time-consuming step out of recruiters’ hands.
Whatever combination of automation and chat-based recruitment tools you use, it’s critical that you understand how these tools work and what data they were trained on, and that you continually assess their performance. Here’s how to avoid bias and discrimination from AI recruiting tools:
Adopt carefully
If the promises of its creators are to be believed, chat-based AI is a transformative technology that will upend “business as usual.” However, it should receive the same scrutiny as any other new technology. Although the time- and cost-saving potential of these tools might be attractive, it’s a good idea to gather information from stakeholders about exactly what they need and don’t need from a new recruiting tool.
Rather than trying to automate every facet of your hiring process, you can incorporate these tools slowly and only in places where they support, rather than replace, your team members’ work. Becoming overly dependent on this new technology without understanding how it works or whether it meets your standards could end up costing you.
As with any new technology, AI is bound to have plenty of issues, and it remains unclear how effective it will be over the long term. Taking your time enables you to weigh the technology holistically, rather than solely in terms of its immediate benefits.
Understand what the AI tools are doing
As mentioned above, AI and automation tools “learn” from large datasets of hiring data and then make decisions based on that information. However, this makes them as susceptible to problematic or illegal hiring practices, biases, and outcomes as a person who learns from that data.
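To see why, consider a toy illustration. This is a hypothetical sketch, not any vendor’s actual algorithm: a screener that “learns” term weights from past hires will happily reward details that have nothing to do with the job.

```python
from collections import Counter

# Toy illustration only: a screener that "learns" term weights from
# past hires. The resumes below are hypothetical placeholder data.
past_hires = [
    "java engineer rugby club captain",
    "java developer rugby enthusiast",
    "python engineer",
]
term_weights = Counter(term for resume in past_hires for term in resume.split())

def score(resume: str) -> int:
    # Sum how often each of the candidate's terms appeared in past hires.
    return sum(term_weights[term] for term in resume.split())

# An irrelevant hobby that happened to correlate with past hires now
# gives one candidate an edge over an equally qualified applicant.
print(score("java engineer rugby"))  # 6
print(score("java engineer chess"))  # 4
```

The same dynamic plays out at scale in real tools: whatever correlated with past hiring decisions, relevant or not, can become part of the model’s definition of a “good” candidate.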
AI tools can rapidly gather public internet data about candidates to create “profiles,” which means the assessment can extend well beyond the candidates’ qualifications. Although many AI evangelists claim these tools eliminate bias, research has shown the opposite. This means you must get an in-depth understanding of what the AI is doing when it assesses a candidate.
Seemingly innocuous practices such as analyzing a candidate’s photograph on a recruitment site like LinkedIn can lead to gender, racial, or religious bias. Moreover, some recruitment tools gather data from social media and other nonprofessional sources and score candidates accordingly.
Understanding what data is being used to assess candidates is essential because it allows you to iterate on and improve the technology to reduce biased outcomes. You can measure your efforts against the White House Office of Science and Technology Policy’s recent Blueprint for an AI Bill of Rights, which provides a set of principles and rights that should be afforded to anyone using or being assessed by AI.
Provide transparency internally and externally
If you do start to incorporate AI into your hiring process, it’s also essential to work with the AI vendor and your recruitment team to gather data about the performance of the tools in question. This means building a feedback system that asks why certain candidates are turned away, why others advance, and whether the system delivers a range of candidates who reflect your values and are being assessed only on their merits as employees. You can also present your findings to the recruiting team or the organization as a whole to foster a transparent understanding of the process and elicit feedback for better performance.
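One concrete starting point for that feedback system is an adverse-impact check based on the EEOC’s “four-fifths” guideline. The sketch below assumes you can export per-group screening counts from your recruiting tool; the group labels and numbers are hypothetical placeholders.

```python
# A minimal adverse-impact check, assuming you can export per-group
# counts of (candidates advanced, candidates screened) from your tool.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the best-performing group's.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is a
    common signal that the screening step deserves a closer look.
    """
    rates = {group: advanced / screened
             for group, (advanced, screened) in outcomes.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening results: (advanced, screened) per group.
screened = {
    "group_a": (48, 120),  # 40% advance
    "group_b": (22, 110),  # 20% advance
}

for group, ratio in impact_ratios(screened).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A check like this won’t catch every problem, but running it regularly and sharing the results with your team gives the transparency described above a measurable footing.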
Externally, letting candidates know that they are participating in an automated process and giving them information about the tools is not only a great way to foster trust but also gives them insight into what personal information is and isn’t being considered in their application. They will also be more willing to participate enthusiastically in an AI interview or assessment if you explain why you are using it.
501 members and subscribers have unlimited access to HR Services. Contact us anytime regarding this subject or any other HR challenge you may be facing.
Need HR help for a low monthly fee? Contact us today.
The information contained in this article is not a substitute for legal advice or counsel and has been pulled from multiple sources.