The AI boom has transformed everything from music production to assembly lines to computer animation. One of the most significant and controversial new applications of AI is in job applications and hiring. Even before the release of popular AI tools like ChatGPT, hiring was highly automated: many companies had adopted increasingly sophisticated intake tools, language analysis algorithms, and other systems to move candidates through the process and identify the best new hires. These tools have raised a wide range of concerns, including allegations of illegal bias, that point to a broader problem with automation, one that newer forms of AI have not solved.
As awareness of AI has grown, government officials at every level have started to take notice. The reasoning is simple: although these technologies promise to be transformative, it isn't difficult to see how they could be applied in ways that cause harm. Even industry leaders have issued grave warnings about the dangers of AI misuse and the importance of legislative and technological guardrails. This scrutiny is starting to have an effect, and governments are sure to watch the technology closely as it becomes more deeply embedded in everyday business. Here are some of the ways AI is affecting hiring, and what governments are doing about it.
Automation had issues long before AI
It's important to keep in mind that AI is not "new" in the sense of being a wholly unfamiliar technology; it's better understood as a much more sophisticated and faster-moving version of tools we have already been using. You can see this in the recent history of machine learning tools built to optimize the hiring process. Nearly a decade ago, large technology companies such as Amazon were building machine learning tools intended to dramatically speed up hiring. However, after years of running these tools, Amazon discovered that the results showed gender bias, which led the company to shelve the program.
These tools are supposed to work by rapidly ingesting information and finding patterns, enabling them to make decisions at a speed and scale humans cannot match. However, if the underlying data carries certain characteristics or biases, the decisions will reflect them, because the machine has no independent ability to reason those biases out. In Amazon's case, because the tech industry has a long-standing pattern of favoring male job candidates, the machine learning tools "learned" from the data they were fed that male candidates were the kind the company preferred to hire.
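To make this concrete, here is a minimal sketch in Python of how a model trained on biased historical hiring decisions reproduces that bias in its own scoring. The data is entirely synthetic and the feature names are hypothetical; this is an illustration of the mechanism, not any company's actual system.

```python
# Illustrative sketch: a model trained on biased historical hiring data
# reproduces that bias. All data is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 5000

# Two candidate features: years of experience, and gender (1 = male, 0 = female).
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n)

# Biased historical labels: past recruiters rewarded experience, but also
# systematically favored male candidates, independent of qualifications.
hired = ((0.5 * experience + 2.0 * gender + rng.normal(0, 1, n)) > 3.5).astype(int)

model = LogisticRegression().fit(np.column_stack([experience, gender]), hired)

# Score two identical candidates who differ only in gender.
candidates = np.array([[5.0, 1], [5.0, 0]])
probs = model.predict_proba(candidates)[:, 1]
print(f"Predicted hire probability, male candidate:   {probs[0]:.2f}")
print(f"Predicted hire probability, female candidate: {probs[1]:.2f}")
# The model "learns" the historical preference: same qualifications,
# noticeably different scores.
```

The point of the sketch is that nothing in the code tells the model to discriminate; the skew comes entirely from the historical decisions it was trained to imitate.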
There were other issues as well, including mass rejections of candidates over factors such as a gap in a resume or the absence of highly specific wording in a job title. These problems bred broad frustration among job seekers, many of whom concluded that automated screening was neither trustworthy nor effective and said they preferred human interaction.
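As a toy illustration of that second failure mode, a rigid keyword filter can reject a well-qualified candidate simply because a job title is worded differently. The required title and resumes below are invented for the example.

```python
# Toy illustration of rigid keyword screening. The required title and the
# resumes are invented for this example.
REQUIRED_TITLE = "software development engineer"

resumes = {
    "Candidate A": "Software Development Engineer at Acme, 2018-2023",
    "Candidate B": "Software Engineer at Initech, 2015-2023",  # equivalent role, different wording
}

for name, work_history in resumes.items():
    status = "advance" if REQUIRED_TITLE in work_history.lower() else "reject"
    print(f"{name}: {status}")
# Candidate B is rejected despite comparable experience, purely because the
# job title doesn't match the expected wording.
```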
AI has the potential to lead to more of the same
Because modern AI tools operate in a similar way, many worry that the wide-scale adoption of AI could lead to major problems with bias, discrimination, and more. As a result, legislators are starting to follow the technology closely. Earlier this year, the EU, which has been highly proactive about technology legislation over the last decade, advanced the EU AI Act, which promises to address the safety, transparency, fairness, and environmental impact of AI. It also creates a set of classifications for AI data collection and processing, including generative AI (GAI) tools such as ChatGPT.
More recently, the New York City Department of Consumer and Worker Protection adopted new rules specifically aimed at bias in automated employment decision tools (AEDTs). These rules require organizations in New York City that use such tools to undergo third-party audits designed to uncover any bias in the job search and hiring process.
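While the rules themselves are legal requirements rather than code, the core measurement behind a bias audit of this kind is straightforward. The sketch below, using made-up numbers, shows an impact-ratio calculation: each group's selection rate compared against the rate of the most-selected group.

```python
# Simplified sketch of the impact-ratio math behind a bias audit: each group's
# selection rate divided by the rate of the most-selected group.
# The numbers below are made up for illustration.
selections = {           # group: (candidates selected, candidates assessed)
    "Group A": (120, 400),
    "Group B": (60, 300),
    "Group C": (45, 250),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
# Impact ratios well below 1.0 flag groups the tool selects far less often,
# which is exactly the kind of disparity an audit is meant to surface.
```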
This is likely the first in what many predict will be a legislative barrage aimed specifically at AI. In April 2023, several bills were introduced in California that would have created working groups and mandated analysis of how AI is used in the public and private sectors. Additionally, AB 331 was proposed to allow individuals who experienced bias as a result of AI-driven hiring to sue the organization responsible. Although it did not pass, similar laws are already in place in Illinois, Maryland, and Washington, DC.
Although no federal law currently governs AI in hiring, or AI in general, there are many indications of bipartisan interest in better understanding AI and creating a regulatory framework for its use. Congressional hearings and press releases from the Federal Trade Commission (FTC) and other federal agencies show both a desire to foster AI innovation and an awareness of the risks it poses. In 2022, the White House Office of Science and Technology Policy (OSTP) released a Blueprint for an AI Bill of Rights, a non-binding set of principles for ethical and legal AI use.
Stay on top of new technologies without undue risk
The 501(c) Services team consists of career-long HR and unemployment professionals with decades of experience in the nonprofit world. Our mission is to help you achieve your organization’s goals by optimizing your HR processes and uncovering new resources you can use. If you’d like to learn more about our team, how we work, and how we can help you, please get in touch.
ABOUT US
501(c) Services has more than 40 years of experience helping nonprofits with unemployment outsourcing, reimbursing, and HR services. Two of our most popular programs are the 501(c) Agencies Trust and 501(c) HR Services. We understand the importance of compliance and accuracy, and we are committed to providing our clients with customized plans that fit their needs.
Contact us today to see if your organization could benefit from our services.
Already working with us and need assistance with an HR or unemployment issue? Contact us here.
The information contained in this article is not a substitute for legal advice or counsel and has been pulled from multiple sources.