
The Double-Edged Sword of AI in HR

Artificial intelligence has become the go-to technology for employers looking to optimize employee recruitment and promotion processes, and for good reason.
AI tools automate repetitive and time-consuming HR tasks like screening thousands of resumes to select the best-fit job candidates, winnowing down the volume in seconds. Decisions involving job promotions are equally fast and arguably more objective, as AI focuses explicitly on data pertaining to employee skills and performance assessments, reducing the potential for human bias.
While the time and effort saved is worth the investment in AI, the downsides require careful consideration. AI algorithms may be trained on biased data, leading to discriminatory hiring and promotion outcomes, while overreliance on the technology diminishes the decision-making value of softer human skills like judgment, intuition, and empathy. Terrific candidates and employees may be overlooked for jobs and promotions.
Employers are well aware of the pros and cons, but their understanding may fall short regarding the potential of AI in HR to generate employment practices liability (EPL) claims. “Most lawsuits to date involve the creators of AI tools,” says Mary Anne Mullin, senior vice president and fiduciary and EPL product leader at QBE North America. “We’re anticipating claims to trickle down [to employers].”
Insurance brokers agree. “We haven’t seen a rise yet in AI-related employment discrimination claims, but it’s certainly possible down the road and is something that employers are thinking about as more of them start to use AI in an HR context,” says Talene Carter, senior director of employment practices liability for WTW’s North American practice. “The potential for bias has not gone away.”
The first AI-related discrimination lawsuit brought by the Equal Employment Opportunity Commission (EEOC) was settled in 2023 for $365,000. The case involved an online English tutoring service whose recruitment software was allegedly programmed to automatically reject female applicants over age 55 and male applicants over 60.
The EEOC in 2023 also issued new guidance on the use of AI to reduce the possibility of bias and discrimination. Among the employer guidelines is the so-called "four-fifths" rule in hiring and promotions: a selection rate for any group based on race, ethnicity, or gender that is less than 80% of the rate for the group with the highest selection rate may indicate disparate impact.
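The four-fifths comparison is simple arithmetic: compute each group's selection rate (selected divided by applicants), then flag any group whose rate falls below 80% of the highest group's rate. A minimal sketch, with invented group names and numbers purely for illustration:

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(groups):
    """groups: dict mapping group name -> (selected, applicants).

    Returns the groups whose selection rate is below 80% of the
    highest group's rate, which may indicate disparate impact
    under the EEOC's four-fifths guideline.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    highest = max(rates.values())
    return {g: r for g, r in rates.items() if r < 0.8 * highest}

# Hypothetical numbers: group_a selects 48 of 80 (60%),
# group_b selects 12 of 40 (30%). The 80% threshold of the
# highest rate is 0.48, so group_b's 30% rate is flagged.
flagged = four_fifths_check({"group_a": (48, 80), "group_b": (12, 40)})
print(flagged)
```

Such a check is only a screening heuristic, not a legal determination; the guidance treats a sub-80% ratio as a possible indicator of disparate impact, not proof of it.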
“The upshot of the EEOC’s guidance is that if an employer decides to use AI, it’s on you if there’s discrimination in the output. You can’t push it onto a third-party AI chatbot,” says Emily Loupee, area senior vice president in insurance broker Gallagher’s executive and financial risk practice.
Options for reducing the risk of bias and discrimination claims include using AI bias detection tools to ferret out discriminatory patterns in other tools, and regularly auditing AI training datasets to ensure they remain accurate and reflect changing demographics. Most important is to resist using so-called "black box" AI tools that lack transparency into their decisions, which increases the possibility of hidden biases.