How Bad Is Algorithmic Bias in AI?
  
    Q&A with Shea Holman Kilian, Assistant Professor of Legal Studies, George Mason University  
  
By Russ Banham | Posted on October 31, 2025
   
Trained on vast amounts of data that reflect and amplify historical and societal prejudices, artificial intelligence (AI) systems can produce biased outcomes at a far greater volume than human decision-makers can.
  
Outcomes such as male-dominated hiring, lower diagnostic accuracy in healthcare for patients of color, higher mortgage rates for minority borrowers, and wrongful arrests of Black men present considerable operational, financial, and reputational risks for businesses.
An oft-cited example of such risks is Huskey v. State Farm Fire & Casualty Company, an active federal class action lawsuit alleging that an AI algorithm the insurer used to flag fraudulent claims disproportionately targeted Black policyholders and delayed their payments.
To understand how pervasive and dangerous algorithmic bias is in AI systems, Leader’s Edge spoke with Shea Holman Kilian, an attorney and policy advocate specializing in gender equity, civil rights, and workplace harassment reform. Holman Kilian is an assistant professor of legal studies at George Mason University, where she serves as associate faculty director of the Schar School of Policy and Government’s Gender and Policy Center advisory board.
The following Q&A has been edited for clarity and concision.
    
Q: Almost all Fortune 500 companies reportedly use automated AI systems, with many relying on the technology to screen job applicants. Does that give you pause, given the possibility of discriminatory outputs?

A: It’s staggering to me that these platforms are being used for hiring purposes, in many cases without human oversight to check for possible bias. In a University of Washington study of three LLMs (large language models) used to rank résumés, the researchers found significant racial and gender bias. In ranking the résumés, the LLMs favored white-associated names 85% of the time and female-associated names 11% of the time, and they never favored Black-associated names over white male-associated names.
       
Q: By “female-associated” and “white-associated,” do you mean the AI system infers that the person’s name is female or white?

A: That’s correct. A similar inference occurs in AI platforms that use automated facial recognition in an employment context. The shape of the person’s face, hair, makeup, legal name, and other characteristics are put into a simplistic binary to infer either male or female gender, eliminating the opportunity for the person to self-identify. It’s a form of erasure for people who identify as trans or non-binary.
       
Q: Are the systems skewed toward traditional notions of male and female, as opposed to more nuanced views of people in general?

A: Let me answer that by putting some context around it. Not only does the training data [for AI platforms] perpetuate long-standing biases, but the voice-powered AI assistants and robots we use are also feminized or masculinized. Assistants with made-up names like Siri, Cortana, and Alexa are given feminized traits and voices because we perceive women to be supportive and helpful. By contrast, AI robots that provide aid in dangerous situations or are built for hands-on, tactical roles are given the names of male gods like Hermes and Atlas. We’re building AI systems to look and act how we think a woman or man should look and act.
       
Q: So the helpfulness and altruism stereotypically associated with feminine traits, and the leadership and authority we associate with masculine traits, are embedded in the AI tools businesses use?

A: Yes, they contribute to what we call the “tightrope effect”: the pressure on people in the workplace to conform to a narrow range of behaviors.
       
Q: What needs to be done to root out AI-fueled biases in hiring and employment?

A: While I believe AI is a great tool for automating mundane, repetitive processes to save time and increase efficiency, the challenge is the training data. Humans create these platforms, and the platforms learn from those humans. Ideally, I would like to see legislation in the U.S. like the EU AI Act, which imposes strict requirements on high-risk AI systems used in employment, law enforcement, and credit scoring. Developers are required to document their procedures for detecting, mitigating, and preventing AI bias.
       
Q: The challenge in the United States, of course, is the lack of an overarching AI law, which has resulted in a patchwork of state regulations.

A: Nevertheless, employers still have to follow federal laws prohibiting employment discrimination. And if the AI remains skewed toward predominantly white men, we lose the value of diverse workplaces and teams.