How Bad Is Algorithmic Bias in AI?
Trained on vast amounts of data that reflect historical and societal prejudices, artificial intelligence (AI) systems can reproduce and amplify those biases at a scale and speed far beyond human decision-making.
Outcomes that result in male-dominated hiring, lower healthcare diagnosis accuracy rates for patients of color, higher mortgage rates for minority borrowers, and wrongful arrests of Black males present considerable operational, financial, and reputational risks for businesses.
An oft-cited example of such risks is Huskey v. State Farm Fire & Casualty Company, an active federal class action lawsuit alleging that an AI algorithm the insurer used to flag fraudulent claims disproportionately targeted and delayed payments to Black policyholders.
To understand how pervasive and dangerous algorithmic bias is in AI systems, Leader’s Edge spoke with Shea Holman Kilian, an attorney and policy advocate specializing in gender equity, civil rights, and workplace harassment reform. Holman Kilian is an assistant professor of legal studies at George Mason University, where she serves as associate faculty director of the Schar School of Policy and Government’s Gender and Policy Center advisory board.
The following Q&A has been edited for clarity and concision.