
Racism and Gender Bias in Machine Learning: Fixes Not Always Easy


Racism and gender bias can easily and inadvertently infect machine learning algorithms. One prime example involved a system that recommended which job applicants were most likely to be hired. Bias is inherent in any decision-making system that involves humans, and models trained on human decisions inherit it.

The algorithm learned strictly from the candidates that hiring managers at companies had picked, basing its recommendations on the resumes and CVs of past hires. Later testing revealed that applicants with black-sounding names were less likely to be recommended by the system, even when they had the same or similar credentials.
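To see how that happens mechanically, the sketch below is a hypothetical toy example, using synthetic data and scikit-learn rather than the actual hiring system, that trains a classifier only on past hiring decisions and then scores two applicants with identical credentials:

```python
# A minimal, hypothetical sketch (not the real system) of how a model trained
# on biased historical hiring decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: identical credential distribution in both groups.
credentials = rng.normal(size=n)                  # standardized credential score
black_sounding_name = rng.integers(0, 2, size=n)  # 1 = group past managers discriminated against

# Historical labels: managers weighted credentials but also penalized one group.
logits = 1.5 * credentials - 1.0 * black_sounding_name
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

# The model learns strictly from whom the hiring managers picked.
model = LogisticRegression().fit(
    np.column_stack([credentials, black_sounding_name]), hired
)

# Two applicants with identical credentials, differing only in the name signal.
same_credentials = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_credentials)[:, 1])  # the second applicant scores lower
```

Because the historical labels already encode the managers' bias, the model reproduces it even though no one explicitly told it to discriminate.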

Chris DeBrusk pointed out in a 2018 MIT Sloan Management Review article that, “As a best practice, managers must always keep in mind that if humans are involved in decisions, bias always exists — and the smaller the group, the greater the chance that the bias is not overridden by others.”

Many organizations go so far as to hire outside experts to examine and help fix such bias issues.

Some bias issues are quickly apparent, such as older Google face recognition algorithms not recognizing non-white and female faces as reliably as white male faces. It turned out that the majority of the data used to train the algorithms consisted of pictures of white men.
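One way such gaps surface is through a disaggregated evaluation. The snippet below is a simple illustrative audit, with made-up records rather than any real dataset, that computes accuracy separately for each demographic group instead of reporting one overall number:

```python
# A hypothetical audit sketch: compare a model's accuracy across demographic
# groups, the kind of disaggregated check that surfaces the gap described above.
from collections import defaultdict

# Assumed record format: (group_label, true_label, predicted_label) per test image.
results = [
    ("white_male", "face", "face"),
    ("white_male", "face", "face"),
    ("black_female", "face", "no_face"),
    ("black_female", "face", "face"),
    # a real audit would use the full labeled evaluation set
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f} (n={total[group]})")
```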

In a YouTube video, Joy Buolamwini, a Rhodes Scholar, Fulbright Fellow, and at the time an MIT Ph.D. student, spoke at Wired Live UK in London on November 2-3, 2017, about known issues of racial and gender bias in machine learning. She founded an organization called the Algorithmic Justice League to fight against bias in machine learning.

In a particularly insidious example of racism, AI-based software called COMPAS was used to predict which criminals were at the highest risk of reoffending, according to a September 2018 article on Medium.com. ProPublica, a non-profit news organization, critically analyzed the risk assessment software.

These risk assessments were provided to judges in courtrooms throughout the United States. COMPAS generated conclusions about the futures of convicts and defendants that were used to gauge everything from bail amounts to sentences.

The software estimated the probability that an offender would reoffend based on their answers to 137 questions.

ProPublica found that the COMPAS algorithm could predict a convicted criminal’s tendency to reoffend to some degree. However, black offenders were almost twice as likely as white offenders to be labeled higher risk without actually going on to reoffend.

White offenders, on the other hand, were more often labeled lower risk than black offenders, despite their criminal histories.
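In concrete terms, ProPublica’s comparison amounted to measuring error rates separately by race. The sketch below is a simplified, hypothetical version of that calculation on a tiny made-up table; the actual analysis used thousands of real records:

```python
# A simplified, hypothetical version of the kind of error-rate comparison
# ProPublica ran: among people who did NOT reoffend, how often did each group
# get labeled "high risk" (the false positive rate)?
import pandas as pd

# Assumed columns; the real data has many more fields and rows.
df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 0, 1, 0],   # label assigned by the risk tool
    "reoffended": [0, 1, 0, 0, 0, 1],   # observed outcome after release
})

no_reoffense = df[df["reoffended"] == 0]
fpr_by_race = no_reoffense.groupby("race")["high_risk"].mean()
print(fpr_by_race)  # a large gap here is the disparity described above
```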

Image processing and facial recognition provide some of the most apparent examples of racial bias. For instance, in 2016, an AI algorithm from Amazon failed to classify images of Michelle Obama, Oprah Winfrey, and Serena Williams as women, according to a February 2019 Time magazine article by Joy Buolamwini.

This type of racial bias can be fixed with better, more inclusive data sources, as long as human checks are put in place. Not all racial and gender biases can be reduced, or even eliminated, so easily.

Complex Racial and Gender Bias Examples in Machine Learning


Another prime example of racial bias in machine learning occurs with credit scores, according to Katia Savchuk with Insights by Stanford Business.

Although federal law prohibits race and gender from being considered in credit scoring and loan applications, racial and gender bias still finds its way into the equations.

That doesn’t have to be the case, according to Jann Spiess, an assistant professor of operations, information, and technology at the Stanford Graduate School of Business, and Talia Gillis, a doctoral student at Harvard Business School and Harvard Law School.

Spiess says that, in theory, race and gender, the “protected” aspects of the lending decision-making process, can simply be removed from the equation.

In practice, however, removing race did nothing to reduce the gap in predicted default rates between black and Hispanic borrowers and white borrowers. Unfortunately, race tends to correlate with many other factors, such as neighborhood.

The researchers discovered that eliminating the ten other factors that correlated most strongly with race, including property type and education level, only slightly lessened the difference in default rate predictions.

“In a world where we have lots of information about every individual and a powerful machine to squeeze out a signal, it’s possible to reconstruct whether someone is part of a protected group even if you exclude that variable,” Spiess said. “And because the rules are so complex, it’s really hard to understand which input caused a certain decision. So it’s of limited use to forbid inputs.”
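Spiess’s point about reconstruction can be demonstrated directly. The sketch below is a toy example with synthetic data, not the researchers’ own code, that drops the protected attribute from the inputs and then shows how easily a model recovers it from correlated stand-in features like neighborhood and education:

```python
# A minimal sketch of the "proxy" problem: even with the protected attribute
# excluded, correlated features let a model reconstruct it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10000
race = rng.integers(0, 2, size=n)  # protected attribute, excluded from the lending inputs

# "Neutral" inputs that nonetheless correlate with race (neighborhood,
# education, and property type are hypothetical stand-ins here).
neighborhood  = race + rng.normal(scale=0.8, size=n)
education     = -0.5 * race + rng.normal(scale=1.0, size=n)
property_type = 0.7 * race + rng.normal(scale=1.2, size=n)

X = np.column_stack([neighborhood, education, property_type])
X_train, X_test, y_train, y_test = train_test_split(X, race, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy recovering the excluded attribute:", clf.score(X_test, y_test))
# Accuracy well above 50% means the "forbidden" variable is still encoded
# in the remaining inputs, which is why forbidding inputs is of limited use.
```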

Devising answers will demand an interdisciplinary approach combining computer science, mathematics, law, policy, philosophy, and other fields, Spiess said.

References

Buolamwini, J. (2019, February 7). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It. TIME. Retrieved from https://time.com/5520558/artificial-intelligence-racial-gender-bias/

DeBrusk, C. (2018, March 26). The Risk of Machine-Learning Bias (and How to Prevent It). MIT Sloan Management Review. Retrieved from https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/

Racial bias and gender bias examples in AI systems. (2018, September 2). Medium. Retrieved from https://medium.com/thoughts-and-reflections/racial-bias-and-gender-bias-examples-in-ai-systems-7211e4c166a1

Savchuk, K. (2019, October 28). Big Data and Racial Bias: Can That Ghost Be Removed from the Machine? Insights by Stanford Business. Retrieved from https://www.gsb.stanford.edu/insights/big-data-racial-bias-can-ghost-be-removed-machine

WiredUK. (2018, April 10). We’re Training Machines to Be Racist. The Fight Against Bias Is On [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=N-Lxw5rcfZg&t=627s
