AI Unfairly Targets Marginalized Communities

Technology like artificial intelligence is often assumed to be progressive, but it inevitably reflects the biases of the people who make it. CNN reports that AI, which powers voice recognition applications like Siri and tools like Google Translate, disproportionately hurts people of color and poor people. Artificial intelligence picks up on patterns in the data it is given, so, as Fei-Fei Li, the director of the Stanford Artificial Intelligence Lab, tells CNN, "In AI development, we say garbage in, garbage out. If our data we're starting with is biased, our decision coming out of it is biased."
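To make the "garbage in, garbage out" point concrete, here is a minimal sketch using invented, synthetic data and scikit-learn; it is not code or data from any system mentioned in the article. A model trained on historically biased hiring labels simply reproduces the same disparity in its own predictions.

```python
# Sketch: "garbage in, garbage out" with synthetic data.
# Assumption: a hiring-style dataset invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # 0 or 1: a demographic group
skill = rng.normal(size=n)           # skill is distributed identically in both groups

# Historical labels are biased: group 1 needed a higher skill bar to be hired.
threshold = np.where(group == 1, 1.0, 0.0)
hired = (skill > threshold).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
# The model faithfully reproduces the disparity baked into its training labels.
```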

That pattern has played out with PredPol, a tool that predicts where crimes will occur and that has disproportionately sent police officers to neighborhoods with large populations of people of color. Similarly, Google image searches for terms like "CEO" have historically returned almost exclusively images of men, and facial recognition software regularly struggles to recognize the faces of people of color.

People in tech are taking steps toward reducing bias, including hiring more diverse designers and establishing a code of conduct. Rumman Chowdhury helped develop the Fairness Tool, which searches data sets for biases. She says that designers often think they can avoid bias by simply leaving gender or race out of their models, but that it is actually much more complicated than that. She tells CNN, "Every social scientist knows that variables are interrelated. In the US for example, zip code [is] highly related to income, highly related to race. Profession [is] highly related to gender. Whether or not that's the world you want to be in, that is the world we are in."
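Chowdhury's point about interrelated variables can be illustrated with another hedged sketch, again on invented data. Here the protected attribute is dropped from the training features entirely, yet a zip-code-like proxy carries the bias through; the correlation check noted in the final comment hints at the general kind of scan a bias-auditing tool might run, though the Fairness Tool's actual implementation is not described in the article.

```python
# Sketch: dropping a protected attribute does not remove bias when a proxy remains.
# Assumption: synthetic data; "zip_is_b" is an invented stand-in for any feature
# strongly correlated with group membership, as Chowdhury describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)
# Zip code acts as a proxy: group 1 mostly lives in zip "B", group 0 in zip "A".
zip_is_b = (rng.random(n) < np.where(group == 1, 0.9, 0.1)).astype(int)
skill = rng.normal(size=n)

# Biased historical labels, as in the previous sketch.
hired = (skill > np.where(group == 1, 1.0, 0.0)).astype(int)

# Train WITHOUT the group column -- only skill and the zip-code proxy.
X = np.column_stack([skill, zip_is_b])
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
# An audit might start by measuring how strongly each feature tracks the protected
# attribute: np.corrcoef(zip_is_b, group)[0, 1] is roughly 0.8 in this setup.
```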

Photo via Getty