
Ethics in Artificial Intelligence—CSC484

Professor Clark Elliott

Racial and Gender Bias Inherent in AI Systems:

Modern neural networks and machine-learning systems rely on training from examples and on pattern matching over large amounts of existing data. This is how they work. But consider the following ethical problem:
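
To make that concrete, here is a minimal sketch, in Python with entirely made-up toy data, of what "learning from examples" amounts to: the system has no judgment of its own, and a new input is classified purely by its resemblance to the labeled examples it was given.

    # A toy "pattern matcher": predictions simply copy the label of the most
    # similar training example. All data here is hypothetical.
    import numpy as np

    # Each row is a made-up feature vector; each label is whatever judgment
    # the historical examples happened to carry.
    X_train = np.array([[0.9, 0.1],
                        [0.8, 0.2],
                        [0.2, 0.9],
                        [0.1, 0.8]])
    y_train = ["favorable", "favorable", "unfavorable", "unfavorable"]

    def predict(x_new):
        # Nearest-neighbor matching: find the closest stored example, return its label.
        distances = np.linalg.norm(X_train - x_new, axis=1)
        return y_train[int(np.argmin(distances))]

    print(predict(np.array([0.85, 0.15])))  # "favorable": it resembles the favorable examples
    print(predict(np.array([0.15, 0.85])))  # "unfavorable", for exactly the same reason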

We have seen huge and impassioned protests around the United States in 2020 over the acknowledged fact of systemic racism. Along with racial biases, our country has systemic gender biases built in as well. Regardless of how one might feel about these larger issues, or about the solutions proposed for them, some of the data-supported science we must consider in an AI ethics class is clear.

For example, consider film lighting for black skin, which has been negatively biased for decades. Lighting, and indeed camera technology itself, has been tuned to make actors with white skin look good. The problem is only compounded by on-set adjustments when white and non-white actors appear together in a scene and decades-old lighting norms are followed.

Let's consider, for a moment, racial biases that show up only in subtle ways, flying beneath the radar so to speak. For example, without being overt, bias can show up in the language used in social media postings, and in the subtle components of expression in face images that go along with the unfavorable lighting biases.

But now we have an additional problem: AI programs are getting very good at mimicking what already exists and at generating new composite versions of the same. If we train, say, an advertising system (one that will generate novel artificial faces and artificially written ad copy) on vast collections of existing texts and faces from social media posts, we will inherently copy all of the subtle biases built into the originals as well.
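
As a rough illustration of that copying effect, here is a minimal sketch in Python. The tiny corpus, the word associations in it, and the simple bigram sampler are all hypothetical stand-ins for a real social-media dataset and a modern neural generator, but the mechanism is the same: the generator can only recombine patterns it saw, so whatever skew the training text carries comes out the other end.

    # A toy text generator trained on a (hypothetical) biased corpus.
    # It samples only word pairs seen in training, so it reproduces the
    # corpus's associations -- here, a gendered pronoun skew -- verbatim.
    import random
    from collections import defaultdict

    corpus = [
        "our new manager he is decisive",
        "our new manager he is confident",
        "our new assistant she is helpful",
        "our new assistant she is pleasant",
    ]

    # Count which words follow which in the training text.
    following = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            following[a].append(b)

    def generate(start, max_len=6):
        # Repeatedly sample a word that followed the current word in training.
        out = [start]
        for _ in range(max_len):
            options = following.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    random.seed(0)
    print(generate("manager"))    # always continues "manager he is ...": the skew is copied
    print(generate("assistant"))  # always continues "assistant she is ..."

A production system would use a far larger model and dataset, but scaling it up does nothing to remove its dependence on the training distribution.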

So now we have systemically racist (and gender-biased), artificially generated novel posts, faces, and ad copy.

This presents an ethical problem that does not have an easy technical solution. Humans are adept at picking up very subtle social cues. If the existing data biases us against Person X because of a few millimeters of downturn in the left corner of the mouth in a face image, or an average difference of N lumens in the lighting for the photo (creating slightly different lighting patterns on the face, with subtle negative attributions), this bias will be extracted from the training data. But because of the nature of neural networks, there is no way for us to directly see, or even to know, that these biases have been built into our system. These biases fly under the radar as well.
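
Here is a minimal sketch of that extraction in Python, using entirely synthetic data. "Qualification" stands in for a legitimate signal and "lighting" for a subtle photographic cue; the hypothetical historical labels are slightly swayed by the lighting cue, the way biased human judgments would be. A plain logistic regression is used instead of a deep network precisely so the learned weights can be inspected; in a real neural network the same absorbed bias would be spread across millions of parameters where we could not read it off.

    # Synthetic, hypothetical data: labels were produced mostly by a legitimate
    # signal ("qualification"), but the raters were also slightly swayed by a
    # subtle photographic cue ("lighting").
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    qualification = rng.normal(size=n)      # legitimate signal
    lighting = 0.1 * rng.normal(size=n)     # subtle cue, small in magnitude

    # Hypothetical biased historical labels.
    true_logits = 2.0 * qualification + 5.0 * lighting
    labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logits))).astype(float)

    # Fit a plain logistic regression on both features by gradient descent.
    X = np.column_stack([qualification, lighting])
    w = np.zeros(2)
    for _ in range(5000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.5 * (X.T @ (p - labels)) / n

    # The weight on the lighting feature comes out clearly nonzero: the model
    # has quietly absorbed the raters' bias, even though no one asked it to.
    print("learned weights [qualification, lighting]:", w)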

Now we are building artificial Person Xs as part of our ad copy that look like other Person X's the system has "seen" and, yes, once again have all of the original biases built into them in the projected social cues and the accompanying text.

(And yes, you might have missed the reference which emphasizes our point... look closely at the previous sentence.)

People who are currently at a bias disadvantage should be very wary of the coming pattern-matching AI if these biases are not explicitly addressed.