How Artificial Intelligence Can Perpetuate Gender Imbalance

Artificial intelligence is on the cusp of a revolution that could reshape businesses, cities, and governments. We spoke with Ege Gürdeniz, Dean of the AI Academy, Oliver Wyman’s AI education platform, to explore how AI could affect financial services firms and their customers. Oliver Wyman's Elizabeth St-Onge and Madeline Kreher also contributed to this article.

With algorithms or with machines, there's this misconception that they are perfect and they can't have a human-like flaw

What do we mean by gender bias in artificial intelligence?

Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions about certain groups of people, defined by ethnicity, religion, gender, and so on. To understand that, we first need to look at how AI works and how it's trained to complete specific tasks.

The second is a bit more insidious and involves how popular AI applications that we see in use today are propagating gender stereotypes. You’ll notice, for example, that most of the virtual assistants powered by AI have women’s voices, and the most powerful computer in the world, Watson, is named for a man. 

How is human bias transmitted into AI?

Ege Gürdeniz: Even though it sounds like these machines have a mind of their own, AI is just the reflection of our decisions and our behavior, because the data we use to train AI is the representation of our experiences, behaviors, and decisions as humans. If I want to train an AI application to review, say, credit card applications, I have to show it previous applications that were approved or rejected by humans. So, you're really just taking human behavior and codifying it. 

Companies can use AI to identify existing issues that are not visible or detect emerging issues, similar to an early warning system. They can surface biases and take action accordingly

How does AI bias manifest itself in financial services?

Ege Gürdeniz: AI applications are generally trained using data that are generated by humans, and humans are inherently biased. And many organizations are also biased in their historical behavior.

Let’s say you want to train an AI application to review mortgage applications and make lending decisions. You would have to train that algorithm using the mortgage decisions that your human loan officers have made over the years. So, let's say I'm a bank that has made thousands of mortgage loans over the past 50 years. My AI application will learn from that data set to figure out what factors to look for and how to decide whether to reject or approve a mortgage application. Now let's take an extreme example and say that, historically, I approved 90% of applications from men, but whenever women applied, I rejected their applications. That's embedded in my data set. So if I take that data set and train an AI application to make decisions on mortgage applications, it's going to pick up the inherent bias in my data set and say, "I shouldn't approve mortgage applications submitted by women."
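To make this concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the column names and model choice are assumptions for illustration, not a description of any real lending system; it simply shows how a model trained on skewed historical decisions reproduces the skew.

```python
# Purely illustrative sketch (synthetic data, assumed column names): a model
# trained on skewed historical lending decisions reproduces the skew.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" decisions: men are approved largely on income,
# while applications from women are almost always rejected.
gender = rng.choice(["M", "F"], size=n)
income = rng.normal(60_000, 15_000, size=n)
approved = np.where(
    gender == "M",
    (income > 45_000).astype(int),
    (rng.random(n) < 0.10).astype(int),
)

X = pd.DataFrame({"income": income, "is_male": (gender == "M").astype(int)})
model = LogisticRegression().fit(X, approved)

# Two new applicants with identical income, differing only in gender.
new_applicants = pd.DataFrame({"income": [70_000, 70_000], "is_male": [1, 0]})
print(model.predict_proba(new_applicants)[:, 1])
# The model assigns a much higher approval probability to the male applicant,
# simply because that is the pattern embedded in the historical data.
```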

In financial services, it's not that long ago that there were legal obstacles to women obtaining some of these products. What data set do you choose when, a few decades ago, women still had trouble opening bank accounts in their maiden name, without their husband? Even if you remove obvious gender indicators from the data set, there could be patterns that correlate highly with gender, such as a gap in someone’s income history around women’s childbearing years. Women still have lower approval rates for pretty much all credit products. They also have lower access to funding in the small business space.
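One hedged way to check for such proxies is to drop the gender column and then test whether gender can still be predicted from the remaining features. The sketch below uses synthetic data and assumed feature names; a recoverable gender signal (AUC well above 0.5) would mean the supposedly gender-blind data still encodes gender indirectly.

```python
# Illustrative proxy check (synthetic data, assumed feature names): drop the
# gender column, then test whether gender is still recoverable from what remains.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5_000
is_female = rng.random(n) < 0.5

# In this toy data, a gap in income history is far more common for women,
# so it acts as a proxy for gender even when gender itself is removed.
career_gap_years = np.where(is_female, rng.poisson(2.0, n), rng.poisson(0.2, n))
income = rng.normal(60_000, 15_000, n)

features = pd.DataFrame({"income": income, "career_gap_years": career_gap_years})

auc = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    features,
    is_female,
    cv=5,
    scoring="roc_auc",
).mean()
print(f"AUC for predicting gender from 'gender-blind' features: {auc:.2f}")
# An AUC well above 0.5 means the data still encodes gender indirectly.
```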

At the end of the day, if our objective is to increase women's participation in finance and more advanced financial services and investments, then the things I described to you will be totally counterproductive.

You've talked a lot about how AI mimics the patterns of humans over the years. How are the gender-bias risks of AI different from the risks that humans themselves pose in financial services?

Ege Gürdeniz: There's not a consistently established understanding of what AI bias means and how it could affect people. Compounding that problem, when you're interacting with humans you know humans have biases and they're imperfect, and you may be able to tell if someone has certain strong biases against someone or a certain group of people. But with algorithms or with machines, there's this misconception that they are perfect and they can't have a human-like flaw.

And then there’s the issue of scale…

Ege Gürdeniz: The scale is massive. Whereas in the past maybe you had one loan officer who was rejecting five applications a day from women, now you might have this biased machine that's rejecting thousands of applications from women. A human can only do so much damage, but in the context of AI, there's no limit. 

So what should financial services companies do to rein in the risk of AI perpetuating gender bias or stereotypes?

Ege Gürdeniz: Because the issue is very new and evolving, the answers are also new and evolving, and they are further complicated by the fact that no one really knows where AI is going to be in two years or five years.

But I think there are general guidelines. On the data side, an AI application is really just taking an input and producing an output, so you should think hard about what you're using it for, because that's what you can control. If I'm using it for mortgage applications, I should look at the data I'm training this machine on, that is, my historical mortgage applications that were reviewed by humans, and compare the gender distribution of the overall population, the approved population, and the rejected population, to see if I have a fair, balanced data set.
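As a rough illustration of that kind of audit, the sketch below (with assumed file and column names) compares the gender mix of the overall, approved, and rejected populations, and the approval rate by gender, before the history is used for training.

```python
# Illustrative data audit (file and column names are assumptions): compare the
# gender mix of the overall, approved, and rejected populations, plus approval
# rates by gender, before this history is used to train a lending model.
import pandas as pd

history = pd.read_csv("historical_mortgage_decisions.csv")  # assumed file

summary = pd.DataFrame({
    "overall": history["gender"].value_counts(normalize=True),
    "approved": history.loc[history["approved"] == 1, "gender"]
                       .value_counts(normalize=True),
    "rejected": history.loc[history["approved"] == 0, "gender"]
                       .value_counts(normalize=True),
})
print(summary)

# A large gap in approval rates is a warning sign about the training data.
print(history.groupby("gender")["approved"].mean())
```

A check like this doesn't fix the bias by itself, but it makes the imbalance visible before it is codified into a model.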

And then, in terms of the design point I mentioned earlier, ensuring diversity and inclusion on product design, data science, and engineering teams, and actively applying a gender lens to these processes, will help avoid gender bias or stereotypes. Within the wealth management space, for example, what can appear to be a gender-neutral approach may actually be geared toward the default man. The right way to design a product would be to have a diverse team and say, "Women are going to use this product, so we shouldn't just assume X, Y, Z."

What opportunities does AI present for financial services firms in terms of gender balance?

Ege Gürdeniz: Ultimately, AI represents a powerful set of analytical techniques that allow us to identify patterns, trends, and insights in large and complicated data sets. In particular, AI is very good at connecting the dots in massive, multidimensional data sets that the human eye and brain cannot process. Therefore, companies can use AI to identify existing issues that are not visible or to detect emerging issues, similar to an early warning system. They can surface biases and take action accordingly. For example, companies can use AI techniques to analyze data sets related to recruiting decisions, lending decisions, pricing outcomes, and other decisions or behaviors, and identify areas where issues might exist. Training AI on a company’s historical recruiting data and seeing, in a test environment where it cannot have a negative impact on applicants, that the outcomes are biased against a particular group can be valuable information for the company.
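A hedged sketch of such an "early warning" audit: train a model on historical recruiting decisions in a sandboxed test environment, then compare predicted selection rates across groups on held-out applicants. The file, column names, and the 80% rule-of-thumb threshold are illustrative assumptions, not a prescribed method.

```python
# Illustrative "early warning" audit (file, column names, and the 80% threshold
# are assumptions): train on historical recruiting decisions in a sandbox, then
# compare predicted selection rates across groups on held-out applicants.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("historical_recruiting_decisions.csv")  # assumed file
features = data.drop(columns=["hired", "gender"])  # assumes numeric features
X_train, X_test, y_train, y_test = train_test_split(
    features, data["hired"], test_size=0.3, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

held_out = data.loc[X_test.index].copy()
held_out["predicted_hire"] = model.predict(X_test)
rates = held_out.groupby("gender")["predicted_hire"].mean()
print(rates)

ratio = rates.min() / rates.max()
if ratio < 0.8:  # common rule-of-thumb threshold for disparate impact
    print(f"Warning: selection-rate ratio {ratio:.2f} suggests biased outcomes.")
```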
