This originally ran in Fintech Futures.
Financial services companies increasingly rely on AI to make decisions that humans used to make, creating efficiencies for the companies and lowering costs.
Where these decisions affect customers, those customers are now at the mercy of algorithms. In theory, this should be a good thing.
Algorithms don’t feel emotions and therefore make decisions based on hard facts and data points, which means that the human traits of conscious and unconscious bias should not feature. And yet it appears that AIs have become extensions of the humans who programmed them, carrying those biases through.
I recently read a fascinating article in Time about Uber’s problems with AI.
Uber uses AI-driven facial recognition to verify drivers. However, some drivers say they found themselves locked out of the Uber app because the AI deemed them to be fraudulently trying to access it.
According to the drivers and their trade union, the Independent Workers’ Union of Great Britain (IWGB), the problem appeared to be that the facial recognition technology struggled with darker skin tones.
In a recent conversation with Kareem Saleh from a start-up called FairPlay, I was confronted with the harsh realities of AI-driven bias in the financial services sector.
Kareem showed me a series of infographics illustrating lending decisions made for home loans in the US. The data comes from the lenders themselves, who are required by law to collect and report ethnicity and gender as part of the application process.
FairPlay has collated all the available data and uses it to power a dashboard that shows lending decisions down to county level. The data reveals a shocking bias based on ethnicity and gender. The negative bias is particularly acute for Black applicants, although Hispanic and Native American applicants do not fare much better. Women are also more likely to be disadvantaged than men.
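To make the kind of disparity such a dashboard surfaces concrete, here is a minimal sketch of how an approval-rate comparison might be computed from lending records of the sort US lenders report. The field names and sample data are hypothetical, and this is an illustrative simplification, not FairPlay’s actual methodology.

```python
from collections import defaultdict

# Hypothetical loan records: (ethnicity, approved) pairs.
# Real reported data carries many more fields (county, loan type, income, ...).
applications = [
    ("White", True), ("White", True), ("White", False),
    ("Black", True), ("Black", False), ("Black", False),
    ("Hispanic", True), ("Hispanic", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in applications:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

# Approval rate per group, benchmarked against the best-treated group.
rates = {group: approvals[group] / totals[group] for group in totals}
benchmark = max(rates.values())

for group, rate in sorted(rates.items()):
    # Adverse impact ratio (AIR): a common fairness screen; values below
    # roughly 0.8 are often treated as a red flag (the "four-fifths rule").
    air = rate / benchmark
    print(f"{group}: approval rate {rate:.0%}, AIR {air:.2f}")
```

Run over real records grouped by county, a comparison like this is what lets a dashboard highlight where approval rates diverge sharply between groups.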