This originally ran in Fintech Futures.
Financial services companies increasingly rely on AI to make decisions that humans used to make, creating efficiencies for the companies and lowering costs.
Where these decisions affect customers, those customers are now at the mercy of algorithms. In theory, this should be a good thing.
Algorithms don't feel emotions and therefore make decisions based on hard facts and data points, which means the human traits of conscious and unconscious bias should not feature. And yet it appears that AIs have become an extension of the humans who programmed them, carrying those biases through.
In a recent conversation with Kareem Saleh from a start-up called FairPlay, I was confronted by the harsh realities of AI-driven bias in the financial services sector.
Kareem showed me a series of infographics illustrating lending decisions made for home loans in the US. The data comes from the lenders themselves, who are required by law (under the Home Mortgage Disclosure Act) to collect and report applicants' ethnicity and gender as part of the process.
I asked Kareem what the best approach was to solve the problem. He responded that "the first thing to do is a diagnosis". Kareem told me that FairPlay has an analysis tool that examines a bank's existing lending software for signs of discrimination. It tries to answer the following questions (a rough sketch of the kind of diagnostic involved follows the list):
- Is the algorithm fair?
- If not, why not?
- How could it be fairer?
- What's the economic impact on the business of being fair?
- Do applicants who are rejected get a second look to see if they might resemble favoured borrowers?
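To make the first two questions concrete, here is a minimal Python sketch of one widely used fairness diagnostic: the adverse impact ratio, the basis of the "four-fifths rule" that US regulators often apply as a first screen. To be clear, this is my own illustration of the general idea, not FairPlay's actual method, and the group names and approval data are invented.

```python
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: list of (group, approved) pairs, e.g. ("group_a", True).
    Returns each group's approval rate divided by the reference group's.
    A ratio below 0.8 is a conventional red flag for disparate impact."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical example: 80% vs 56% approval rates give a ratio of 0.70,
# below the 0.8 threshold, so this algorithm would warrant a closer look.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 56 + [("group_b", False)] * 44
)
print(adverse_impact_ratios(decisions, reference_group="group_a"))
# {'group_a': 1.0, 'group_b': 0.7}
```

A ratio like this only flags a disparity; answering the "why not?" and "how could it be fairer?" questions requires digging into which model inputs drive the gap.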
Unpicking bias in AI is a whole new fintech opportunity, and one that appears to be sorely needed. So, if I were an institution, I would be looking carefully at my algorithms and AI, asking Kareem's five incredibly sensible questions, and then doing something about it!