The financial sector has a long history of making inequitable loan decisions.
Redlining, a discriminatory practice that began in the 1930s, involved banks denying customers loans because of their ZIP codes. Lenders literally drew red lines on maps around low-income neighborhoods, cutting those residents off from any opportunity to borrow money.
Redlining has disproportionately affected Black Americans and immigrant communities, denying them opportunities such as owning a home, starting a small business, and earning a postsecondary education.
The rise of machine learning and big data means lending decisions can be checked for human bias. But adopting the technology alone isn’t enough to overhaul discriminatory loan decisions.
Testing and correcting for bias
Software companies such as the US’s FairPlay — which recently raised $10 million in Series A funding — offer products that detect and help reduce algorithmic bias against people of color, women, and other historically disadvantaged groups.
FairPlay’s customers include the San Francisco-based financial institution Figure Technologies, the online personal-loan provider Happy Money, and Octane Lending.
One of its application-programming-interface (API) products, Second Look, reevaluates declined loan applicants for signs of discrimination. It pulls data from the US census and the Consumer Financial Protection Bureau to help identify borrowers who likely belong to protected classes, since financial institutions are forbidden from collecting information about race, age, and gender directly.
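The article doesn’t describe FairPlay’s exact method, but one well-known proxy technique for this kind of inference is Bayesian Improved Surname Geocoding (BISG), which the CFPB has used in fair-lending analysis: it combines census data on how surnames and neighborhoods correlate with race to estimate group membership. The sketch below is a minimal, hypothetical illustration of that idea — all probabilities and group labels are made-up toy values, not real census figures.

```python
# Hypothetical sketch of BISG-style proxy inference. Not FairPlay's actual
# implementation; all numbers below are invented toy values.

def bisg_estimate(surname_probs, geo_probs):
    """Combine P(group | surname) with P(group | census tract) via Bayes' rule.

    Both arguments map group name -> probability. Returns the normalized
    posterior distribution over groups.
    """
    joint = {g: surname_probs[g] * geo_probs[g] for g in surname_probs}
    total = sum(joint.values())
    return {g: p / total for g, p in joint.items()}

# Toy applicant: a surname common among Hispanic Americans,
# living in a majority-white census tract.
surname = {"white": 0.05, "black": 0.05, "hispanic": 0.85, "asian": 0.05}
tract = {"white": 0.70, "black": 0.10, "hispanic": 0.15, "asian": 0.05}

posterior = bisg_estimate(surname, tract)
print(max(posterior, key=posterior.get))  # prints "hispanic"
```

A tool like Second Look could then compare approval rates across these estimated groups to flag declined applicants who may deserve a second review.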