
Fairness in Credit Scoring: An Interview with Kareem Saleh, Founder & CEO of FairPlay


Speaker 1:

Hello, ladies and gentlemen, and welcome to our interview with Kareem Saleh, founder and CEO at FairPlay. Kareem also served in the Obama administration as chief of staff to the State Department's Special Envoy for Climate Change, where he managed the team that negotiated the Paris Climate Agreement, and then as a senior advisor to the CEO of the Overseas Private Investment Corporation, where he helped direct the U.S. government's $30 billion portfolio of emerging market investments. He launched FairPlay in 2020 as the world's first Fairness-as-a-Service solution, helping organizations eliminate algorithmic bias to improve their reputation, reduce regulatory risk, and increase their bottom line. Kareem, thank you for joining us today.

Speaker 2:

Thank you for having me, Boris. Delighted to be here.

Kareem Saleh's Background: From U.S. Government to Fairness in Lending Innovation

Speaker 1:

Absolutely, the pleasure is mine. Kareem, yours is a special-purpose company; we have never had a company quite like FairPlay on the show. But before we go deep into the topic of this interview, could you tell us a short story about yourself, your career path, and what brought you to where you are right now?

Speaker 2:

Yes. I have been interested in the question of underwriting inherently hard-to-score borrowers my whole career: people who have no history of having used credit, or who have had some kind of credit event in their past, like a bankruptcy or a foreclosure. I have been interested in this question of underwriting under conditions of deep uncertainty because my parents were immigrants to America from Egypt in the early seventies. And like so many immigrants, when they arrived in the States they needed a small loan to start a business. At the time, they called on every bank in their community, and no one would lend to them. So my mother ended up having to take a job working at night to save up money to start that small business, and she had to work jobs that were very threatening to her health and her wellbeing.

So even from a young age, I feel I understood that credit is such an important factor in the upward mobility of people in a modern economy. I got started working on underwriting hard-to-score borrowers in frontier and emerging markets: Sub-Saharan Africa, Latin America, Eastern Europe, the Caribbean. I spent several years doing that work in the Obama administration, at the State Department and at the U.S. government's bilateral development bank, the Overseas Private Investment Corporation, later called the Development Finance Corporation. That career trajectory gave me visibility into the credit underwriting and risk modeling practices of some of the most prestigious financial institutions in the world. And I was quite surprised to find that even at the global heights of capitalism, many of the underwriting models were extremely rudimentary.

Oftentimes, 20 to 50 variables were being used to build linear models in Excel. The state of the art now is machine learning and artificial intelligence, more advanced mathematics. But a challenge of these more advanced systems is that they rely on data from the past. And the data from the past, at least in the United States, but I think in many other markets too, is imbued with our unfortunate history of financial exclusion. In America, we have a history of something called redlining, when financial institutions refused to lend in certain predominantly Black neighborhoods. And so the more exposure I got to the underwriting methodologies and the data, the more I found that these systems exhibit disparities for people of color, for women, and for other historically disadvantaged populations.

The Origins of FairPlay and Its Impact on Fairness in Credit Decisions

Speaker 1:

Mm-hmm. Fantastic. I believe we will have a very thoughtful conversation about the financial disparities you are seeing in algorithm-based modeling and about the emerging trends. So, can you tell us: what is the advantage of FairPlay in the eyes of your best customers? How does FairPlay make that all happen?

Speaker 2:

Yes. Well, as you mentioned at the top, we refer to ourselves as the world's first Fairness-as-a-Service solution. Our software allows anybody using an algorithm to make a high-stakes decision to answer five questions: Is my algorithm fair? If not, why not? Could it be fairer? What is the economic impact to our business of being fairer? And finally, did we give our declines, the folks we rejected, a second look to make sure we didn't say no to someone we ought to have approved? We work primarily with lenders who want the economic, reputational, and regulatory benefits of being fair. Oftentimes we are able to improve the bottom lines of our customers' loan portfolios by re-underwriting the loans they rejected with models and algorithmic systems that are tuned using AI fairness techniques and which do a better job of underwriting populations that are not well represented in the data. And what we find is that 25 to 33% of the highest-scoring people of color and women who get declined for loans would have performed at least as well as the riskiest borrowers most lenders approve. So we like to say that FairPlay is good for profits, good for people, and good for progress.
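
To make the first of those five questions concrete, here is a minimal sketch in Python of one widely used fair-lending measure, the adverse impact ratio (AIR): the approval rate of a protected group divided by the approval rate of a control group, screened against the conventional four-fifths rule. This is a generic illustration with made-up data, not FairPlay's actual methodology.

```python
import numpy as np

def adverse_impact_ratio(approved, group, protected, control):
    """Approval rate of the protected group divided by that of the control group."""
    approved = np.asarray(approved, dtype=bool)
    group = np.asarray(group)
    rate_protected = approved[group == protected].mean()
    rate_control = approved[group == control].mean()
    return rate_protected / rate_control

# Toy decisions: 1 = approved, 0 = declined.
approved = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
group    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

air = adverse_impact_ratio(approved, group, protected="A", control="B")
print(f"AIR = {air:.2f}")  # here 0.50; below 0.8 (the four-fifths rule) is a common red flag
```

An AIR below 0.8 does not by itself prove discrimination, but it is a common signal that a decisioning system deserves closer scrutiny.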

Why Fairness in Credit Scoring Is Becoming a Priority in Financial Services

Speaker 1:

Mm-hmm. Okay. Let's drill down a little into fairness and these credit scores, because there is a lot of talk about fairness; I hear that in Europe, too, we have some issues with credit and fairness. What have you been seeing in the market? Why is there such massive interest in your kind of solution as algorithmic credit scoring and AI/ML become a kind of ubiquitous solution?

Speaker 2:

Well, I think that people can see very clearly, across a range of different domains, that algorithmic systems can cause harm. Let me give you a few examples that are even outside of financial services. Think about the Facebook algorithm: it has the objective of keeping the user engaged. But giving an algorithm merely one objective can lead to all kinds of unintended consequences. In the case of the Facebook algorithm, it might do whatever it must to keep the user engaged, even if keeping you engaged means showing you things that are bad for your mental health or bad for society. We observe the same problem in self-driving cars. If you gave a self-driving car the mere objective of getting the passenger from point A to point B, the self-driving car might do that while driving the wrong way down a one-way street, driving on the sidewalk, causing mayhem for pedestrians.

So what does Tesla, for example, do to ensure that doesn't happen? It has to give the neural networks powering self-driving cars two objectives: get the passenger from point A to point B while also respecting the rules of the road. Our big insight at FairPlay is to apply that same thought process from self-driving cars to consumer finance: predict who is going to default on a loan while also minimizing disparities for historically disadvantaged groups. And the good news is, it works. But it requires you to have as your starting premise the understanding that, left to their own devices, algorithmic systems may tend towards bad outcomes because they are relentlessly seeking to achieve one objective, and that can cause harm in the process.
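
To show what a two-objective credit model can look like in code, here is a minimal sketch, one common formulation from the fairness literature rather than FairPlay's actual method, of a logistic default model trained with an added penalty that shrinks the gap in average predicted score between two groups. The data, the penalty weight `lam`, and all variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X features, y default labels (1 = default), g group membership (0/1).
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam = 2.0   # weight on the fairness penalty (the second objective)
lr = 0.1

for _ in range(500):
    p = sigmoid(X @ w)
    # Objective 1: predict default (standard log-loss gradient).
    grad_pred = X.T @ (p - y) / n
    # Objective 2: minimize the squared gap in mean predicted score between groups.
    gap = p[g == 1].mean() - p[g == 0].mean()
    dp = p * (1 - p)  # derivative of the sigmoid
    grad_gap = (X[g == 1] * dp[g == 1, None]).mean(axis=0) \
             - (X[g == 0] * dp[g == 0, None]).mean(axis=0)
    grad_fair = 2 * gap * grad_gap
    w -= lr * (grad_pred + lam * grad_fair)

p = sigmoid(X @ w)
print("score gap between groups:", round(abs(p[g == 1].mean() - p[g == 0].mean()), 3))
```

Raising `lam` trades a little predictive accuracy for a smaller score gap; setting it to zero recovers the ordinary single-objective model.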

How Risk Managers Can Build Fairer Credit Scoring Models

Speaker 1:

Mm-hmm. Interesting. Because we are a community of risk managers, and most of our listeners are risk and compliance managers: if we take the life of an average risk manager, and there is one thing they should start prioritizing right now that they are not currently doing with regard to credit systems, what would that be?

Speaker 2:

I think you have to make a much greater effort to de-bias your algorithms before putting them into production. That means much more rigorous assessment of the data for biases. It also means the use of new modeling techniques that seek to correct for biases either in the data or in the computations done by these risk models. I believe that a key stage in the future of model governance will be the de-biasing of algorithms. I like to say that just as Google built search infrastructure for the internet and Stripe built payments infrastructure for the internet, so too in financial services will we need to build fairness infrastructure for credit decisioning: infrastructure that de-biases our digital decisions in real time.
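
As one illustration of correcting bias at the data level, here is a hypothetical sketch of reweighing, a well-known pre-processing technique from the fairness literature (Kamiran and Calders), in which each training example gets a weight so that group membership and outcome look statistically independent before a model is fit. It is generic, not a description of FairPlay's pipeline.

```python
import numpy as np

def reweighing_weights(g, y):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that group and label
    are independent in the reweighted training data."""
    g, y = np.asarray(g), np.asarray(y)
    w = np.empty(len(y))
    for gv in np.unique(g):
        for yv in np.unique(y):
            mask = (g == gv) & (y == yv)
            w[mask] = (g == gv).mean() * (y == yv).mean() / mask.mean()
    return w

# Toy data: group 1 is over-represented among "bad" (y = 0) labels.
g = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
weights = reweighing_weights(g, y)
print(weights)  # pass as sample_weight to any standard training routine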

The Fallacy of Neutrality: Misconceptions in Credit Modeling and Fairness

Speaker 1:

Wow, that's good. Positioning yourselves as a Google and a Stripe <laugh>, that is strong positioning. I would like to ask your personal point of view: what is a commonly held belief, or major misconception, in the world of credit modeling that you strongly disagree with?

Speaker 2:

Yeah. I like to say that for many years in financial services, we have tried to achieve fairness through blindness. We have a belief that variables can be neutral and objective predictors of credit risk. But I believe that neutrality is a fallacy. Let me give you an example. A variable that we often encounter in credit underwriting is consistency of employment. You might think to yourself that consistency of employment is a reasonable variable on which to assess the creditworthiness of a borrower. But all things being equal, consistency of employment will necessarily have a disparate effect on women between the ages of 18 and 45 who take time out of the workforce to start a family. So the idea that consistency of employment is neutral and objective is untrue. What's more, it's possible for a variable on its own to be neutral and objective, but when combined with other variables, to encode information that is impermissible for use in risk modeling.

Let me give you an example. Imagine for a moment that we were building a model that attempts to predict the sex of an individual, and imagine that, as an input to that model, I gave you the variable height. Well, you might say height is somewhat predictive of sex, because men tend to be taller than women. Of course, there are some very tall women in the world and some very short men, so height is not a perfect predictor of sex. So what if I told you: okay, Boris, in addition to height, I will give you weight. You might say, okay, weight adds some incremental predictive power to my model, because even at the same height, men tend to be heavier than women due to things like bone and muscle density, testosterone, et cetera.

Of course, the problem with a model that seeks to predict sex on the basis of height and weight is that it will classify every child as a woman <laugh>. So what if I told you: okay, Boris, no problem, now I will give you birth date to control for the fact that there are children. Now our model for predicting sex is looking pretty good. But if I had told you a moment ago that birth date was predictive of sex, you would have told me I was crazy. <laugh> This is an indication of how seemingly neutral variables can appear to be fair on a univariate basis, but when combined with other variables, can encode information that no human could possibly discern by simply looking at those variables. So you asked me what I think the biggest misconception is in credit risk modeling these days, and my answer is that neutrality is a fallacy. Seemingly objective variables combine in ways that encode information that no human could possibly understand.
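
One generic way to test for this kind of proxy encoding, a sketch rather than FairPlay's method, is to train an auxiliary model that tries to recover the protected attribute from the candidate variables: if the combination predicts it far better than any single variable does, the feature set is encoding protected information. The simulation below mirrors the height, weight, and birth-date story from the interview; all numbers are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Simulate the interview's example: adults and children of both sexes.
is_child = rng.random(n) < 0.3
sex = rng.integers(0, 2, size=n)  # 0 = female, 1 = male
height = np.where(is_child, rng.normal(130, 15, n),
                  rng.normal(162, 7, n) + 13 * sex)
weight = np.where(is_child, rng.normal(30, 8, n),
                  rng.normal(65, 9, n) + 12 * sex)
age = np.where(is_child, rng.integers(3, 13, n), rng.integers(18, 70, n))

def proxy_auc(features):
    """How well the feature set predicts the protected attribute (AUC)."""
    X = np.column_stack(features)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X, sex, cv=3, scoring="roc_auc").mean()

print("height alone:          ", round(proxy_auc([height]), 3))
print("height + weight:       ", round(proxy_auc([height, weight]), 3))
print("height + weight + age: ", round(proxy_auc([height, weight, age]), 3))
# The combination recovers sex much better than height alone does.
```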

Where Fairness in Credit Scoring Is Headed: AI, Data Diversity, and Innovation

Speaker 1:

Well, that was passionate and very strong argumentation, yeah <laugh>. Thank you. So, to summarize: where do you think credit modeling as a whole is heading? Is it going to AI and ML models? What are the trends in the industry, and what should we expect from you guys in the future?

Speaker 2:

Look, I think the incumbent credit modeling methods, like linear and logistic regression models, work very well on populations where the data is present and correct. But the future of credit underwriting is underwriting populations for whom the data is messy, missing, or wrong. And it's only machine learning models, complex machine learning ensemble algorithms of the sort used by Google in search, that are resilient to this kind of messiness, missingness, and incorrectness in the data. I think the other big trend we are seeing is the move from traditional credit bureau data, which tends to be 30 days old and to have a limited set of inputs about an individual, towards cashflow underwriting: income in and out of a person's bank account, which gives much more real-time visibility into their balance sheet. So I think we are entering an age of more advanced analytics and more diversity in the data sources used to paint a picture of an individual's ability and willingness to repay a loan.
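
As a small illustration of that resilience, gradient-boosted tree ensembles such as scikit-learn's HistGradientBoostingClassifier accept missing values natively and learn which branch incomplete records should follow, whereas a plain linear or logistic model would typically require the gaps to be imputed first. The cashflow-style data below is simulated for illustration; this is not FairPlay's stack.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Toy cashflow-style features with realistic gaps: ~30% of income values missing.
income = rng.normal(4000, 1500, n)
balance = rng.normal(2000, 1000, n)
default = (income + 0.5 * balance + rng.normal(0, 1500, n) < 3500).astype(int)
income[rng.random(n) < 0.3] = np.nan  # messy, missing data

X = np.column_stack([income, balance])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)

# Tree ensembles route NaNs down a learned branch; no imputation step needed.
model = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy with 30% missing income:", round(model.score(X_te, y_te), 3))
```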

FairPlay’s Growth and Innovation Strategy in the Fair Lending Space

Speaker 1:

Mm-hmm. Fantastic. So, you recently achieved some good progress with your funding round, your series... whatever, Series B, right?

Speaker 2:

Series A, yes.

Speaker 1:

Series A. So could you please explain what your team has recently achieved that brought you to this benchmark, and what is your approach to innovation?

Speaker 2:

Yes. Well, we have been very fortunate, in the time since we founded the company, to experience rapid growth, including with some of the biggest names in American FinTech. These are companies that understand that fairness is the future and that fairness can be a competitive advantage from a business perspective. Within about two years of founding the company, we have already de-biased 4.3 million consumer loans in America, and our growth continues at almost a triple-digit clip.

Speaker 1:

Wow.

Speaker 2:

We were fortunate that this attracted the attention of some very prominent investors here in the United States, who also believe that the next great FinTech companies will be those that exist to remediate some of the systemic discrimination in the financial services system. So we have been fortunate to hit a number of milestones early, helped by some great lenders as partners and by some great investors who see the future of financial services as being fair and profitable. With regard to our approach to innovation, we believe very strongly in having a team that is primarily technical: we are mostly data scientists, software engineers, and mathematicians. Much of our team comes from places like Google, Microsoft, NASA, and Elementary Robotics. And we do something I think is kind of unique and special to maintain our underwriting edge.

We are constantly reviewing the latest academic papers from places like Stanford and Carnegie Mellon to understand the new modeling techniques and the new mathematical approaches that have as their animating purpose underwriting populations that are not well represented in the data, underwriting under conditions of deep uncertainty, and correcting for potential biases encoded in the data. So I recommend that, to stay at the cutting edge of risk modeling and algorithmic decisioning systems, you spend some time reviewing the latest developments from academia and then work to commercialize and productize those innovations in the commercial sector.

Final Takeaways: Why Fairness in Credit Scoring Matters

Speaker 1:

Great. That is a lot of information; I think you are on the path to a very successful company. So maybe, to summarize: if someone listening to this interview would like to walk away with one or two major takeaways, what would they be?

Speaker 2:

Fairness is good for profits, good for people, and good for progress. Left to their own devices, algorithmic and machine learning systems are capable of learning the wrong things, so you have to act with intentionality to harness those systems for good. Otherwise, they may pose a threat either to the consumers you serve or to the safety and soundness of your institution.

Closing Remarks on Fairness in Lending and Algorithmic Bias

Speaker 1:

Mm-hmm. Fantastic. Kareem, that was a wonderful interview, and those were all my questions. Perhaps you would like to add something; if there is anything I forgot, please go ahead.

Speaker 2:

Thank you, Boris. No, I think it's been a very useful dialogue. I encourage all of the risk managers out there listening to spend more time thinking about the tendency of algorithmic systems to over-index on the data sets on which they are trained, and perhaps to augment those systems with second-look models: models that effectively check the primary decisioning system, to make sure it is functioning in ways that are fair to consumers but also good for your business.
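
To make the second-look idea concrete, here is a minimal hypothetical sketch: declined applicants who scored close to the approval threshold are re-scored by an independently built challenger model, and disagreements are flagged for human review. The threshold, margin, and scores are all illustrative, not FairPlay's actual workflow.

```python
import numpy as np

def second_look(scores_primary, scores_challenger, threshold=0.5, margin=0.1):
    """Flag declines that the primary model barely rejected but an
    independently built challenger model would have approved."""
    scores_primary = np.asarray(scores_primary)
    scores_challenger = np.asarray(scores_challenger)
    declined = scores_primary < threshold
    near_miss = scores_primary >= threshold - margin
    challenger_approves = scores_challenger >= threshold
    return declined & near_miss & challenger_approves

# Toy scores (higher = more creditworthy) from two independently built models.
primary    = [0.72, 0.48, 0.45, 0.30, 0.46]
challenger = [0.70, 0.55, 0.40, 0.52, 0.61]

flags = second_look(primary, challenger)
print("re-review applicants at indices:", np.flatnonzero(flags))  # -> [1 4]
```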

Speaker 1:

Fantastic. Kareem, thank you very much for your time. I believe we will continue to work with you; maybe in a few months we will do a special episode on another topic, because you are such a great speaker and your company's mission is very important for all of us. Thank you again.

Speaker 2:

Thank you for having me, Boris. It’s a pleasure to be with you.

Speaker 1:

Absolutely. Thank you. Bye-bye.

To learn more about how FairPlay is enabling Risk Managers with continuous compliance oversight and on-demand regulatory reporting, schedule a free demo today.

Contact us today to see how increasing your fairness can increase your bottom line.