
Developing Fairness Infrastructure for Lending with Kareem Saleh

Welcome to RegFi: Financial Regulation in the Digital Era

Welcome to RegFi, a podcast series focusing on financial regulation for the digital economy. Advances in technology will drive financial regulation to change more in the next 10 years than in the previous 50. Join us as we explore the challenges and opportunities that lie ahead.

Meet Kareem Saleh of FairPlay: Promoting Fairness in Lending

Hello, this is Jerry Buckley, here with my co-host, Sherry Safchuk. Our guest today is Kareem Saleh, co-founder of FairPlay, a company seeking to promote fairness in the lending business and beyond. Kareem, welcome, and it’s so good to have you with us.

While you’ve clearly moved well beyond your lawyer roots, I hope you won’t mind if I mention that we take pride that in your early career, you were associated with the Buckley Law Firm, which has recently merged with Orrick, where Sherry and I are partners now. We have so little time, and you have so much to share with our listeners. So let’s jump right in.

First, let’s talk about FairPlay. It’s a great story, and there’s a lot to pack into a few minutes. But could you boil down the essentials of the thesis that underlies your work at FairPlay, the methodology you use, and the results you’ve been able to achieve?

All of that in five minutes.

FairPlay’s Mission: Building Fairness Infrastructure for the Digital Economy

Well, thank you for having me, Jerry and Sherry, and Jerry, special thanks to you for giving me a shot as a young lawyer, fresh out of school, at the height of the financial crisis 15 years ago. I’m extremely proud of my time at Buckley, and I’m grateful for the foundation it gave me for my work at FairPlay. The fundamental thesis of our business at FairPlay is that, left to their own devices, algorithms may learn and perpetuate biases against populations who have either been historically underserved or who are not well represented in the data.

Real-Time AI Fairness Tools for Fintech and Banking

So we believe that just as Google built search infrastructure for the Internet and Stripe is building payments infrastructure for the Internet, so too must we build fairness infrastructure for the Internet to debias digital decisions in real time and to prevent the unfairness of the past from being programmed into the digital decisions that will govern our futures. Our software allows anybody using an algorithm to make a high stakes decision about someone’s life to answer five questions. Is my algorithm fair?

If not, why not? Could it be fairer? What’s the economic impact to our business of being fairer?

And finally, did we give our declines, the folks we rejected, a second look to make sure that we didn’t say no to somebody we ought to have approved? Some of the biggest names in financial services, fintech, and the banking-as-a-service ecosystem use our tools to automate their fair lending testing and reporting and to double check their decisions with a view to making more good loans and doing more good. Our business is an example of how advances in technology and AI are changing the way that financial services companies need to operate in order to stay both competitive and compliant in the AI era.

And the methodology you use, you’ve described, how long have you had to examine the results? And how are you doing?

Lending Fairness Metrics: Improving Approval and Take Rates

Well, we’ve been at it for three years now. And I’m delighted to report that customers using our technology have been able to increase their approval rates on the order of 10% with no corresponding increase in risk, increase their take rates through optimized pricing on the order of about 13% and increase their fairness to black applicants in particular on the order of 20%. So good for profits, good for people and good for progress.

Yeah, the more people are able to qualify, the better for business. That’s good. It’s really remarkable.

Well, Sherry, your turn.

Legal Compliance: AI, Explainability, and Adverse Action Notices

Kareem, thank you for joining us. Clearly FairPlay is providing the keys to open up credit opportunities to underserved persons. At the same time, while using FairPlay technology can get a borrower over the hurdle to borrowing, the terms of credit for some of these borrowers may be less favorable than the best rate available from a lender.

Given that the Equal Credit Opportunity Act and the Fair Credit Reporting Act require a lender to provide an adverse action notice to a borrower who receives less than the optimal loan terms from a lender, how are lenders using FairPlay dealing with this responsibility? And in that regard, as you know, the CFPB recently emphasized the importance of explainability of underwriting results in adverse action notices, stating explicitly that there is no exception for AI. In situations where FairPlay AI tools are used by a lender for a second look, which qualifies a borrower who might not have qualified using the standard underwriting model, how do FairPlay and the lender deal with the adverse action notice requirements?

Addressing the Black Box Problem with Explainable AI in Lending

Thanks, Sherry. Well, as you know, our belief is, and the law requires, that you give consumers explanations they can understand and ideally act upon in connection with the various decisions that get made as part of the lending process. That can be challenging with AI systems because they have what is commonly known as a black box problem, which is you don’t know why the machine made a decision, you just know that it did make a decision.

But in recent years, there have been a number of advances in algorithmic explainability, not only by our company and specifically by my co-founder, John Merrill, who served in senior AI roles at both Google and Microsoft, but also at other places like Carnegie Mellon and Stanford. And most of these algorithmic explainability techniques leverage new learnings from the world of cooperative game theory that allow you to take a result like a loan denial and decompose it into the constituent variables that led to that outcome and to what extent. That allows you to begin the process of generating adverse action reasons.

Of course, the problem with AI is that it often relies on complicated interrelationships between variables for its predictions, and therefore, knowing what combination of variables led to a result can still require a translation of those variables into reasons that a consumer or a layperson could understand and, in a perfect world, act upon. And so we’ve spent quite a bit of time trying to make the explanations that are given to consumers who interact with our AI systems actionable adverse action reason codes, so they can not only understand what drove the decision, but also what behaviors they might change to allow them to be approved in the future.
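The cooperative-game-theory idea described here, decomposing an outcome into per-variable contributions, is commonly implemented with Shapley values. The sketch below is a hypothetical toy illustration, not FairPlay's actual implementation: the toy credit score and feature names are invented, and the exact computation shown is exponential in the number of features, so it is only viable for small examples.

```python
from itertools import combinations
from math import factorial

def shapley_values(score, features):
    """Exact Shapley decomposition of a score over its input features.

    `score` takes a dict of the features that are "present" and returns a
    number. Each feature's Shapley value is its average marginal
    contribution over all orderings of the features.
    """
    names = list(features)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {x: features[x] for x in subset + (f,)}
                without_f = {x: features[x] for x in subset}
                total += weight * (score(with_f) - score(without_f))
        phi[f] = total
    return phi

# Hypothetical toy score: additive, so each feature's Shapley value
# equals its standalone contribution (weight times value).
def toy_score(present):
    weights = {"utilization": -30.0, "late_payments": -45.0, "income": 20.0}
    return 600.0 + sum(weights[f] * present[f] for f in present)

applicant = {"utilization": 0.9, "late_payments": 2.0, "income": 1.0}
contributions = shapley_values(toy_score, applicant)

# The most negative contributions become candidate adverse action
# reasons; translating them into consumer-readable language is the
# separate step discussed above.
reasons = sorted(contributions, key=contributions.get)
```

For this additive toy model the decomposition is trivial, but the same machinery applies to models with interacting variables, which is where the "black box" difficulty actually arises.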

You know, that’s so consistent with your goal, and it’s really wonderful that you’re spending the time on that. We’ve talked about this in other contexts before, Kareem, and I have a great admiration for your effort in that area.

Fair Lending Laws and Second Look Underwriting Models

Of course, FairPlay has the goal of expanding credit opportunities for those who have historically not been able to have access to credit, often as you’ve noted, people who are specifically intended to be protected by the Fair Housing Act or the Equal Credit Opportunity Act or other federal or state laws.

Of course, even those who are not in what we consider protected classes, or we sometimes refer to as protected classes, may benefit from the application of tools offered by FairPlay, particularly if they have some history of credit blemishes or characteristics which traditional underwriting models would call disqualifying. So do you recommend that lenders using FairPlay run all their applications through your Fairness as a service second look? Or in some cases, I guess you embed your system in their underwriting process.

In that case, they would, of course, be running all applications through. And where the initial offer is less than optimal, do you find that you can not only increase acceptances, but actually improve the rate and terms for some borrowers whose applications are given FairPlay reviews?

Fairness Optimizer and Second Look: AI-Powered Credit Decisions

Yes. Our Fairness optimization and second look technologies can be adopted in several ways and across many different kinds of decisions. Why don’t I describe first how these technologies work and then how they can be applied?

We have a tool called Fairness Optimizer, which tunes a financial institution’s incumbent models to be fairer within their risk tolerance. That is, Fairness Optimizer modifies the lender’s existing decisioning system, usually by readjusting the relative weights on the variables that they consider in ways that seek to maximize their predictive power while minimizing their disparity driving effect. We also have a technology that you mentioned called Second Look, which rather than tunes the incumbent underwriting strategy, augments it by double checking a lender’s decisions, using models that have been tuned to be more sensitive to populations that are not well represented in the data.

So fairness optimization and Second Look can be used for any binary decision, like an approve/decline decision, or for any continuous decision, like the ones that you mentioned, such as pricing, line assignment, loan term, etc. And as I mentioned earlier, the results from using these techniques have been really remarkable. In practical terms, it’s meant that we’ve been able to increase approval rates, increase take rates through optimized pricing, and increase fairness.

So, again, we like to say that double checking your decisions is good for profits, good for people, good for progress, and can be applied across the customer journey.

And across all customers. Is that the usual practice, that you apply it across every customer?

Custom Integration of Fairness AI by Financial Institutions

It really depends on the lender. In some cases, we have lenders who want to adjust the incumbent underwriting strategy. In other cases, we work with lenders who say, hey, look, we understand our core customer really well.

We want you to help us reach these populations that our incumbent methodologies may not have had a lot of exposure to in the past. And so we actually find that there’s a pretty even split between those folks who want to tune their incumbent underwriting model relative to those who want to augment their underwriting model to help them reach populations that they haven’t previously been able to serve well.

Okay, Sherry.

Fair Lending and GSE Underwriting Engines: Working with Fannie Mae and Freddie Mac

That’s really interesting. So Kareem, I want to turn to government sponsored entities like Fannie Mae and Freddie Mac. As you know, most mortgage loans originated in the US are sold in the secondary market, the vast majority to Fannie Mae and Freddie Mac.

And they both have underwriting engines that lenders can use to determine whether a loan will qualify for purchase by one or both of the GSEs. How does FairPlay’s technology relate to these GSE models?

Yes. So we work both with firms that originate loans for sale to the GSEs and increasingly even with some of the GSEs themselves. So with respect to the firms that originate loans for sale to the GSEs, they choose to work with us because even when using the GSE underwriting engines, mortgage lenders want to ensure that the loans that they’re originating will perform well and because there’s still room for discretion and variability in how loans are originated.

So our software allows mortgage lenders to ensure both strong credit performance and that the discretion in the lending process doesn’t lead to unfair outcomes, particularly for hard to score or underserved communities. And so by doing so, we not only help these mortgage originators comply with fair lending laws and regulations, but also to assist them in maintaining good relationships with the GSEs and in building a more inclusive loan portfolio, which increases customer satisfaction. With respect to the GSEs themselves, a few of them are using our software to search for less discriminatory alternatives.

Or you can think of those as fairer variants of their credit underwriting and pricing models.

Early Default Risk and Credit Model Performance

That’s really great. And then my understanding is that during the financial crisis, and even now, if a loan is likely to go bad, it will default within the first year or so of origination. After that, there may be a loss of job or illness or divorce or some other unpredictable life event that interferes with repayment.

But early defaults are an indication of underwriting issues. How are loans that were enabled by FairPlay doing during their first year after origination?

Multi-Objective Optimization in AI Lending Models: The Tesla Playbook

Yeah, we are very pleased with the credit performance of the models built with our technology. So far, the delinquency and charge off rates are all within the risk tolerances of the lenders who set them. Of course, the outlook for consumer and small business credit is more uncertain than ever.

There’s been score inflation, price inflation, rising interest rates, the resumption of student loan payments, all of which are making loan applications harder to score than ever. These developments have led to lenders tightening their credit standards and reducing their exposure to certain segments of the market. In our view, the key to successful underwriting in an environment like this is to build and monitor (monitoring is very important) accurate underwriting and pricing models that are also fair.

This means using data and modeling methods that promote inclusion, like cashflow data, and modeling techniques which minimize disparities for protected groups while also accurately predicting default. And here we have taken a page from the Tesla playbook, and let me elaborate a little bit on what I mean by that. All models must be given a target, an objective that they seek to relentlessly maximize.

And so if you think about it, like self-driving cars, well, even before we get to self-driving cars, think about a social media algorithm. A social media algorithm generally has the objective of keeping the user engaged. And it’s going to pursue that objective regardless of whether or not the stuff that it’s showing you to keep you engaged is good for your mental health or good for society, right?

And you see this problem with self-driving cars too, right? If Tesla gave the neural networks that power its self-driving cars the mere objective of getting a passenger from point A to point B, it might do that while driving the wrong way down a one-way street or blowing through red lights or otherwise causing mayhem to pedestrians. And so what does Tesla do?

Tesla has to give the neural networks that power its self-driving cars two objectives, right? Get the passenger from point A to point B while also respecting the rules of the road. And I think our innovation has been to kind of take a page from the Tesla playbook and apply that multi-objective optimization technique to consumer credit, right?

Predict who’s going to default accurately while also minimizing differences in outcomes for protected groups. And the good news is it works.
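The "two objectives" idea described above can be read as a single loss with two terms. The sketch below is a minimal, hypothetical illustration, not FairPlay's actual method: cross-entropy measures how accurately default is predicted, an approval-rate gap between two groups measures disparity, and a weight `lam` trades the two off.

```python
from math import log

def log_loss(y_true, p_pred):
    """Average cross-entropy: how well the model predicts default."""
    eps = 1e-12
    return -sum(y * log(p + eps) + (1 - y) * log(1 - p + eps)
                for y, p in zip(y_true, p_pred)) / len(y_true)

def approval_gap(approvals, groups):
    """Absolute difference in approval rates between groups 'A' and 'B'."""
    def rate(g):
        members = [a for a, grp in zip(approvals, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate("A") - rate("B"))

def dual_objective(y_true, p_pred, approvals, groups, lam=1.0):
    """Predict default accurately AND minimize outcome disparities.

    lam sets the trade-off between the two goals, like a self-driving
    car weighing 'reach the destination' against 'respect the rules
    of the road'. A training loop would minimize this quantity.
    """
    return log_loss(y_true, p_pred) + lam * approval_gap(approvals, groups)
```

In practice the disparity term is made differentiable so the whole objective can be minimized by gradient descent, but the structure, accuracy term plus weighted fairness penalty, is the same.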

Credit Transparency and Consumer Trust Through AI Explainability

You know, a short while ago, I wrote an article suggesting that lenders should simply pull back the underwriting curtain and let borrowers see exactly what the lender sees as to their credit worthiness, their likelihood of default and so forth. Different lenders have different risk tolerances, but as the 2008 credit crisis revealed, borrowers as well as lenders are hurt when too great a risk is taken on a loan. What do you think of the idea of letting borrowers peek behind the underwriting curtain and see how lenders view their likely performance before they sign up for a loan?

Of course, this would require some attention to explainability, which we referenced earlier. And by the way, this principle could apply in the insurance context, which I know you’re eyeing as another area where FairPlay can be helpful and in other areas as well.

Risks and Rewards of Transparent AI Underwriting

Yeah, Jerry, I think it’s a very interesting idea with a lot of potential benefits. In general, I think it would lead to increased consumer understanding, helping consumers to better understand how their creditworthiness is being assessed and why certain decisions are being made. That should tend to empower consumers to make more informed financial decisions and to take steps to improve their creditworthiness.

I think it should also lead to increased consumer trust. You know, when consumers understand how their creditworthiness is being assessed, they’re more likely to feel that they’re being treated fairly. I think it also has the potential to reduce bias, because making the factors that are considered in a credit decision more transparent can help to identify and address the extent to which certain factors may drive differences in outcomes for certain groups.

Of course, there are some potential challenges that arise from, you know, the making of underwriting details kind of public. The first is that, you know, fraudsters may try to game the system, right, by manipulating their applications or other data that is used for underwriting. This would have the effect of actually making it more difficult for lenders to make accurate assessments about creditworthiness.

In addition, sharing underwriting criteria would, of course, make it easier for a lender’s competitors to copy and mimic one another’s models. And so, some may perceive a reduction in competitive advantage, and it may tend to diminish the interest on the part of some lenders in developing sophisticated underwriting models. So I also think that, like, in order to do this effectively, you would want to share this information in a way that didn’t lead to information overload, right?

We deluge consumers with so many disclosures during the lending process. And I think it’s probably time for us to look at whether all of those disclosures are meaningful and capable of being processed by the consumer and communicated in a kind of easily digestible way. So I think it’s a super interesting idea.

And I think we’ve just got to kind of strike the right balance between kind of consumer trust and consumer protection and also guarding the safety and soundness of these institutions.

A very interesting response. And for the benefit of our listeners, we don’t go over these beforehand. We just ask the questions and Kareem gave us a very useful answer.

You know, it’s kind of like putting explainability before the decision. And how you do that in a way that is understandable by the consumer, as well as respectful of the technology and the IP that a lender has developed would be a challenge. But for the beneficial reasons you mentioned, I think it’s worth exploring that challenge.

The Future of Financial Regulation: AI Dashboards and Robo-Examiners

That’s why I wrote the article, I guess. But that was a very, very interesting discussion. You know, I’m going to take our last few minutes here and go to the fact that our premise for this RegFi Podcast is that in the next 10 years, we’ll see more change in financial regulation than the last 50.

And so the big question, just opening the aperture of our lens here, what are two or three major ways you see the regulatory framework changing in light of AI, blockchain, stable coins, potential central bank digital currencies and so forth? Will bots replace examiners? Will regulatory dashboards allow regulators to monitor the activities of supervised institutions in real time?

You’re a visionary. You came up with an extraordinary technology that you’re making available to the marketplace. Your thoughts on regulation?

Yeah, Jerry, I think that the regulatory landscape is poised for significant transformation in the coming decade. Driven, as you point out, by advancements in technology and the emergence of new financial products and services. And you can get a sense of this, by the way, by just reading the recent executive order on AI that was put out by the White House, which emphasizes the need for responsible and ethical use of AI in various industries, including finance, and which contemplates a future in which the regulators are using AI too.

So the way I think about it is, if AI is the great equalizer, it’s going to level the playing field for everyone, including the regulators. And I think that’s an interesting thought experiment to imagine a world where regulators wield an arsenal of sophisticated AI bots to oversee financial services companies. I mean, imagine the transformation in examination, supervision, enforcement.

You know, with AI at their disposal, regulators would be able to conduct real time supervision by ingesting and analyzing data at a volume and scale and with a level of precision that was previously unimaginable. And AI-enabled regulators would be able to dramatically increase their enforcement. I once had a very senior official at the CFPB tell me that the entire investigative capacity of the CFPB at any one given time, given its current resources, is about 150 cases.

You have to wonder, could AI amplify that by 10x, by 100x? The possibilities are staggering. As for your specific question, I don’t think bots will replace examiners, but I do think bots will augment examiners.

I mean, bots can automate many routine tasks, such as data review and pattern identification. I think it can raise issues to the attention of the examiners by allowing, by essentially being able to automate very in-depth analyses. And yes, I think regulatory dashboards will increasingly become important as examiners adopt real-time monitoring that provides a comprehensive view of the activities of supervised institutions.

So in the age of AI, I think regulators are about to go from being kind of like Sherlock Holmes more to being like Robocop. The future of financial regulation is about to get really, really interesting.

Closing Remarks: The Road Ahead for AI in Financial Services

Thank you. That’s a very interesting insight, and we appreciate it. You know, our time is up, and it’s really a pleasure to have had you with us, Kareem.

To have seen your career and the way it’s progressed has been a great source of pride and satisfaction to all of your friends. So thank you for being with us, and we hope we’ll have you back again sometime.

Thanks for having me, Jerry and Sherry.

Thank you, Kareem.

Thank you for joining us for RegFi. Don’t forget to subscribe wherever you listen to podcasts so you won’t miss an episode. And please take a moment to leave a review.

This will help us improve and will make it easier for others to discover this podcast. Thank you.
