
Debating AI’s Future on the PNYX Podcast with Vikas Raj of ResilienceVC

In this podcast episode, we explore critical AI topics from misinformation and bias to AI's potential as a great equalizer in education and healthcare. The discussion includes insights from a conversation with Senator Chuck Schumer about the challenges of crafting effective AI regulation.

Welcome to this episode of The Pnyx. Everyone is talking about AI and its potential in every industry and application. I think this episode’s a lot of fun, thinking specifically about financial services and financial inclusion.

Fair Lending Compliance in the Age of AI: A Conversation with FairPlay’s Founder

Vikas, since you led the interview, why don’t you tell us a little bit about the guests and what you’re excited about talking about?

Yeah, thanks Alex. This is definitely one we’re excited about. And I’ve known Kareem Saleh from FairPlay AI for a really long time.

And I love what his business is trying to achieve. AI, as you said, is thrown around a lot everywhere in the world, and particularly in the financial services space that you and I both think a lot about. But Kareem actually gets into the details a bit.

He gets pretty specific about what he can do for financial services, what the risks are, where this is all headed. He talks in some detail about what he’s already been able to do in terms of expanding credit access for Americans and the risks of AI encoding the unfairness of the past into the future.  So we get into a lot of those issues and we have a bit of a discussion.

And I’m pretty excited about this one. So yeah, let’s get started. I’m excited to introduce everybody to Kareem.

Bringing Underwriting into the Age of AI

All right, welcome, Kareem, to The Pnyx.

Thank you so much for being here. Thanks for having me.

So we’re gonna start the discussion just hearing a little bit about you and the history of FairPlay AI. Can you tell us about your background? What were you doing before you started FairPlay AI, and how did you come up with the idea for the business?

Sure, well, again, thanks for having me. I have been working and interested in the problem of underwriting inherently hard-to-score borrowers my whole career. I got started doing that work in frontier emerging markets like Sub-Saharan Africa, Eastern Europe, Latin America, the Caribbean.

And I, you know, got started basically financing development-friendly projects in frontier emerging markets. And that gave me visibility into the underwriting practices of some of the most prestigious financial institutions in the world. And what I was quite surprised to find is that even at the commanding heights of global finance, the underwriting methodologies were extremely primitive, especially compared to the mathematical approaches that were being used in Silicon Valley.

At the same time, not only were the underwriting methodologies primitive, but I observed that they all exhibited disparities for people of color, for women, and for other historically underserved groups. And that was a bit surprising to me because I’m actually trained as a banking lawyer. And I know that there are a set of laws that prohibit disparities in lending and actually make them severely punishable under the law.

And so I got interested in this question of how it is possible that, on the one hand, discrimination in lending is illegal and severely punishable under the law, and at the same time ubiquitous. And I came to the realization that financial institutions rely on what I jokingly refer to as the fair lending industrial complex: basically a set of fancy consultancies and law firms that come up with clever legal and statistical justifications for the disparities in their lending. At the same time, my co-founder was at Google, and we started experimenting with applying complex machine learning techniques of the sort that Google uses in search to credit underwriting.

And a few years ago, it must have been six or seven years ago now, in 2017, we were doing some work with a major mortgage originator, right around the time that we started to see the emergence of things like deepfakes. And deepfakes, you might be aware, are powered by a machine learning technique called generative adversarial networks.

And my co-founder kind of had the inspired idea of saying, hey, what if we applied adversarial networks to credit underwriting models? Could we de-bias the credit underwriting models? And so we persuaded this large mortgage originator to allow us to apply adversarial de-biasing techniques to their credit model. 

And we found that the mortgage originator would be able to increase its approval rate for black consumers by something on the order of 10% with no corresponding increase in risk. For this mortgage originator, that meant many billions of dollars of additional credit originated, and it meant something like 50,000 more black families in homes. So it was an outcome that was good for profits and good for people and good for progress.

And so that was kind of the eureka moment where we said, hey, the basic fair lending testing and reporting that is currently done, I would say at a low level of rigor and at a low frequency, was going to need to change in the age of AI. Bias testing and fairness optimization were going to have to be a much higher-rigor, higher-frequency exercise in a world of advanced predictive models and alternative data. And so that’s when we started the company, about three and a half years ago.
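The adversarial de-biasing idea Kareem describes can be sketched in a few lines. Below is a toy illustration on synthetic data (the variables, numbers, and model are invented for the example; this is not FairPlay’s actual method): a logistic scorer learns to predict repayment while an adversary tries to recover group membership from the score alone, and the scorer is penalized in proportion to the adversary’s success.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic applicants: `income` is genuinely predictive of repayment,
# while `proxy` mostly encodes membership in a protected group.
n = 4000
group = rng.integers(0, 2, n).astype(float)   # 1 = protected group
income = rng.normal(0.0, 1.0, n)
proxy = group + rng.normal(0.0, 0.5, n)
X = np.column_stack([income, proxy, np.ones(n)])
# Historical labels reflect past disadvantage for the protected group.
repay = (rng.random(n) < sigmoid(2.0 * income - 1.5 * group)).astype(float)

def train(lam, steps=2000, lr=0.5):
    """Logistic scorer; lam > 0 adds an adversarial de-biasing penalty."""
    w, a = np.zeros(3), np.zeros(2)
    for _ in range(steps):
        s = sigmoid(X @ w)
        # Adversary: predict group membership from the credit score alone.
        A = np.column_stack([s, np.ones(n)])
        g = sigmoid(A @ a)
        a += lr * A.T @ (group - g) / n
        # Scorer: fit repayment, minus the adversary's gradient so the
        # score leaks less information about group membership.
        grad_fit = X.T @ (repay - s) / n
        grad_adv = X.T @ ((group - g) * a[0] * s * (1 - s)) / n
        w += lr * (grad_fit - lam * grad_adv)
    return w

def score_gap(w):
    """Mean score difference, non-protected minus protected."""
    s = sigmoid(X @ w)
    return s[group == 0].mean() - s[group == 1].mean()

gap_plain = score_gap(train(lam=0.0))
gap_fair = score_gap(train(lam=2.0))
print(f"score gap, plain: {gap_plain:.3f}; de-biased: {gap_fair:.3f}")
```

In this toy setup the penalty should shrink the average score gap between groups relative to the plain model, at some cost in raw fit, which is the trade-off the interview keeps returning to.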

We’re called FairPlay AI, and we cheekily refer to ourselves as the world’s first fairness-as-a-service company.

From Compliance to Impact: What Fair Lending Should Look Like in Practice

That’s an amazing statistic on the increased approval rates for that mortgage lender. Can you sort of, I mean, what was happening there? It wasn’t necessarily that mortgage lender’s intention to keep those borrowers out.

What was like the flaw or the misunderstanding that was allowing that to happen?

Yeah, it’s a great question and a great point, which is to say, for the most part, the disparities that we encounter in decisioning systems, whether in lending or in other domains, are not the result of bad-faith actors building those models. They’re largely due to limitations in data and mathematics. And so what we see is that, in the case of that particular lender, they just had an overreliance on conventional credit scores, right?

So I would say that, you know, they were using, let’s call it 20 variables to make a credit decision about someone and kind of conventional credit scores represented about 70% of the decisioning factors that they took into account to make an underwriting decision. And the remaining 30% were made up of other variables that are commonly found on a credit report. And what we found was that we could tune down the reliance on the conventional credit score from let’s call it 70% to 50% and tune up the influence of these other variables, which were similarly predictive from a risk perspective, but had less of a disparity driving effect.

And so what we really just did was optimize the relative weights on the variables in ways that maximize their predictive power, but minimize the adverse outcomes for protected groups.
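As a rough illustration of that re-weighting idea, the snippet below (synthetic data, invented numbers) computes the adverse impact ratio, a standard fair-lending disparity metric where regulators often look for a value above roughly 0.8 (the so-called four-fifths rule), under two different weightings of a conventional score versus a similarly predictive alternative variable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)   # 1 = protected group
# A conventional score that runs lower for the protected group (e.g. from
# thin credit files), and an alternative variable (say, cash-flow data)
# that is similarly predictive but less correlated with group membership.
credit_score = rng.normal(0.0, 1.0, n) - 0.4 * group
cash_flow = rng.normal(0.0, 1.0, n)

def adverse_impact_ratio(w_score, w_alt, approve_top=0.4):
    """Approve the top `approve_top` share under a weighted blend and
    return protected-group approval rate / control-group approval rate."""
    blended = w_score * credit_score + w_alt * cash_flow
    cutoff = np.quantile(blended, 1.0 - approve_top)
    approved = blended >= cutoff
    return approved[group == 1].mean() / approved[group == 0].mean()

print("AIR at 70/30 weighting:", round(adverse_impact_ratio(0.7, 0.3), 2))
print("AIR at 50/50 weighting:", round(adverse_impact_ratio(0.5, 0.5), 2))
```

Tuning down the weight on the disparity-driving variable moves the ratio toward 1.0, which is the "optimize the relative weights" effect described above.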

How AI Is Changing the Landscape of Fair Lending Compliance

So I want to talk about AI as part of the solution for this, but as I said, I’m wondering if it was also part of the problem, or, maybe said differently, whether AI is solving something that an older version of AI or ML started to cause. When you think about what AI innovation is allowing for now, and let’s talk about FairPlay AI in particular first, what is generative AI letting you do that you couldn’t do before?

Well, let’s take a step back and just say that AI is about replicating human capabilities across a range of potential applications. And so for years, we’ve been using predictive AI in financial services. So can we predict or rank risks the same way a human would?

Robotics is all about, can we replicate human physical behaviors? Generative AI is about, can we generate creative outputs that are similar to what a human would, right? Whether it’s, can we generate music like a human?

Can we generate text like an author? Can we generate images like an artist? And I think that what’s changed in the world is that to first order, the cost of computation has essentially become free and infinite.

At the same time, we have essentially internet-scale data, right? There’s a famous quote I heard somebody use the other day, which is that quantity has a quality all of its own. And these systems really require a tremendous amount of data to be able to discern the subtle patterns that give rise to their predictive power.

And so I think across a bunch of domains, we’re seeing both the combination of basically free compute and internet scale data is allowing AI to dramatically increase our ability to replicate human behaviors across a bunch of domains, creative domains, predictive domains, physical domains, etc.

FairPlay’s Approach to Detecting Bias in Lending Models

 So let’s take that specifically into financial services, credit underwriting and more fair credit underwriting. What can AI do?

Yeah. Well, so our solution allows anybody using an algorithm to make a high-stakes decision about someone’s life to answer five questions. Is my decision fair?

If not, why not? Could it be fairer? What’s the economic impact to our business of being fairer?

And finally, did we give our declines, the folks we rejected, a second look to make sure that we didn’t deny somebody an opportunity they deserved? And so in our case, our customers are using AI to increase approval rates, increase take rates, and increase positive outcomes for historically underserved communities. And what we see is that something like 25 to 33 percent of the highest-scoring black, brown, and female folks who get declined by lenders for loans today would have performed as well as the riskiest folks who get approved.

So we see AI as kind of bridging an access gap that exists today, and that would possibly get worse if we don’t take steps to make the predictive models that increasingly govern our lives more sensitive to populations that are not well represented in the data.

That makes total sense, and it’s super exciting, because it allows you to help these lending institutions meet demand that is there and generate the same sort of efficient frontier of profitability. Are there risks of using AI with these populations in the credit underwriting space, and then more broadly in financial services, in your view?

There is something called the AI Incident Tracker, which keeps a database of AIs that have gone off the rails. And I encourage everybody to go look at that, because the incidents it’s tracking are quite scary. There are a number of concerning AI harms befalling people today.

We see this in facial recognition, where AI systems are incorrectly identifying consumers in stores as shoplifters or students taking exams as cheaters. We see deepfakes that depict sexually explicit images of minors. We see political dirty tricksters trying to spread misinformation.

And then there are a whole other set of more subtle AI harms which may be difficult to discern and yet which have far reaching consequences. And credit scoring is a good example of that, right? I mean, you know, predictive models that are used for underwriting are designed to regard applicants about whom there is not much information as inherently riskier.

But of course, if you’ve been locked out of the financial system or preyed upon by the financial system, as, for example, black Americans have been, then the data that’s available about you is very likely to be messy, missing, or wrong. And so if you don’t build safeguards into AI systems that account for that data bias and then seek to correct for it, you can encode the unfairness of the past into the digital decisions that will govern our futures.

The Role of Alternative Data to Advance Fair Lending Practices

Have you seen that in action? Are there other examples where AI, for all its promise, has already started to cause harm, in financial services in particular? I mean, I know we talked about deepfakes and cheapfakes.

Yeah. Well, let me give you one example of a variable that we encounter all of the time in credit underwriting. It’s an alternative data variable, right?

Which is consistency of employment. How consistently is the applicant employed? And if you think about it, consistency of employment is a perfectly reasonable variable on which to assess the creditworthiness of a man.

But all things being equal, consistency of employment is going to have a disparity-driving effect for women who take time out of the workforce to start a family, right? So maybe what we ought to do is tell these models, hey, you will sometimes encounter a population of people in the world called women, and women will sometimes exhibit inconsistent employment. So maybe before you decline an applicant for inconsistent employment, you should do a check to see if they resemble good applicants on other dimensions you didn’t heavily consider.

And what we find, as I think I said earlier, is it’s something like, you know, 25 to 33 percent of the highest scoring black, brown and female folks who get declined would have performed as well as the riskiest folks who get approved.
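A minimal sketch of such a second-look check, on hypothetical synthetic data (the feature names, toy scoring rule, and thresholds are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Two generally predictive dimensions (say, income and payment history)
# plus employment consistency, which can penalize career-gap applicants.
other_strengths = rng.normal(0.0, 1.0, (n, 2))
employment_consistency = rng.normal(0.0, 1.0, n)
score = other_strengths.mean(axis=1) + employment_consistency
declined = score < np.quantile(score, 0.4)   # bottom 40% are declined

# Second look: re-rank the declines on the other dimensions only, and flag
# anyone who beats the riskiest band of approved applicants on them.
fallback = other_strengths.mean(axis=1)
risky_approved_bar = np.quantile(fallback[~declined], 0.1)
second_look = declined & (fallback > risky_approved_bar)
print(f"{second_look.sum()} of {declined.sum()} declines flagged for review")
```

The flagged applicants are exactly the population the quote describes: declined on one disparity-driving variable, yet comparable to the riskiest approved applicants everywhere else.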

The Business Case for Investing in Fair Lending Technology

Very interesting, very interesting. I’m sort of curious about your view, given that you’re thinking a lot about AI in your sweet spot around credit underwriting, whether you’re starting to see how it’s playing a role in other parts of financial services, interacting with consumers and essentially lowering operating expenses for financial institutions. What are the other places where you see AI making an impact?

You know, well, let’s just start from the premise that AI is hugely deflationary, right? It dramatically reduces the cost of delivering expertise across a range of different applications. And so that has tremendous power to improve outcomes for low- and moderate-income Americans, and for other communities that perhaps have not been historically well served, right?

It allows governments to leverage AI to improve public services and streamline administrative processes, right? You see it in agriculture, where AI-driven tools are optimizing crop yields and reducing resource usage for farmers.

You see it in education, where AI is providing personalized learning experiences. And of course, we’re seeing it in financial services. I mean, imagine that each of us had an AI angel, right?

Guiding us to be our best financial selves, right? You know, an AI financial advisor to analyze our spending habits, suggest budgeting strategies, negotiate bills on our behalf, and assess our creditworthiness more fairly and more precisely, painting a finer portrait of our ability and willingness to repay a loan. And heaven forbid folks end up in a default or charge-off situation, AI can help us structure loan workouts that are personalized and likely to be accepted and fulfilled.

So I think at every step of the customer journey, we are seeing the potential for AI to deliver better, fairer experiences more cheaply.

The Evolving Landscape of AI Regulations in Financial Services

What do you think about regs, Kareem, regulations? I mean, obviously the current administration has put out some key AI actions over the last several months. To the extent you’re able to talk about it, how do you see regulations and potentially policy affecting your work, and AI more broadly?

Yeah. Regulation is one of the toughest questions in AI, right? I mean, given the potential for AI systems to create widespread harm, there’s a natural tendency to say, hey, we need some rules of the road here.

We need safeguards to prevent misuse and ensure AI is used ethically. And in certain domains like financial services, we have model governance regimes that regulate the use of predictive models and algorithmic systems. And those regimes actually do a pretty good job, though there’s always room for improvement.

At the same time, we’re in the very early stages of AI development, and overregulation could have unintended and detrimental consequences, right? So consider the example of nuclear power, where stringent regulations have made it nearly impossible to build new nuclear plants despite the urgent need to address climate change, and despite ample evidence that nuclear power can be operated safely if you have the proper controls in place. And so a major concern that I have, and that others have articulated, about the drive to regulate AI is regulatory capture, right?

We don’t want a cartel of big tech companies controlling AI for the next 30 years.

Right.

By the way, the drive to regulate alcohol a century ago, which led to Prohibition, is actually quite informative on this point, right? So, you know, social reform movements are generally driven by two forces, and Marc Andreessen gives a great talk about this.

And he talks about basically the true believers, whom he calls the Baptists, who fear that widespread social harm will arise from the use of a technology and advocate for regulation. But at the same time, you have what he calls the bootleggers, the cynical opportunists who exploit regulation and laws for regulatory capture and for their own benefit. So, in the case of alcohol, the bootleggers ultimately co-opted the Baptists, leading to temperance laws that were optimized for the benefit of the bootleggers, not for the public good.

And today, the modern bootleggers aren’t criminal gangs, they’re legitimate businesses who are seeking government protection from competition. And so you basically have big tech companies that are waging a war on little tech companies and want regulations that create monopolies or cartels to make it hard for new entrants to operate. This is unfolding in Congress right now.

And so I think the central issue facing our policy makers is, will the government anoint a cartel of tech companies that control AI for the next 30 years? Or are we going to have a market of free competition? And I think we need to be very careful not to create regulatory frameworks that unfairly favor established players over new innovators.

And we need to strike a delicate balance with smart, adaptive regulations that prevent misuse without stifling innovation. I was with Senator Chuck Schumer the other day, and he confessed that getting AI regulation right is probably one of the most difficult things facing Congress right now.

Yeah. We’re almost out of time. Let me ask two final questions and give me quick responses.

What’s Next for Fair Lending Compliance and AI Regulation

Yeah, I want to think about the alarm bell and the aspiration here. If I wake up tomorrow and I get a call from my mom and it’s actually AI, that’s probably my cue to unplug and move to a cave, or at least call my congressman or whatever it is. What is the alarm bell here where we need to get concerned about the influence AI is having on credit underwriting and  even more broadly?

Then what’s the aspiration? How good could this get? What’s your big vision for FairPlay AI and how AI can play a critical role in making people’s lives better?

Yeah. So with respect to the alarm bells, that is a really spicy question that provokes a lot of disagreement. Right?

So on the one hand, we don’t want fear-mongering to overshadow the potential benefits of AI in areas like healthcare and scientific discovery and education and problem-solving. On the other hand, you’ve got some founding members of the AI community, folks like Geoff Hinton, who’s commonly referred to in the media as the godfather of AI, who see real possibilities of AI causing doomsday and apocalyptic scenarios. Somebody asked this question of Eric Schmidt, the former chairman of Google, the other day, and he expressed a fear that in a world of billions of AI agents, they might develop their own language and communicate with each other in ways that humans might not understand.

And that’s when he believes we should pull the plug. Tim Wu, who coined the term net neutrality and advises President Biden on technology issues, has similarly said that he thinks we need a kill switch for AI systems to prevent them from running amok. On the other hand, you’ve got folks who say that these doomsday scenarios actually distract from the very real harms that AI is doing today.

And you have a set of critics out there who argue that AI technology is still far from the level of sophistication at which it poses existential risks, that AI systems are, for the most part, designed for narrow, specific tasks, and that they lack the general intelligence and autonomy to create widespread harm. And they worry that focusing on doomsday scenarios can divert attention and resources from more immediate and realistic concerns, such as ethical AI use, bias, and privacy. And so they want more of our focus spent addressing the AI challenges that are affecting people’s lives today.

For my part, I think the alarm bells will start to ring when we start to see widespread job displacement without adequate social safety nets, when we see significant increases in AI-driven misinformation, of course when we see persistent and increasing bias in AI systems, and then when we start seeing AI being used to enhance surveillance and erode privacy at scale, as is frankly unfolding in China right now.

What about the aspiration?

Yeah. I mean, the aspiration is, I’m optimistic that AI has the potential to greatly benefit everyone. Think about it this way, like I think I said earlier, AI drives down the cost of expertise dramatically.

That means that AI is a powerful equalizer. It means that everyone is going to have an AI health coach to monitor and track their health and suggest personalized workouts and diet modifications. And everyone’s going to have an AI therapist to support mental health and emotional well-being.

And that’s going to make health care more accessible and affordable. You know, I think, imagine a world in which a Stanford education cost a penny, right? Imagine that printing a house cost a penny.

Imagine that getting treated for prostate cancer cost a penny. Imagine that each of us had an AI angel on our shoulders, guiding us to be our best selves, right? Our best educated selves, our most productive selves, our best physical selves, our best financial selves, our best emotional selves.

I think that’s the future that we foresee at FairPlay AI. And we believe that as algorithms take over higher- and higher-stakes decisions in people’s lives, just as Google built search infrastructure for the internet and Stripe built payments infrastructure for the internet, so too will we need to build fairness infrastructure for the internet, to de-bias digital decisions in real time.

That’s a great place to end. Hey, Kareem, thank you so much for spending some time with us on The Pnyx. This has been a terrific conversation.

Thanks, Ben.

Thanks for having me.

Key Takeaways from Our Conversation with FairPlay’s Founder

That was great. And we so appreciate Kareem spending some time with us. Any quick thoughts, Alex?

Look, I think the bit that was very much expected, to me, was the excitement around AI and its potential for financial inclusion. The thing I enjoyed in the conversation was a bit of a reflection on the old adage, keep your friends close and your enemies closer: the role of AI within regulation, and actually being able to figure out, is my model fair?

Does that work? What are the risks and how do you actually fight against them? The protectionist element, not just the narrative around what risks AI creates.

I really enjoyed that element of the conversation. I thought it was really insightful.

Yeah. No, I think that’s a really good point. AI is both the risk and the solution in some ways.

If we can be on the right side of it, obviously, there’s a huge opportunity to really help people. Clearly, Kareem’s thinking about those things and working on those things, so I’m really thrilled that he was able to join us and thrilled that you all could listen. And we’re excited to welcome you next time.

Until next time, thanks, everybody. Thank you.

If you’re looking to increase AI adoption in your underwriting process without increasing risk of non-compliance, schedule a free demo of FairPlay’s fairness as a service platform today.

Contact us today to see how increasing your fairness can increase your bottom line.