One of the toughest questions facing lenders and insurance companies today as they adopt AI and big data is: How should I think about tradeoffs between accuracy and fairness when selecting models?
At FairPlay, we’ve been thinking a lot about this issue.
To properly weigh accuracy against fairness, lenders and insurers need a framework for determining whether a fairer model that appears accurate will actually perform within their risk tolerance.
Today, we’re pleased to share our thoughts about how to choose a less discriminatory algorithm (LDA).
We call it the FairPlay Framework for Picking a Fairer Model.
It turns out that identifying LDAs that are actually viable is harder than it seems.
It also turns out that picking the wrong LDA can be costly.
To grapple with the potential pitfalls of choosing the wrong LDA, FairPlay has developed a method for evaluating whether seemingly accurate and fairer models will perform as desired in the real world.
FairPlay accomplishes this by simulating LDA candidate performance under many different operating scenarios—because the economy changes, business policies change, and applicant mixtures change.
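To make the idea of scenario simulation concrete, the sketch below shows one simplified way to resample an applicant pool to reflect a hypothetical scenario and then score an LDA candidate at a target approval rate. This is an illustration only, not our production methodology or API; the function names, the weighting scheme, and the "expected_default" column are placeholders.

```python
import numpy as np
import pandas as pd

def simulate_scenario(applicants: pd.DataFrame, mix_weights: np.ndarray, seed: int = 0) -> pd.DataFrame:
    """Resample the applicant pool so its composition reflects a hypothetical
    scenario, e.g. a downturn that shifts the mix toward thinner credit files."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(applicants), size=len(applicants), replace=True,
                     p=mix_weights / mix_weights.sum())
    return applicants.iloc[idx].reset_index(drop=True)

def score_under_scenario(score_fn, applicants: pd.DataFrame, target_approval_rate: float) -> dict:
    """Score one LDA candidate on a scenario and apply a cutoff at the target approval rate."""
    scores = np.asarray(score_fn(applicants))
    cutoff = np.quantile(scores, 1.0 - target_approval_rate)
    approved = scores >= cutoff  # boolean mask of approved applicants
    return {
        "realized_approval_rate": float(approved.mean()),
        "approved_expected_default": float(applicants.loc[approved, "expected_default"].mean()),
    }
```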
Our Framework judges LDA viability by assessing:
▶ The less discriminatory algorithm’s profitability across a range of risk tolerances;
▶ Its fairness outcomes at various approval rates; and
▶ An LDA’s profitability and fairness outcomes under other scenarios and market conditions that might plausibly occur, such as changes to marketing programs (see the sketch after this list).
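The sketch below illustrates the kind of checks behind the first two bullets: estimating a simple per-loan profit proxy and an adverse impact ratio (protected-group approval rate divided by control-group approval rate) at several approval-rate cutoffs. Again, this is a simplified illustration rather than our production analysis; the column names, margin, and loss-given-default figures are placeholder assumptions.

```python
import numpy as np
import pandas as pd

def profit_and_fairness_by_approval_rate(df: pd.DataFrame,
                                         approval_rates=(0.2, 0.4, 0.6, 0.8),
                                         margin: float = 0.15,
                                         loss_given_default: float = 0.90) -> pd.DataFrame:
    """For each approval-rate cutoff, compute a per-loan profit proxy and the
    adverse impact ratio.

    Expects columns: 'score' (model score, higher = better), 'default' (0/1 realized
    outcome), and 'protected' (boolean protected-class flag, used for testing only).
    """
    rows = []
    protected = df["protected"].astype(bool)
    for rate in approval_rates:
        cutoff = df["score"].quantile(1.0 - rate)
        approved = df["score"] >= cutoff
        n_approved = int(approved.sum())
        n_defaults = int(df.loc[approved, "default"].sum())
        # Profit proxy: margin earned on approved loans that repay, minus losses on defaults.
        profit_per_loan = (margin * (n_approved - n_defaults)
                           - loss_given_default * n_defaults) / max(n_approved, 1)
        air = approved[protected].mean() / approved[~protected].mean()
        rows.append({"approval_rate": rate,
                     "profit_per_approved_loan": profit_per_loan,
                     "adverse_impact_ratio": air})
    return pd.DataFrame(rows)
```

Run against each simulated scenario, a table like this makes the tradeoff visible: a candidate that looks fair only at today's approval rate will show a degrading adverse impact ratio as cutoffs tighten.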
Testing an LDA candidate in these ways helps ensure that its fairness and business outcomes will withstand changes in borrower populations and business policies, including when underwriting standards are tightened.
To learn more about identifying and validating Less Discriminatory Algorithms, check out our FairPlay LDA Explainer video and download our accompanying E-book, which provides a detailed guide to picking a fairer model.