Does Your AI Model Know What It’s Talking About? Here’s One Way To Find Out.

In Season 4 of the show Silicon Valley, Jian-Yang creates an app called SeeFood that uses an AI algorithm to identify any food it sees—but since the algorithm has only been trained on images of hot dogs, every food winds up being labeled “hot dog” or “not hot dog.”

While Jian-Yang’s creation may seem absurd, his app in fact displays a kind of intelligence that most AI models in use today lack: it only gives answers it is actually qualified to give. Anything it wasn’t trained to recognize is simply “not hot dog.”

In real life, when you ask most machine learning algorithms a question, they are programmed to give you an answer, even when they are only partly qualified to do so, or not qualified at all. The data on which these models were trained may have nothing to do with the question being asked, but the model delivers an answer anyway, and as a result that answer is often wrong. It’s as if SeeFood tried to identify every food based solely on its knowledge of hot dogs.
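To see the problem in miniature, here is a toy sketch in Python, built with scikit-learn on made-up two-dimensional “features” (none of this code comes from a real food classifier). A model trained on two tight clusters of data will still hand back a supremely confident label for an input that looks nothing like anything it was trained on:

```python
# A toy sketch (not from this article): a classifier trained on two tight
# clusters of synthetic 2-D points. Features and labels are made up;
# think of class 0 as "hot dog" and class 1 as "not hot dog".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)),   # foods the model has seen...
                     rng.normal(5, 1, (50, 2))])  # ...and that is all it has seen
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# A point nowhere near anything in the training data.
x_unseen = np.array([[50.0, -40.0]])

# The model answers anyway, and with near-total confidence.
print(model.predict(x_unseen))        # a hard label, e.g. [1]
print(model.predict_proba(x_unseen))  # probabilities pinned near 0 and 1
```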

This issue, known as “model overconfidence,” is a key reason why many AI deployments fail to meet their business objectives.
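One common guardrail, sketched below purely as an illustration (and not necessarily the approach this article goes on to describe), is to check how close a new input sits to the training data before trusting the model’s answer, and to abstain, SeeFood-style, when it doesn’t:

```python
# A minimal sketch of one possible guardrail (an illustration only):
# before trusting a prediction, check whether the input resembles anything
# in the training data, and abstain when it does not. The distance cutoff
# here is made up for this toy example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)
familiarity = NearestNeighbors(n_neighbors=1).fit(X_train)

def predict_or_abstain(x, max_distance=3.0):
    """Answer only when x sits near the training data; otherwise abstain.

    max_distance is an arbitrary cutoff for this toy example; in practice
    it would be tuned on held-out data.
    """
    dist, _ = familiarity.kneighbors(x)
    if dist[0, 0] > max_distance:
        return None  # the honest "not hot dog" move: admit you don't know
    return int(model.predict(x)[0])

print(predict_or_abstain(np.array([[0.5, 0.5]])))     # in-distribution: 0
print(predict_or_abstain(np.array([[50.0, -40.0]])))  # far away: None
```

Note that simply thresholding the model’s own predicted probabilities would not help here: as the first sketch shows, those probabilities can be pinned near certainty precisely where the model knows least.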

