Does Your AI Model Know What It’s Talking About? Here’s One Way To Find Out.

by Melissa Lafsky
April 6, 2021

In Season 4 of the show Silicon Valley, Jian-Yang creates an app called SeeFood that uses an AI algorithm to identify any food it sees—but since the algorithm has only been trained on images of hot dogs, every food winds up being labeled “hot dog” or “not hot dog.”

While Jian-Yang’s creation may seem absurd, his app displays an intelligence that most AI models in use today do not: it only gives an answer it knows to be 100% accurate.

In real life, when you ask most machine learning algorithms a question, they are programmed to give you an answer, even when they are somewhat or entirely unqualified to do so. The data on which these models are trained may have nothing to do with the specific question being asked, but the model delivers an answer anyway — and as a result, that answer is often wrong. It’s as if SeeFood tried to identify every food based only on a knowledge of hot dogs.

This issue, known as “model overconfidence,” is a key reason why many AI deployments fail to meet their business objectives.
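To make the idea concrete, here is a minimal sketch, in Python, of one simple guard against overconfidence: check the model’s own probability scores and abstain whenever the top score falls below a threshold. This is an illustration, not the approach described in the article; the food classes, the synthetic data, and the 0.9 threshold are all invented here.

```python
# A minimal sketch (illustrative only): train a classifier on the classes
# it actually knows, then let it say "don't know" when its own probability
# scores are not decisive, instead of forcing a confident guess.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two food classes the model has actually seen.
hot_dogs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
pizzas = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([hot_dogs, pizzas])
y = np.array([0] * 100 + [1] * 100)  # 0 = hot dog, 1 = pizza

model = LogisticRegression().fit(X, y)

def predict_or_abstain(model, x, threshold=0.9):
    """Answer only when the model's top class probability clears the
    threshold; otherwise admit the input is outside what it knows."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    if probs.max() < threshold:
        return "don't know"
    return ["hot dog", "pizza"][int(probs.argmax())]

print(predict_or_abstain(model, np.array([0.1, -0.2])))  # near the hot-dog cluster -> "hot dog"
print(predict_or_abstain(model, np.array([1.5, 1.5])))   # between the clusters -> "don't know"
```

One caveat: a score threshold like this mostly catches ambiguous inputs near the decision boundary. An input far from everything the model has ever seen can still receive an extreme score, which is precisely the overconfidence problem described above, and handling that case well takes more deliberate techniques.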

