I was recently asked on a podcast: What AI risks keep you up at night?
My answer: how much time have you got?
I keep an eye on the AI incident tracker, a database of AIs gone off the rails—it’s enough to give anyone trouble sleeping.
As you read this, AI harms are befalling people.
These include:
- Facial recognition systems incorrectly identifying shoppers as shoplifters and students taking exams as cheaters;
- Deepfakes depicting sexually explicit images of minors;
- Political dirty tricksters using AI to spread misinformation.
But what truly keeps me up at night are the insidious, under-the-radar AI harms that profoundly affect people's lives without ever drawing attention.
For example:
- In credit scoring: AI algorithms can perpetuate or exacerbate existing biases in financial data, leading to unfairly low credit scores for certain groups of people, particularly marginalized communities.
- In tenant screening: AI systems used by landlords and property management companies might unfairly disqualify potential tenants, affecting their ability to secure housing.
- In employment: AI-driven recruiting tools can inadvertently filter out qualified candidates, costing many people job opportunities.
- In healthcare: AI applications might not consider the nuances of diverse patient populations, potentially leading to misdiagnoses and unequal access to treatment.
AI is a mega-trend that's here to stay, and it has the potential to do amazing things.
But there are “quiet” AI harms happening all around us today.
While public discourse often focuses on big existential risks or widespread job displacement, we have to make sure these doomsday scenarios don't distract us from the immediate, real-world impacts of AI shaping people's lives right now, often disproportionately at the expense of marginalized groups.