BCG Henderson Institute


In 2019, Apple’s credit card business came under fire for offering a woman one twentieth the credit limit offered to her husband. When she complained, Apple representatives reportedly told her, “I don’t know why, but I swear we’re not discriminating. It’s just the algorithm.”

Today, more and more decisions are made by opaque, unexplainable algorithms like this, often with similarly problematic results. From credit approvals and customized product or promotion recommendations to résumé screening and fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are acted upon with little to no insight into how they are made.

This approach creates real risk. Research has shown that a lack of explainability is one of executives' most common concerns related to AI, and that it substantially undermines users' trust in and willingness to use AI products, not to mention their safety.

And yet, despite these downsides, many organizations continue to invest in such systems because decision-makers assume that unexplainable algorithms are intrinsically superior to simpler, explainable ones. This assumption is known as the accuracy-explainability tradeoff: tech leaders have historically believed that the better a human can understand an algorithm, the less accurate it will be.
