BCG Henderson Institute


Two years ago, before Apple’s launch of the Apple Card, there was much discussion about how the no-fee credit card would enable the tech giant to storm into the financial services business. When people discuss the Apple Card today, however, it is in part because of glitches in the artificial intelligence algorithms that Apple uses to determine would-be cardholders’ credit limits.

In November 2019, a Dane tweeted that although he and his wife had applied for the Apple Card with the same financial information, he was awarded a credit limit 20 times higher than his wife’s, even though, as he admitted, she had the higher credit score. Adding fuel to the fire, Apple’s cofounder, Steve Wozniak, claimed that the same thing had happened to his wife. The card had been launched in August 2019, and there were an estimated 3.1 million Apple Card holders in the U.S. at the beginning of 2020, so the issue may well have affected tens of thousands of women. A spate of complaints prompted an investigation by the New York Department of Financial Services, which recently cleared Apple of gender-based discrimination, but only after the digital giant quietly raised wives’ credit limits to match their husbands’.

As business sets about deploying A.I. at scale, the focus is increasingly shifting from using the technology to create and capture value to managing the inherent risks that A.I.-based systems entail. Watchdog bodies such as the Artificial Intelligence Incident Database have already documented hundreds of A.I.-related complaints, ranging from the questionable scoring of students’ exams to the inappropriate use of algorithms in recruiting and the differential treatment of patients by health care systems. As a result, companies will soon have to comply with regulations in several countries intended to ensure that A.I.-based systems are trustworthy, safe, robust, and fair. Once again, the European Union is leading the way, first with the framework outlined last year in its White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, and then with its April 2021 proposal for a legal framework.

Companies must learn to tackle A.I. risks not only because doing so will be a regulatory requirement, but because stakeholders will expect it of them. According to a recent Economist Intelligence Unit study, as many as 60% of executives reported that their organizations decided against working with A.I. service providers last year because of responsibility-related concerns. To manage A.I. effectively, business must grasp the implications of regulations and social expectations for its use while keeping in mind the technology’s unique characteristics, which we’ve discussed at length in our recent Harvard Business Review article. Indeed, figuring out how to balance the rewards of using A.I. with the risks could well prove to be a new, and sustainable, source of competitive advantage.
