For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other sensitive information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud.
Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images—most notably, the European Union’s 2018 General Data Protection Regulation (GDPR). Of course, those measures didn’t end the debate over companies’ use of personal data. Some argue that curbing that use will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with ready, lightly regulated access to personal information of all sorts. (Recently, however, the Chinese government has started to limit those firms’ freedom, as demonstrated by the large fines imposed on Alibaba.) Others note ample evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon.
But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how that data is used by the software—particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan. The EU, which is again leading the way (in its 2020 white paper “On Artificial Intelligence—A European Approach to Excellence and Trust” and its 2021 proposal for an AI legal framework), considers regulation essential to the development of AI tools that consumers can trust.
What will all this mean for companies? We’ve been researching how to regulate AI algorithms and how to implement AI systems based on the key principles underlying the proposed regulatory frameworks, and we’ve been helping companies across industries launch and scale up AI-driven initiatives. In the following pages we draw on this work and that of other researchers to explore the three main challenges business leaders face as they integrate AI into their processes and decision-making while ensuring that it is safe and trustworthy for customers. We also present a framework for guiding executives through those tasks, drawing in part on concepts from strategic risk management.