In the spring of 2020, when the COVID-19 pandemic erupted, policymakers hailed contact-tracing apps as one of the most promising ways digital technologies could help control the disease’s spread. Using apps and A.I.-powered platforms, governments could monitor people’s diagnoses and, later, their test results and locations, partly through self-reporting; alert the infected as well as those with whom they had come into contact; and develop disease-spread scenarios to support decision-making. Apple and Google even announced a historic joint effort to develop technology that health authorities could use to build the apps “with user privacy and security central to the design.”

However, contact-tracing apps have enjoyed, at best, mixed success worldwide, and they have been a failure in the U.S. They haven’t helped much for one key reason: People don’t trust companies and governments to collect, store, and analyze personal data, especially about their health and movements. Although the world’s digital giants developed the apps responsibly, and the technology works as intended, the apps didn’t catch on because society wasn’t convinced that the benefits of using them outweighed the costs, even in pandemic times.

You don’t need a data-driven algorithm to conclude that A.I. generates as much fear as it does hope today. Despite everyday applications such as Siri and Alexa, most people, individually and collectively, are still worried about how businesses will use the technology. “Mark my words, A.I. is far more dangerous than nukes,” Tesla and SpaceX founder Elon Musk declared four years ago.
