BCG Henderson Institute

Who Watches the Algorithms? Technology and Strategies for Trust

As the exuberance around a new industry and its seemingly unlimited potential meets the inevitable reality of side effects, it is time for the tech industry to complement its focus on functionality with an equal emphasis on governance and trust.

While the particulars of the recent fatal accident involving an Uber self-driving vehicle are unique, a fatality was inevitable on some timescale. It raises an unavoidable question: who watches the algorithms?

The power of artificial intelligence is that it can undertake complex, valuable tasks that previously relied on human intelligence: reading X-rays, translating languages, adjusting flight paths, driving cars. As the number of tasks where algorithms outperform humans grows, the bottleneck to wider application shifts from “can AI do it?” to “do we trust AI to do it?”, especially where the downside of failure is high.

Artificial intelligence is the perfect example of a “trust good”: one whose users cannot reasonably be expected to fully understand the product. At one time, many people could fix a car themselves. Now only mechanics understand cars, and we have to trust them. In the future, even experts may not be able to understand exactly how the AI at the core of a self-driving car or aeroplane works in a particular case. While this may seem unnerving, it is not as unfamiliar as it sounds. Algorithms identify and exploit subtle, complex patterns in data, and we see only the decisions they “serve up”. This is, in fact, exactly how we relate to our own brains every day. Our neural networks are opaque to us, yet we trust them, even though we would be hard-pressed to explain exactly why we decided to wait rather than act in a particular situation.

When the complexity of a product is beyond us, we often look at the incentives in the “game” surrounding it. For example, we might check whether the car repair shop has a fixed price tariff to know whether we can trust that we have not been overcharged. The right incentive structure might help us manage AI: if providers were remunerated according to the success or failure rate of the algorithm, it would be reasonable to assume they were sincere in their efforts and representations. Nevertheless, incentives and sanctions will not solve the trust challenge entirely. Algorithms change as they learn from data, in subtle ways that are not predictable to programmers, and it is difficult to disentangle the roles of data and algorithm. Even if providers are incentivized to supply best efforts, they are unlikely to be able to foresee the dynamics, and failures, of AI-driven technology. And interestingly, even if technical progress enables experts to interpret how algorithms reach decisions, we still have to decide how we know whether we can trust the expert.

Sports generate unforeseen situations in which questions arise about whether the rules have been followed. In such cases we appeal to an impartial referee. In economic decision making too, we routinely choose whether to trust complex goods based on the presence of a credible referee. There are a number of ways in which this can happen. We can have a regulator who oversees compliance with rules designed to protect our interests. We can have compulsory testing and licensing before deployment. We can empower private sector watchdogs, like ratings agencies and auditors. We can have independent experts assess the product. We can ensure a transparent and competitive marketplace such that, over time, natural selection occurs and product performance improves. And technology can be part of the solution too, by enhancing the possibilities for data logging, simulation, transparency, and diagnosis. Collectively, we can refer to these choices as a “referee strategy”.

In recent decades there has been a push toward minimizing regulation as a presumed impediment to the efficient operation of markets. But the cost/benefit balance of regulation can be favorable as well as unfavorable. Especially for new, untried products, which have high costs of failure and are intrinsically hard to understand, regulations and other forms of refereeing are essential to the orderly development of markets and industries. Rules can create the transparency, credibility, stability, and trust that allow new industries to flourish. They can also make an industry more economically attractive, not only by virtue of its growth and size, but also by discouraging “bottom feeders” who compete by relaxing standards, and by raising margins through barriers to entry. Imagine what the pharmaceutical industry would look like if we were still selling “snake oil” based on claims that had not been substantiated in independent, double-blind, placebo-controlled trials. Would flying be the safest mode of travel without FAA regulations and strict, independent crash investigations with their associated binding directives?

As the exuberance around a new industry and its seemingly unlimited potential meets the inevitable reality of side effects, unintended uses, downsides, accidents and abuses, it is time for the tech industry to complement its focus on functionality with an equal emphasis on governance, by asking the question, “What is our referee strategy?”
