In the short time since artificial intelligence hit the mainstream, its power to do the previously unimaginable is already clear. But along with that staggering potential comes the possibility of AIs being unpredictable, offensive, even dangerous. That possibility prompted Google CEO Sundar Pichai to tell employees that developing AI responsibly was a top company priority in 2024. Already we’ve seen tech giants like Meta, Apple, and Microsoft sign on to a U.S. government-led effort to advance responsible AI practices. The U.K. is also investing in creating tools to regulate AI—and so are many others, from the European Union to the World Health Organization and beyond.
This increased focus on AI's unique potential to behave in unexpected ways is already shaping how AI products are perceived, marketed, and adopted. Firms no longer tout their products solely by traditional measures of business success, such as speed, scalability, and accuracy. Increasingly, they speak about their products in terms of how those products behave, which ultimately reflects the values they embody. A selling point for products ranging from self-driving cars to smart home appliances is now how well they uphold specific values, such as safety, dignity, fairness, harmlessness, and helpfulness.
In fact, as AI becomes embedded in more aspects of daily life, the values on which its decisions and behaviors are based emerge as critical product features. As a result, ensuring that AI outcomes reflect certain values at every stage of use is not a cosmetic concern for companies: the value alignment that drives the behavior of AI products will significantly affect market acceptance, then market share, and ultimately company survival. Instilling the right values and exhibiting the right behaviors will increasingly become a source of differentiation and competitive advantage.
But how do companies go about updating their AI development to make sure their products and services behave as their creators intend? To help, we have divided the most important transformation challenges into four categories, building on our recent work in Harvard Business Review. We also provide an overview of the frameworks, practices, and tools that executives can draw on to answer the question: How do you get your AI values right?