BCG Henderson Institute

How to Unlock Hidden Value from Generative AI

Prompt engineering is only the beginning—it’s time to start experimenting.

The release of ChatGPT in late 2022 is analogous to Mosaic’s launch nearly three decades prior. In 1993, the world’s first free web browser ushered in the internet revolution. New online distribution channels dramatically altered supply and monetization models across every industry. For example, digital downloads and subscription streaming services became the norm in media industries; companies such as Blockbuster and Tower Records that clung to old business models quickly found themselves in dire straits. As our colleagues Philip Evans and Thomas Wurster wrote, “The most focused of business models…can be blown to bits by new information technology.”

Today, ChatGPT is just the tip of the iceberg for a similarly dramatic revolution. Yet despite this promise, companies are optimizing the limited existing Generative AI functionality instead of pursuing longer-term innovation. For example, they are investing in prompt engineering techniques, so much so that marketplaces like PromptBase have sprung up to commoditize these activities. Such investments may yield short-term gains, for example, in productivity. But they fail to embrace the current, accelerated rate of change, which is beginning to resemble a Moore's Law for language models.

Moreover, while we know not everything will change, the internet revolution taught us that how the world will transform is hard to predict. The market landscape for foundation models (for example, GPT-3.5, which underpins applications like ChatGPT) and providers (such as OpenAI and Stability AI) is likely to change dramatically over the next 5 to 10 years. The speed of change in today's market already illustrates this, from the swift release of Google's Bard to the significant investment flowing into Generative AI startups. And we don't yet know the broad potential of Generative AI use cases or how they will affect business. Companies should therefore experiment with the various ways each Generative AI approach can be used across their organization and in their offerings. These experiments should focus on discovering promising opportunities, with the objective of revisiting business models in the future.

In our research and client conversations, we are observing early signs of purposeful, corporate experimentation. For example, some companies have assigned a “special project” status to the exploration of Generative AI systems with an innovation sandbox as a first step. We also see companies creating cross-functional teams that span design, marketing, business, and technical functions to spark the collision of ideas. The most effective experimentation explores both business operations and new technologies.

How will your workforce be affected?

Although companies should not focus their experiments exclusively on productivity, they should conduct small-scale experiments within their own organizations to identify how their unique cultures will be affected and where productivity gains can be maximized. These experiments should not be limited to augmenting tasks with AI; they should also incorporate alternative Human+AI collaboration modes to identify which mode best suits each task or function. Just as traditional AI reshaped teaming modes and interactions, Generative AI is likely to have an entirely new impact on team dynamics, requiring innovative solutions to create positive and productive work environments.

For example, GitHub conducted an experiment to assess how its Copilot tool was affecting developer productivity and happiness. It found that 88% of developers felt more productive when using it, and 60 to 70% felt more fulfilled in their jobs. Some companies are forging their own path, experimenting with custom-built solutions to improve productivity. Google designed its own IDE that predicts lines of code as developers type, reducing iteration time by 6%.

How can you create custom solutions for your company?

Some companies are developing custom applications by fine-tuning foundation models with proprietary data. For example, consulting firms are experimenting with their proprietary data to centralize and democratize their extensive but distributed knowledge bases. This both ensures that knowledge remains in the company despite turnover and allows employees to access expert information without overwhelming specific individuals with requests.

The concept of storing knowledge in an interactive chatbot like ChatGPT will also have a dramatic impact on industries such as mining, in which many experts are retiring and which, at the same time, is struggling to attract new engineers. Jasper.ai is another example of a specialized application, built for marketing content generation by fine-tuning with proprietary marketing data. Marketing and sales teams can use these tools to quickly create high-quality content. Because Microsoft plans to enable customization of ChatGPT, such applications will become easier to create. But companies also need to look beyond ChatGPT; many other LLMs, such as Anthropic's Claude, BERT, and Macaw, also have useful capabilities.
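As an illustration of what preparing proprietary knowledge for fine-tuning can involve, the sketch below converts hypothetical expert Q&A pairs (the kind of retiring-expert knowledge described above) into the prompt/completion JSONL layout commonly used for fine-tuning. The records, field names, and schema here are illustrative assumptions, not any specific vendor's required format.

```python
import json

# Hypothetical internal knowledge snippets (expert Q&A pairs captured
# before specialists retire); in practice these would be mined from
# wikis, reports, or interview transcripts.
knowledge_base = [
    {"question": "What ore grade justifies reopening the north pit?",
     "answer": "Historically, grades above 1.2 g/t have been economic."},
    {"question": "Which assay method do we use for copper samples?",
     "answer": "Four-acid digestion with an ICP-OES finish."},
]

def to_finetune_records(kb):
    """Convert Q&A pairs into prompt/completion records, a layout
    commonly accepted by fine-tuning pipelines."""
    return [
        {"prompt": item["question"], "completion": item["answer"]}
        for item in kb
    ]

records = to_finetune_records(knowledge_base)

# Write one JSON object per line (JSONL), a typical upload format.
with open("finetune_data.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

In a real pipeline, the schema would be adjusted to the chosen provider's fine-tuning specification, and the data cleaned and de-duplicated before upload.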

Beyond existing proprietary data, companies should investigate ways to digitize specialized data for use with foundation models. In December 2022, NVIDIA unveiled BioNeMo, a foundation model framework designed to support molecular data representations, chemical structures, and more. Use of BioNeMo is expected to significantly shorten the drug discovery timeline. In fact, in January 2023, Profluent demonstrated how it used a foundation model trained on 280 million protein sequences to create previously undiscovered enzymes, some of which had antimicrobial properties.

What creative approaches can you use to discover new capabilities?

Given that Generative AI is still in its infancy, companies must think outside the box to identify unique, or hidden, applications of Generative AI. Hidden capabilities (also known as capability overhang) can pose a serious threat if the right controls are not in place to manage unexpected usage effectively. However, companies that manage to identify unique capabilities, with the appropriate controls, will have a significant competitive advantage in the market.

One way to explore unique capabilities is to transform available data into new modalities (for example, converting numerical data to visual data) to take advantage of the many existing pre-trained models. This enables companies to harness the existing capabilities of Generative AI applications and generate novel outputs without altering the underlying system. This method unlocks an entire world of possibilities simply by fine-tuning the foundation model with the transformed data.

For example, Riffusion took raw music data, transformed it into spectrograms (visual representations of musical features), and used the spectrograms to fine-tune Stable Diffusion. The result is a system that accepts text inputs (“Sunrise DJ set to hard synth solo”) and generates novel music compositions. Another example, in which people chain together existing Generative AI systems and transform data to offer novel solutions, is Tome, which automatically generates slides from natural language inputs. But these examples are only the beginning. Today, transformed data is used outside of Generative AI for applications such as acoustic imaging to identify air leaks in compressed air systems. It is easy to imagine such applications becoming significantly more accurate, and even enabling real-time detection, with Generative AI.
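A minimal sketch of this kind of modality shift, using plain NumPy and a synthetic tone in place of real audio: the code slices a 1-D waveform into overlapping frames and stacks the FFT magnitudes into a 2-D spectrogram "image" that a vision-oriented model could, in principle, be fine-tuned on. Riffusion's actual pipeline is more elaborate; this only illustrates the transformation step, and the frame sizes are arbitrary choices.

```python
import numpy as np

def to_spectrogram(signal, frame_len=256, hop=128):
    """Slice a 1-D signal into overlapping frames and take the FFT
    magnitude of each, yielding a 2-D time-frequency image."""
    frames = [
        signal[start:start + frame_len]
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    window = np.hanning(frame_len)  # taper frames to reduce spectral leakage
    spectra = [np.abs(np.fft.rfft(f * window)) for f in frames]
    return np.array(spectra).T  # rows = frequency bins, cols = time steps

# Synthetic stand-in for sensor or audio data: one second of a 440 Hz tone.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)

image = to_spectrogram(tone)

# Sanity check: the dominant frequency bin should sit near 440 Hz.
peak_bin = image.mean(axis=1).argmax()
peak_hz = peak_bin * rate / 256
```

The resulting array can be rendered or saved as an image, at which point it becomes ordinary training data for an image-generation model, exactly the trick the Riffusion example exploits.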

Highlighting another unique capability of Generative AI, scientists in material science have gone beyond traditional design approaches (searching through known material structures to find the optimal properties for a given use case) and implemented the concept of inverse design. Here, the desired properties are specified first, and a foundation model is then used to generate chemical compounds that would have those properties; in other words, inverse design is property-to-structure instead of the traditional structure-to-property. While this approach has primarily been used in material science to date, with applications in aerospace, medicine, electronics, and more, other industries might learn from the idea of inverting design spaces by questioning assumptions about what can be designed versus determined.
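To make the property-to-structure idea concrete, here is a deliberately toy generate-and-filter loop: a stand-in "property predictor" replaces the real foundation model, and candidate "structures" are just feature vectors. Every function, threshold, and value below is a hypothetical illustration of inverting the design direction, not a chemistry workflow.

```python
import random

def predicted_property(structure):
    """Toy surrogate for a property predictor: the 'property' of a
    candidate structure is simply the mean of its features."""
    return sum(structure) / len(structure)

def inverse_design(target, tol=0.05, attempts=10_000, seed=0):
    """Fix the desired property first, then generate candidate
    structures and keep only those predicted to match it."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = []
    for _ in range(attempts):
        candidate = [rng.random() for _ in range(4)]
        if abs(predicted_property(candidate) - target) < tol:
            hits.append(candidate)
    return hits

matches = inverse_design(target=0.8)
```

The point is the inversion: the target property is fixed up front and structures are generated to satisfy it, rather than enumerating known structures and reading off their properties.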

The examples above allow companies to benefit directly from the financial, engineering, and R&D investments that went into training foundation models, such as those underpinning apps like DALL-E 2, ChatGPT, Stable Diffusion, and Midjourney. In so doing, companies can remain competitive without making significant investments in new models, opening up fundamentally new uses and value ahead of the competition.

Executives should work with their data engineers to identify creative ways to discover new solutions and assess which solutions are likely to bring the most value to the company. Start by asking:

  • Where do we have underutilized data that is critical for our business functions?
  • Can this data be easily used to fine-tune an existing foundation model?
  • What new modalities can we transform this data into to leverage existing systems?
  • Are there any design assumptions that we can invert?
  • What outputs do we expect, and where in our organization could they be utilized?

In a world where automated systems are increasingly used to provide basic capabilities, company leaders should be innovative and strategic in their use of Generative AI. They also need to keep in mind that the ultimate outcome of experimentation is business model innovation. In the wake of the internet revolution, the once-familiar trade-off between richness and reach was eliminated, destroying traditional business models. In the not-so-distant future, Generative AI will render today's commonly accepted business trade-offs obsolete. Competitive advantage awaits those that go beyond simple tweaking, and that identify business models not previously possible.
