The age of artificial intelligence has been full of predictions of mass technology-driven unemployment. A widely cited 2013 study from Oxford University's Oxford Martin School posited that nearly half of U.S. employment at the time was “potentially automatable” over the next “decade or two.” A decade later, however, the U.S. had 17 million more jobs.
The advance of generative AI has unsurprisingly breathed new life into such alarmist projections. The IMF recently declared that 40% of jobs globally are “exposed”; Goldman Sachs put 300 million jobs at risk of being “lost or degraded”; and the Pew Research Center estimated that 19% of U.S. workers hold jobs in the category “most exposed to AI.”
Are we on the cusp of a global employment apocalypse? Anxieties about “technological unemployment,” as John Maynard Keynes dubbed it in 1930, go way back. In the 1960s those fears led the U.S. government to convene a National Commission on Technology, Automation, and Economic Progress, whose members included the eminent economist Robert Solow. Contrary to much fearmongering at the dawn of the computer age, the Commission concluded that “[t]echnology eliminates jobs, but not work.” So far, the facts have corroborated that thesis: The U.S. economy had 2.7 times as many jobs in 2024 as it did in 1964, with higher labor force participation (62.6% vs. 58.7%), lower unemployment (4% vs. 5.2%), and three times more output per hour worked. Over the past six decades, technological change didn’t eliminate work; it changed it.
But will this also hold in the new age of AI? Nobody knows for certain. There are still too many unknowns to take forecasts of employment doom too seriously. Dissecting today’s “employment exposure” studies reveals just how large those uncertainties are: the pace, extent, and depth of business adoption; the effect of higher labor productivity on the demand for services; and the timing and geographic distribution of potential job losses.