A simple idea with big consequences
To understand this pattern, researchers built a model in which some companies have a small but meaningful chance of a transformative leap in productivity, a “jump.” Investing in new technology, research, or brand capacity raises the odds of such breakthroughs. In the data, a typical firm has a 1.6% annual probability of a productivity jump, but for heavy-investing firms with lower current profitability, that probability rises to about 4% — roughly two and a half times higher.
When researchers simulated an economy in which firms ignore this possibility and invest solely based on current returns, overall investment looks neater and more balanced, but aggregate productivity declines.
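The intuition behind this comparison can be sketched with a toy Monte Carlo simulation. Only the two jump probabilities (1.6% and 4%) come from the article; everything else — the jump multiplier, the annual investment drag, the horizon, and the firm count — is an illustrative assumption, not a result from the study:

```python
import random

random.seed(0)

# Illustrative parameters (assumed, not from the study).
N_FIRMS = 1000
YEARS = 20
JUMP_GAIN = 2.0     # assumed productivity multiplier when a jump occurs
INVEST_COST = 0.01  # assumed annual output drag from heavy investment

def simulate(p_jump, cost=0.0):
    """Average firm productivity after YEARS, given an annual jump probability."""
    total = 0.0
    for _ in range(N_FIRMS):
        prod = 1.0
        for _ in range(YEARS):
            prod -= cost * prod            # heavy investors pay a cost each year
            if random.random() < p_jump:   # rare breakthrough
                prod *= JUMP_GAIN
        total += prod
    return total / N_FIRMS

# Firms investing only on current returns: baseline 1.6% jump odds, no drag.
cautious = simulate(0.016)
# Heavy, risky investors: ~4% jump odds, but they bear the investment cost.
aggressive = simulate(0.04, INVEST_COST)

print(f"cautious economy:   {cautious:.2f}")
print(f"aggressive economy: {aggressive:.2f}")
```

Even though the aggressive firms look less profitable year to year (they pay the investment drag), the higher chance of rare jumps leaves them with higher average productivity over the horizon — a stylized version of the trade-off the study describes.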
The data also show that those firms experiencing large productivity jumps tend to peak during major technological transitions — most notably in the late 1990s during the internet boom, and again in recent years as digital and AI technologies have surged. These waves of heavy, risky investment often precede large gains in productivity.
What this means for the AI boom
This framework helps explain today’s AI investment wave. The scale of spending is enormous, and the payoffs are uncertain. Many companies experimenting with AI will never see a direct return. But a few will likely make breakthroughs in the form of new platforms, tools, or business models that transform entire industries. From an economy-wide perspective, that’s how technological revolutions unfold.
The study suggests the current divide — some firms spending aggressively on AI while others hang back — is not necessarily a sign of irrational exuberance. It’s part of the natural process of innovation. When a new technology has the potential to reshape production, broad and uneven investment is not a bug; it’s a feature.
Researchers argue that policymakers and investors should be cautious about declaring inefficiency too soon. Redirecting capital away from seemingly low-productivity firms might inadvertently choke off the experiments that lead to future breakthroughs. The lesson from history, reinforced by the study, is that some degree of “misallocation” is not only inevitable but essential for technological advancement.
The bigger picture
Over the past half-century, U.S. economic growth has often been preceded by waves of seemingly excessive investment: the computer revolution of the 1980s, the internet build-out of the 1990s, and the cloud-computing surge of the 2010s. Each wave produced failures and inefficiencies, but also created the infrastructure and knowledge base for the next era of productivity. AI appears to be following the same pattern.
The study’s takeaway is simple: An efficient economy must tolerate a little inefficiency in the short run. Firms that appear unproductive today may be planting the seeds of tomorrow’s breakthroughs.