The AI Hype Cycle: Boom, Bust, or Breakthrough?
AI’s $1 Trillion Gold Rush: Lessons from Past Tech Booms
The world is currently experiencing a historic surge in AI investment. Tech giants and other major players plan to invest around $1 trillion in AI infrastructure over the next few years, spanning data centers, specialized chips, and even power grid upgrades [1]. This “AI boom” rivals the greatest investment manias of the past, from railroads and electrification to the dot-com and mobile eras.
Most of the money is currently flowing into chips and infrastructure. But without real productivity gains at the application layer, all that spending could go to waste—at least in the short term. Investors are pouring money into a wave of AI SaaS startups racing to turn raw model power into practical, user-facing products. Most of these startups don’t build their own models; instead, they build on top of major players like OpenAI and Anthropic. Their value lies in the interface—wrapping the tech in a user experience that solves problems, from enterprise workflows to consumer apps.
Every major tech company is investing staggering amounts of capital to expand its infrastructure. Together, Microsoft, Amazon, Alphabet, and Meta plan to spend around $320 billion on AI technology in 2025. That number could balloon to around $1.7 trillion by 2035 if current trends continue.
The United States is by far the leader in the AI race, at least from an investment perspective. In 2024, U.S. private investment in AI reached $109.1 billion—outpacing China by a factor of 12 and the U.K. by a factor of 24.
Data centers are becoming the new oil fields. Compute-specific chip spending alone could exceed $2 trillion by 2028, according to projections tied to Nvidia’s role in the hardware stack.
And yet, for all that capital, the monetization gap is staggering. An estimated 1.8 billion people use AI regularly, but only ~3% of users pay for premium AI services. Usage is ubiquitous. Revenue is not—or at least not yet.
Enterprise adoption tells a familiar story: AI is everywhere—78% of firms now deploy it in at least one function—but measurable returns lag. Only 1% of U.S. companies that invested in AI say they’ve scaled beyond pilots, while 43% remain in experimentation mode. The remaining ~56% are most likely stuck somewhere in between: isolated deployments, paused efforts, and partial rollouts. Most companies report cost savings of less than 10% and revenue gains of under 5%, as summarized by The Wall Street Journal, citing findings from McKinsey and Stanford.
Looking back in history
A recurring pattern shows up throughout history whenever a disruptive new technology enters the scene: massive investment creates a boom, followed by a bust, which in turn lays the foundation for transformative productivity gains years later.
Is this AI bonanza the largest tech investment cycle in history? And if it is, does it end in the usual bust—or does it finally deliver on the promise? To answer that, I looked back at some of the largest capex booms of the past and mapped them against what’s happening in AI right now.
We’ll also examine what happens if the AI boom is a bubble that pops – and how its fallout might compare to earlier episodes. As a designer, I’ll highlight how we can realize AI’s real value at the application layer—where design, user experience, and workflow integration will determine whether all this spending pays off in productivity gains.
Let’s start by looking backward: what can railway mania, the electricity boom, the dot-com bubble, and the mobile revolution teach us about today’s AI moment? Each of these past episodes saw exuberant investment chasing transformative technology – with strikingly similar dynamics. By examining their arcs, we can better understand how history rhymes.
Railroad Mania: Laying Tracks to Boom and Bust
The 19th-century railway booms were among the most dramatic investment frenzies ever. In the 1840s, Britain experienced the original “Railway Mania,” as investors poured money into railroad companies in the hope of revolutionizing transport.
At its feverish peak, railway investment in the U.K. reached 7% of GDP—fully half of all investment in the economy [3]. In one year (1845), Parliament authorized roughly 3,000 miles of new track—as much as in the previous 15 years combined [3].
Even luminaries like Charles Darwin and the Brontë sisters got swept up in railway stock speculation [3]. This railway boom had all the hallmarks of a classic bubble: irrational exuberance, a rush of novice investors, easy money policies (the Bank of England had cut rates, fueling cheap credit), and companies making inflated claims about growth [3]. Shares of railway firms roughly doubled in a few years [3]. Railroads were a breakthrough technology that ultimately transformed the economy, but investors got ahead of the returns. Money flooded in before the business models caught up, and the gap between promise and profit became unsustainable. By 1847, the bubble burst. Many rail companies collapsed under financial strain, and share prices plummeted. Investors who jumped in at the peak took the biggest losses.
Importantly, however, the story doesn’t end with the crash. The railway bust bankrupted firms and speculators, but it left behind a vast new infrastructure—thousands of miles of track crisscrossing the country. In the ensuing decades, that rail network proved enormously valuable, drastically lowering the cost of moving goods and people. The social utility of railroads endured, even if the initial investors didn’t reap the rewards.
Economic historians estimate the railway manias of the 1840s and 1860s in Britain involved capital investment equal to 15–20% of annual GDP—on the order of what would be $3–4 trillion in today’s U.S. economy [2]. This mania wasn’t just frothy stock valuations; it was real money spent laying track and building engines. The immediate returns on that spending were poor (most railway stocks crashed), but the long-term impact on commerce and productivity was immense.
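As a rough back-of-envelope on that scaling (the ~$21 trillion U.S. GDP baseline is my assumption; [2] supplies only the final range):

$$
0.15 \times \$21\,\text{T} \approx \$3.2\,\text{T}, \qquad 0.20 \times \$21\,\text{T} \approx \$4.2\,\text{T}
$$

which lands roughly on the quoted $3–4 trillion.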
Railways didn’t just lay track; they laid the groundwork for how we model future value. To attract capital, these companies pioneered techniques we now take for granted: cost forecasting, amortization, network effects, even early versions of discounted cash flow. In many ways, the spreadsheet mindset that drives today’s AI investment frenzy—projections, burn rates, infrastructure bets—traces back not to Silicon Valley but to Victorian Britain. The railway bubbles remain among the most closely studied episodes in financial history.
To recap: a new general-purpose technology (rail transport) attracted massive upfront investment, a speculative bubble inflated and then popped, and short-term investors lost substantial fortunes. Still, the economy eventually gained from the foundation those investors had built. We’ll see this cycle play out again and again through different investment booms.
Electrification: Powering the 1920s (and a Stock Bubble)
Fast-forward to the early 20th century, when another transformative technology—electric power—sparked a major investment boom. By the 1920s, electricity was spreading rapidly through households and industries. The number of electrified homes and factories was soaring, and new electric appliances (radios, refrigerators, irons, vacuum cleaners) were hitting the market [4]. Productivity was surging as manufacturers adopted electric motors and assembly lines. The U.S. economy’s total wealth more than doubled between 1920 and 1929 amid this technology-driven boom [4]. In the stock market, utility companies (which built power grids and electrical infrastructure) became the high-flyers of the late 1920s, along with the era’s technology firms: Radio Corporation of America (RCA), a leading radio and electronics maker, saw its stock value skyrocket on wildly optimistic expectations about future growth [4].

Contemporaries drew parallels between the 1920s “New Era” and the excitement over the internet many decades later. Indeed, “like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market,” writes economic historian Gene Smiley [4].
By 1929, speculation on electric utilities, radio, and other tech of the time had reached a frenzy. We all know how that ended: the 1929 stock market crash wiped out a massive chunk of market value, and the Great Depression followed. Inflated expectations of endless growth—driven by electrification and the rise of the automobile—fueled much of the 1920s stock boom. When reality caught up – growth slowed, interest rates rose, and profits didn’t justify prices – the bubble burst hard.
Yet, as with railroads, the underlying technology revolution continued its march after the crash. By 1930, nearly 90% of urban homes had access to electricity, and electric power fundamentally transformed American industry [4]. The productivity gains from electrification truly materialized in the 1930s and 1940s, even though investors in 1929 had overestimated how quickly those gains would translate into profits.
It took time for business processes to fully adapt to electricity – just as it took time for railroads to reorganize transportation or for the internet to reshape business – but once they did, the economy leaped forward. Unfortunately, early investors often don’t live to see the payoff. Technological revolutions tend to be over-financed in the short run and underappreciated in the long run.
A Cautionary Tale from the Internet Boom
No historical analogy looms larger over today’s AI frenzy than the dot-com boom and bust of the late 1990s. This boom was another “once-in-a-generation” tech wave: the advent of the consumer internet. As with railroads and electricity, investors poured in billions chasing what looked like a limitless opportunity.
In the five years leading up to 2000, venture capitalists and public markets threw money at anything with a “.com” in its name. The Nasdaq stock index rose nearly 400% between 1995 and its peak in March 2000, driven by soaring valuations of Internet companies. By late 1999, Morgan Stanley’s famous internet stock index had a combined market capitalization of $450 billion, while those 199 companies collectively had only about $21 billion in annual revenue and negative $6.2 billion in profits [5]. In other words, valuations were over 20 times revenues despite most firms losing money – a clear sign of “sky-high… valuations divorced from any sort of profitability or rationality” [5]. Companies with no viable business model were IPO’ing at billion-dollar valuations. It became a joke that if a startup were profitable, it would receive a lower valuation – investors only cared about growth and “changing the world,” not immediate economics [5].
In the spring of 2000, the bubble burst—suddenly and painfully. The Nasdaq plunged almost 80% from its peak (5,048 in March 2000) to its trough in October 2002 [5]. Hundreds of dot-com startups evaporated. By mid-2003, of the 7,000+ online companies launched in the late ’90s, about 4,800 had gone under or been acquired in distress [5]. Some of the biggest names fell spectacularly (eToys, Webvan, Pets.com), and even survivors like Amazon and Priceline saw their stock prices crater over 90% before eventually recovering years later [5]. All told, the crash obliterated an almost unfathomable amount of paper wealth. By one estimate, $5 trillion in stock market value vanished between 2000 and 2002, hitting the nest eggs of over 100 million investors [5]. It was a bloodbath for portfolios and caused a mild recession in 2001.
And yet – here comes the refrain – the infrastructure laid during the boom set the stage for long-term success. During the 1990s, telecom companies had wildly overbuilt fiber-optic networks and data centers, far ahead of demand. Much of that fiber sat “dark” after 2000 when firms like Global Crossing and WorldCom went bankrupt. However, in the 2000s, that excess capacity enabled the advent of the broadband and mobile internet era. Likewise, the dot-com bust cleared out weak players but did not halt the internet’s growth – the buildout was simply “too much, too soon.” By 2005, e-commerce and digital advertising were thriving again (eBay, Amazon, and Google had risen from the ashes). By the 2010s, the internet had “changed the world” as promised – just on a slower timetable and under different leadership than early investors expected.
The dot-com saga also included a parallel boom in telecommunications (the “telecoms bubble”) closely related to the internet buildout. Between 1996 and 2001, in anticipation of the impending explosion of internet traffic, telecom companies invested more than $500 billion (often funded by heavy debt) in laying fiber cables, installing network switches, and expanding wireless (mobile phone) networks [7]. Governments even got involved, auctioning off new 3G wireless spectrum licenses for eye-popping sums. In the U.K., a 2000 auction raised £22.5 billion for 3G licenses, and Germany’s 3G auction raised about £30 billion (roughly $45 billion) the same year [7]. These auctions – amounting to around 2% of GDP in those countries – left European telecom providers deeply indebted [7]. When the broader tech bubble burst, many telecom firms found themselves unable to support the debt loads. The result was a crash in telecom stocks and corporate bond defaults in the early 2000s. The industry’s collapse was so severe it’s been called “the biggest and fastest rise and fall in business history” [7].
WorldCom (a major long-distance carrier) went bankrupt in 2002 in one of the largest bankruptcies ever, and equipment makers like Lucent and Nortel imploded. The telecom sector lost about $700 billion in market value in two years and ended up with roughly $1 trillion in debt on the books – “much of which [would] never be repaid,” according to the FCC’s chairman at the time [7]. Bond investors ultimately recovered barely 20 cents on the dollar from those telecom debts [7].
Once again, however, the over-investment had an upside: the fiber-optic backbone built in the late ’90s meant that in the 2000s and 2010s, the marginal cost of transmitting data plummeted. All that infrastructure laid by now-defunct companies became the springboard for cheap global communications – enabling streaming video, social media, cloud computing, and mobile AI applications today. Similarly, the exorbitant 3G spectrum licenses eventually yielded the 3G/4G networks that put a smartphone in everyone’s pocket.
The mobile revolution
The mobile boom of the 2010s – the explosive growth of smartphones and mobile apps – rode on the rails (or rather, wireless towers) financed a decade earlier during the telecom bubble. By the mid-2010s, mobile computing had indeed become ubiquitous: there were more active mobile devices on the planet than humans, with mobile penetration exceeding 100% of the population in many developed countries [8]. By 2014, mobile devices and apps had overtaken desktops, making up the majority of internet usage in the United States (55% of usage vs 45% on PCs) [8].
The mobile revolution created trillion-dollar companies and entirely new industries (apps, ride-sharing, mobile gaming). But it required the heavy lifting of building networks and an ecosystem – much of which was done during the late ‘90s/early ‘00s bubble era, at significant cost to investors at the time.
The common thread in these historical cases is clear. Each transformative technology – railroads, electricity, the internet, mobile – underwent an initial investment surge that outpaced the technology’s short-term maturity or adoption. Investors, driven by fear of missing out and often cheap credit, financed rapid expansion and accepted lofty valuations on the promise of future gains. In the short run, reality couldn’t keep up: bubbles formed and then burst, causing economic pain and capital destruction (trillions lost in crashes, bankruptcies, layoffs, etc.).
However, the legacy of infrastructure and innovation laid down during the boom years became the platform for long-term growth and productivity improvements that eventually justified the technology. Essentially, society got the benefits, but not always the initial investors. As wry observers note, in gold rushes, it’s often the sellers of “picks and shovels” who profit first. Many early tech booms primarily enriched the suppliers (equipment makers, intermediaries, speculators) before the technology fully delivered value.
With this context, let’s turn back to the present: Where does the AI boom fit into this historical continuum?
The AI Spending Surge in Context: Biggest Boom Yet?
Since around 2023, we have entered what can only be described as an AI investment bonanza. The rise of powerful generative AI models and chatbots (like OpenAI’s GPT-4 and Google’s Bard) sparked fierce competition among tech giants and startups to pour resources into developing AI capabilities. Consider some numbers:
$1 Trillion and counting: Analysts estimate that cumulative AI-related capital expenditures by big tech companies and others will exceed $1 trillion in the coming years [1]. This amount includes expenditures on advanced chips (GPUs), expanded cloud data centers, and supporting infrastructure for training and running AI models. For comparison, this level of spending in absolute dollars appears to be unprecedented in the history of technology. During the dot-com era, annual tech capital expenditures were significant but not to this level. Even globally, $1T is a substantial chunk of change – though as a share of GDP, it may be on the order of ~1% per year for a few years, which, while huge, might not exceed the 15–20% of GDP frenzy of the 1840s rail mania [2]. In any case, AI is attracting massive investment. (A quick back-of-envelope on this GDP comparison appears just after this list.)
Venture capital flood: Venture funding for AI startups has exploded. In 2024, global VC investment in AI companies exceeded $100 billion, up ~80% from ~$55.6B in 2023 [9]. AI-related companies attracted about one-third of all venture funding worldwide in 2024, making AI the #1 space for venture investment [9]. Drilling down, the hottest subset is generative AI startups – those building AI models or AI-driven software. Generative AI funding surged from approximately $24 billion in 2023 to around $45 billion in 2024, nearly doubling in just one year [9]. Deal sizes have ballooned: late-stage generative AI rounds averaged over $300M in 2024 (versus ~$50M a year before) [9]. This is reminiscent of 1999’s dot-com VC frenzy, though arguably on an even larger scale when adjusted for inflation. By one count, the number of companies working on generative AI jumped from about 50,000 in late 2023 to over 67,000 by early 2025, and could approach 100,000 globally by the end of 2025.
Corporate & cloud capex: It’s not just startups. The “hyperscalers” – companies like Google, Amazon, and Microsoft – are drastically ramping up spending to accommodate AI. These firms are investing in everything from specialized AI chips (e.g., NVIDIA GPUs or in-house chips) to building new data centers to expanding network capacity. Microsoft’s CEO, for instance, has spoken of a massive spend to integrate AI across Azure cloud and Microsoft’s products. One analysis noted that hundreds of billions of dollars in annual capital expenditures by hyperscalers and even government programs (for example, national AI initiatives) are “fueling the euphoria” and essentially demonstrating faith that all this AI infrastructure will be put to good use [12]. It’s a classic build-it-and-they-will-come mentality—not unlike the overbuilding of fiber networks before demand materialized. Semiconductor firms are huge beneficiaries right now: NVIDIA, the leading maker of AI GPUs, saw its market value soar past $1 trillion in 2023 on surging demand. Its stock has whipsawed along the way: a sharp correction by mid-2024 [11], a rebound to a new all-time high, a ~40% drawdown, then yet another record, a reminder that even the picks-and-shovels sellers can experience whiplash. At the time of writing, NVIDIA is near an all-time high, with a market capitalization of $3.85 trillion.
Enterprise AI adoption and budgets: Even traditional companies (outside of tech) are reshuffling IT budgets to focus on AI. Surveys of CIOs show that while general IT spending is under pressure, budgets for AI projects are growing. In one late-2024 survey of 600 IT leaders, 83% expected to increase spending on AI in 2025 (most by double digits), and only 5% planned any pullback [12]. Another report found enterprise buyers spent $13.8 billion on generative AI in 2024, a 6x increase from the prior year [10]. Specifically, about $4.6 billion of that went to AI software applications for enterprises – an almost 8x jump from the $600 million the year before [10]. These are astonishing year-over-year growth rates in enterprise tech investment, signaling that even in a tight macro environment, companies are redirecting funds aggressively into AI. In 2024, 40% of enterprises' generative AI spending came from reallocating existing tech budgets (not just “innovation labs”) [10], indicating that AI is transitioning from a novel experiment to a core budget item.
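Here is the back-of-envelope arithmetic behind the “~1% of GDP per year” comparison above, assuming the ~$1 trillion is deployed over roughly four years against U.S. GDP of about $27 trillion (both baselines are my assumptions, not figures from [1] or [2]):

$$
\frac{\$1\,\text{T} \,/\, 4\ \text{years}}{\$27\,\text{T}} \approx 0.9\%\ \text{of GDP per year}
$$

A few years at that pace cumulates to roughly 3–4% of GDP, still well short of the 15–20% committed during the 1840s railway mania.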
AI is attracting capital at a rapid pace across startups, incumbents, and infrastructure providers. So, where are we in this cycle? Are we in a bubble? If so, what happens when it pops? Or, if not, is the spending justified?
Bubble, Bust, or New Paradigm? Where We Are in the AI Cycle
There is a lively debate among experts about whether the AI boom is ahead of itself (a bubble) or a rational response to a game-changing opportunity – essentially, how much “hype” vs “substance” we have today. It’s reminiscent of debates during past booms (“The internet will change everything” vs. “It’s all hype, look at the overvaluations!”). Let’s explore both angles.
On one side, skeptics argue that AI’s tangible benefits so far fall far short of the massive investment. A June 2024 analysis by Goldman Sachs titled “Gen AI: Too Much Spend, Too Little Benefit?” encapsulated this caution [1]. Thus far, the direct economic payoff from generative AI has been modest – mostly pilot projects and efficiency gains in narrow areas (like code autocompletion for software developers). One economist, MIT’s Daron Acemoglu, estimates that given current capabilities, AI will only impact about 5% of all U.S. work tasks in the next decade, translating to a mere 0.5% boost in productivity and a 0.9% increase in GDP over 10 years [11]. Those figures are cumulative, not per year – in other words, essentially a rounding error on overall economic growth. Acemoglu’s skepticism stems from the view that today’s AI can automate “only a quarter of the tasks it’s technically exposed to, and those tasks are a small fraction of total work” [11]. Many complex or physical tasks (such as driving, manufacturing, and healthcare procedures) remain beyond AI’s reach for now, and even in digital domains like customer service and writing, AI often requires human oversight.
Furthermore, skeptics point out AI is exceptionally expensive to deploy at scale – both in terms of computing costs (specialized hardware, electricity) and the engineering effort required to integrate AI into products. Jim Covello, Goldman’s head of equity research, noted that unlike the early internet, which rapidly scaled low-cost solutions (think: distributing news or ads online at a much lower cost than via print), current AI doesn’t reduce unit costs for most problems [11]. AI often adds new costs (cloud bills can skyrocket when a company starts using large language model APIs heavily). Covello argued that to justify the ~$1T spending, AI will need to solve truly complex, high-value problems better or cheaper than existing methods—and it’s not there yet for many use cases [11]. He even questioned whether the nature of today’s AI (statistical models trained on historical data) can replicate humans’ most valuable capabilities, such as judgment, creativity, and common-sense understanding, rather than merely remixing existing patterns [11]. In Covello’s view, we might be in an AI bubble that has formed before the technology has proven its worth—and he cautions that if history is any guide, such bubbles “can take a long time to burst” even if the fundamental story doesn’t pan out [11].
On the other side, optimists contend that we’re only in the first inning of a decades-long AI transformation that will eventually justify the investment, and that current spending is not as irrational as it seems. Goldman’s own economists, Joseph Briggs among them, offered a far rosier outlook than Acemoglu’s: they project that AI could ultimately automate 25% of all work tasks, boosting productivity by 9% and raising GDP by 6% over the next decade [11]. While even 6% cumulative GDP growth over 10 years isn’t earth-shattering, it’s a significant add-on (roughly an extra 0.6% growth per year) that could justify a lot of investment.
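To put the two forecasts on the same footing, here is the annualized arithmetic (my own calculation from the cumulative figures quoted above):

$$
\text{Acemoglu: } (1.009)^{1/10} - 1 \approx 0.09\%\ \text{per year}, \qquad \text{Briggs: } (1.06)^{1/10} - 1 \approx 0.58\%\ \text{per year}
$$

Roughly a sixfold gap in assumed annual impact—the entire debate in one line.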
Optimists argue that the costs of AI will decline (as with past tech, where economies of scale and innovation brought down unit costs), and new applications will emerge that generate revenue. A telling point from tech analysts: despite Big Tech’s substantial AI spending, they don’t yet see obvious “irrational exuberance” in the market [11]. For example, many leading AI firms (the picks-and-shovels like NVIDIA or platform companies like Microsoft) have seen their stock rise commensurate with actual revenue gains from AI, not purely speculative frenzy. Some note that capital spending as a share of company revenues in this AI cycle isn’t dramatically higher than in prior cycles [11]—implying that today’s giants are investing in AI roughly in line with their size, not betting the farm. In fact, unlike the dot-com era, where countless barely-experienced startups vied for internet riches, well-funded tech incumbents with deep pockets, cheap capital, and expansive distribution networks are driving the AI boom [11]. Companies like Google, Microsoft, Amazon, and Meta are driving a lot of AI investment, and they can afford to play the long game.
According to Kash Rangan, a software analyst, this makes the current cycle more promising than previous ones: the key players have deep pockets and existing customer bases, which might help AI avoid the fate of past bubbles that were built on shaky business models [11]. Optimists also point out that we haven’t yet seen the “killer app” of the AI era—the equivalent of the web browser for the internet or the iPhone for mobile—and when it arrives, it could unlock massive value [11]. Maybe it will be AI assistants that radically enhance knowledge work or AI-driven scientific discoveries that create new markets.
Despite the staggering sums already invested, it’s possible we’re still underestimating what’s coming.
If you strip away the hype and look closely at what today’s top models are beginning to unlock, it’s not just faster autocomplete or flashy chatbots. It’s something more foundational: a new computing primitive. These models are evolving into reasoning engines—capable of understanding complex queries, synthesizing information across domains, and making decisions with context. That’s a level of functionality we’ve never had in software before. It's as if we’ve discovered a new kind of processor—not for bits and bytes, but for judgment and thought.
If that’s true, then infrastructure spending may only look excessive in the absence of killer apps. Once AI begins to inhabit real workflows—helping workers reason, generate, decide, and act—the leverage could be immense. But we won’t realize any of it unless we design systems that bridge that gap. Infrastructure can’t justify itself. It needs an interface. This is where product design becomes pivotal.
Applications that bolt AI onto existing tools will underwhelm. The real unlock comes from reimagining the software experience itself: tightly integrated, context-aware, and invisibly helpful. Most users don’t care about the model behind the curtain. They care whether the system saves them time, reduces complexity, and gets them closer to their goal. In that sense, the future of AI is less about the model—and more about the UX.
If AI is the next great platform shift, then we’re not in a bubble. We’re in a moment of architectural transition—one that requires not just capital but creativity and courage from those building the application layer.
The bottom line for the bullish camp: yes, spending is high, but AI is a once-in-a-generation technology that warrants significant investment, and the current trajectory (with more measured investor rewards for real results) suggests we might not simply repeat the dot-com bust scenario.
So, where are we now—beginning, middle, or end of the cycle?
It’s impossible to pinpoint in real time, but there are clues. Some metrics suggest we are in the early to mid-stage of the hype cycle. The astonishing growth rates in funding and enterprise adoption in 2023–24 indicate that we went from 0 to 60 very quickly – arguably too quickly. Yet, unlike the stock market in late 1999 (which was blatantly in a valuation bubble by any traditional measure), the stock market today is not valuing every AI company at absurd multiples. The fever has been more concentrated in private funding and specific pockets (like anything related to generative AI). There are signs of a slight sentiment cooldown: venture investors are becoming a bit more selective, and the initial novelty of ChatGPT has given way to pragmatism. For instance, some surveys indicate that CIOs are shifting from pure experimentation to more cautious pilots with clearer ROI expectations [10].
This pragmatism could imply the “middle” of the cycle—the point where wild enthusiasm meets the reality of implementation challenges. However, other indicators scream that we’re still near the beginning of the S-curve for adoption. A majority of organizations are currently prototyping or implementing AI solutions; only a small minority have rolled out AI into broad production use [10]. In one Menlo Ventures survey, more than one-third of enterprise leaders admitted they “do not have a clear vision” yet for how generative AI will be used in their business [10] – despite spending surging 6x! This underscores that we are in a phase of experimentation and discovery. It’s akin to the internet circa 1996: lots of excitement and early products, but many companies still figuring out basic strategy.
One could say the hype (media attention, funding) is running a bit ahead of the clarity (proven business models, stable best practices) – which is typical of a beginning-to-mid cycle. History suggests that if this is a bubble, we may not have reached the peak yet. Bubbles often crescendo when almost everyone believes the hype and huge profits are being claimed by new entrants (think: late 1840s for railways, 1929 for electric stocks, 1999 for dot-coms). In AI, we’re seeing very high valuations and massive funding, but also a fair amount of healthy skepticism and oversight. Regulators are talking about AI risks; enterprises are concerned about issues like data privacy and model accuracy.
One expert insight from the Goldman report stands out: even if AI’s fundamentals disappoint, the AI theme could run for a while before any bust—because “bubbles take a long time to burst.” In the meantime, the picks-and-shovels suppliers (chipmakers, cloud providers) can continue to see strong demand [11].
In other words, this could be a prolonged buildout phase. If we use the dot-com analogy, maybe we are at the equivalent of 1997 or 1998: the world is convinced AI is the next big thing, money is flooding in, some early successes (and excesses) are visible, but the full reckoning—if it comes—is a bit further out after more deployment. Alternatively, perhaps the cycle will be milder—a mini-bubble that deflates gradually as improvements catch up.
What if the AI bubble pops?
Let’s consider a scenario: say, late 2025 or 2026 brings a wave of AI startup failures, a realization that many pilot projects aren’t yielding ROI, and investors pull back sharply. How painful would a bursting AI bubble be for the economy?
It could certainly sting, especially in the tech sector. Thousands of AI startups going bust would mean wasted capital and layoffs of highly skilled workers (though those engineers might be quickly absorbed elsewhere, given the ongoing digital talent shortage). Big tech firms might have to write down billions spent on under-utilized AI hardware (imagine failed projects leaving data centers packed with AI chips running far below capacity). We might see some high-profile implosions – perhaps an overhyped consumer AI app that raised $500M and then shuts down, or a cloud provider that over-provisioned capacity. Public market investors could lose as well: if AI optimism faded among major Nasdaq constituents, it could drag down the broader indexes.
However, looking at past busts, the broader economy would likely withstand it. The dot-com crash contributed to a mild U.S. recession in 2001, but it was not catastrophic to the overall economy – partly because physical and human capital didn’t vanish; instead, it was reallocated. Similarly, if an AI bubble popped, we’d expect a period of consolidation: stronger players (likely the big incumbents) would acquire useful tech from failed startups on the cheap, excess data center capacity might cause prices for cloud computing to drop (benefiting other industries), and venture funding would shift to a new hot area (or back to fundamentals).
The immediate aftermath could see a pullback in related areas – for example, less demand for new semiconductor fabs if AI chip orders dry up, which would hurt that industry for a while. There might even be a stock market correction if AI had been propping up valuations. But unless the AI boom grows to truly systemic levels (say, if overall business investment had become utterly dependent on AI projects), the economy would adjust. Importantly, we’d walk away with a wealth of AI infrastructure and research—ready to leverage in the next cycle.
That might be the most likely outcome if a bubble pops: a couple of “AI winter” years, where the hype dies down, some disillusionment sets in, and only the most valuable AI applications continue to grow. Then, down the line—perhaps a few years later—a second wave of AI breakthroughs finally delivers the undeniable “killer apps” and productivity gains, building on all that infrastructure. In other words, the cycle of over-expectation, crash, and eventual real impact seen with railroads, electricity, and the internet could replay.
One wildcard: AI, especially generative AI, has captured the public imagination in a way few technologies have. If a crash occurs, it may dent public trust or interest in AI for a time, similar to how people became somewhat cynical about “internet companies” after the dot-com bust. Enterprises might also retrench—e.g., cutting budgets for experimental AI projects—until proven winners emerge. But in the long run, no one wants to be left behind if AI truly is transformative. So, even a burst bubble would likely just reset the playing field, rather than ending the game.
It’s also worth noting that some analysts believe the upside scenario for AI (if it meets optimistic projections) is not even fully priced into markets yet. If AI significantly boosts global productivity without major downsides, it could support higher corporate earnings and economic growth in the 2030s [11]. In that scenario, current investments would yield great returns over time, and we might not see a severe bust at all – just some volatility along the way. That’s the techno-optimist dream: AI as a true general-purpose technology that starts slow but then triggers a sustained economic boom (like electricity and the internal combustion engine did in the mid-20th century).
In summary, if there is a bubble, we are likely in the middle of the hype cycle: beyond the initial spark, racing up the “peak of inflated expectations,” but perhaps not yet at peak insanity. The spending is enormous but could grow even more before plateauing. Whether it’s justified will depend on AI’s progress in the next few years. Short-term ROI is questionable in many areas—which is precisely why some fear a bubble—but long-term ROI could be tremendous if (a big “if”) AI achieves even a fraction of what its proponents hope.
This point leads us to a crucial aspect of the AI boom that will determine its ultimate fate: the application layer. Just as in past tech cycles, building infrastructure (such as rail tracks, electric lines, and fiber-optic cables) was only half the story—the other half was developing applications and services on top that deliver value to end-users. Right now, much of the AI spend has been on the “picks and shovels” – chips, cloud platforms, foundational models. The hope is that this lays the groundwork for massive productivity gains at the application layer.
Let’s explore that and why design and user experience will be make-or-break for the AI revolution’s ROI.
From Picks & Shovels to Killer Apps: Where the Value Will Emerge
At present, we’re effectively in the infrastructure phase of AI adoption. Billions are being spent on model development (e.g., training ever-larger GPT-type models), on hardware (GPUs, TPUs, etc.), and on integrating AI capabilities into existing tech platforms. This phase is analogous to laying transcontinental railroad tracks or stringing high-voltage transmission lines—necessary precursors to transformation but not sufficient on their own to change day-to-day life.
The next phase, where AI truly proves its worth, must happen in the application layer: the concrete tools and solutions that businesses and individuals use in their workflows, powered by that AI infrastructure. Encouragingly, there are signs we’re transitioning into that phase. In 2024, enterprise investment in AI applications (as opposed to AI infrastructure or foundational R&D) surged dramatically [10]. Companies are beginning to deploy AI-powered software for specific use cases—think AI copilots for coding, customer service chatbots, and AI assistants for sales teams, among others.
The most popular generative AI use cases in enterprises so far have been those that directly augment worker productivity, such as coding assistants, knowledge search, and content summarization [10]. For example, GitHub Copilot (an AI pair programmer) reached a $300 million annual revenue run rate within a couple of years of launch, validating that developers find it valuable enough to pay for it [10]. AI customer support agents that can handle basic queries 24/7 are also gaining traction (with over 30% of enterprises piloting or using them) [10]. These early application successes hint at the productivity improvements AI can bring: faster coding, quicker customer issue resolution, reduced time spent searching documents, etc.
Aggregate data supports this finding: in one study, customer support representatives using a generative AI assistant handled 14% more issues per hour than those without, effectively boosting productivity [12]. Yet, for all these green shoots, significant challenges remain in turning AI from a cool demo into a reliable workhorse in the enterprise. As a designer focused on how humans interact with technology, I want to emphasize some key factors that will determine whether AI applications truly deliver value or end up as abandoned experiments:
User Adoption & Workflow Integration: No matter how powerful an AI tool is, if end-users (employees, consumers) don’t adopt it in their daily workflow, it creates no value. One significant hurdle is that people are reluctant to add yet another app or interface to their already crowded workflow, especially if it initially slows them down or complicates things.
New AI applications must be delightful to use, immediately helpful, and seamlessly fit into existing processes. This is a design and UX challenge. If an AI requires too many steps, or if it’s parked in a separate dashboard that the user has to remember to visit, it may be ignored after the initial novelty. Successful AI apps will likely be those that embed into the tools people already use (e.g., an AI assistant inside your email client or project management software rather than a separate portal) or those so intuitive and useful that they become the new home screen.
Trust and Accuracy (the Hallucination Problem): Many generative AI tools have a well-known flaw—they sometimes “hallucinate” incorrect or nonsensical outputs. In high-stakes or even moderate-stakes workflows, this is a huge barrier to trust. If a sales analyst uses an AI tool to generate a client report, but the tool occasionally fabricates a statistic, the analyst will quickly lose trust and revert to manual work. Building user trust in AI outputs is paramount.
This can be addressed by design and process: for instance, AI answers can include source citations or confidence levels (so the user can verify), UIs can encourage user feedback/corrections to involve humans in the loop, and the scope of the AI can be constrained to domains where it’s known to perform well. Another approach is what I call “earned automation”: initially, show the user what the AI suggests side-by-side with how it reached that suggestion (like highlighting the parts of a document that led to a summary)—over time, as the user sees it’s consistently correct, they’ll trust it enough to let it fully automate the task.
Many users rightfully treat AI as an “assistant with a big imagination” – helpful for drafts and ideas but needing supervision. If we don’t solve the reliability issue, many AI pilots will stall. A recent survey of enterprise AI projects found that “hallucinations” and technical errors were a top reason (15% of cases) for AI pilot failures [13]. Users simply wouldn’t adopt systems that gave wrong answers. Therefore, designers and developers must prioritize accuracy, transparency, and user control to overcome this challenge. Perhaps counterintuitively, sometimes the best design is to slow down the AI—e.g., requiring a confirmation click before the AI executes an action—so users feel safe. It’s better for adoption to start cautious and then gradually automate more as trust builds.
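To make “earned automation” and the confirmation gate concrete, here is a minimal sketch in Python. Every name and threshold below (Suggestion, TrustPolicy, the 0.9 confidence floor, the ten-acceptance streak) is a hypothetical illustration, not an existing library or a prescription; a real product would tune these gates per task and per user:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """One AI-proposed action, shown with the evidence a user needs to verify it."""
    action: str          # e.g., "Insert this summary into the report"
    confidence: float    # model-reported confidence, 0.0-1.0
    sources: list[str]   # citations the user can check

@dataclass
class TrustPolicy:
    """Earned automation: only auto-apply after a track record of accepted suggestions."""
    min_confidence: float = 0.9   # below this, always ask the user
    required_streak: int = 10     # consecutive acceptances before automating
    accepted_streak: int = 0

    def should_auto_apply(self, s: Suggestion) -> bool:
        return s.confidence >= self.min_confidence and self.accepted_streak >= self.required_streak

    def record_feedback(self, accepted: bool) -> None:
        # A single rejection resets the streak: trust is earned slowly, lost quickly.
        self.accepted_streak = self.accepted_streak + 1 if accepted else 0

def present(s: Suggestion, policy: TrustPolicy) -> None:
    if policy.should_auto_apply(s):
        print(f"Auto-applied: {s.action} (sources: {', '.join(s.sources)})")
    else:
        # Confirmation gate: the suggestion sits beside its evidence until the user clicks.
        print(f"Suggested: {s.action} [confidence {s.confidence:.0%}]")
        print(f"  Based on: {', '.join(s.sources)} -> [Apply] [Dismiss]")

policy = TrustPolicy()
s = Suggestion("Draft Q3 status summary", 0.94, ["q3-metrics.xlsx", "standup-notes.md"])
present(s, policy)                      # early days: asks, even at 94% confidence
for _ in range(10):
    policy.record_feedback(accepted=True)
present(s, policy)                      # trust earned: the same suggestion auto-applies
```

The design choice worth noticing is the asymmetry: automation is granted gradually but revoked instantly, which mirrors how human trust in a fallible assistant actually behaves.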
Delighting the End User: One insight from past enterprise software waves is that tools succeed when they make life easier for the end-users, not just promise benefits to the company. A classic mistake is to push a tool that management thinks will increase productivity, but workers find it cumbersome—they will quietly find workarounds or underutilize it. With AI, there’s a risk of this if, say, companies implement an AI system mainly to cut costs or monitor work without considering the frontline employee experience.
AI apps need to provide a clear “what’s in it for me?” to users. For designers, this means focusing on user-centered design: observe how an employee does a task today, identify pain points, and insert AI in a way that feels like magic by removing those pain points. The user shouldn’t have to configure complex settings or parse AI jargon; the best AI apps hide the complexity under the hood.
When an AI tool truly delights—e.g., it saves an hour of drudgery with one button or produces a draft report that would have taken you half a day—users will champion it. We saw this with good design in consumer tech (think how the iPhone’s delightful UX accelerated mobile adoption). In enterprise AI, a delightful experience could be something like an AI project management assistant that automatically drafts status updates and gently reminds team members of overdue tasks in a friendly tone—acting like a competent team administrator that everyone loves.
Avoiding Assumption-Stacking: When building AI products, there’s a temptation to make numerous assumptions about user needs or behavior. But user research is as critical as ever. A major pitfall is designing features based on what we assume the user will do and then layering more features on top of that without ever validating the initial assumption.
This assumption-stacking can lead to AI products that technically work but solve a problem users don’t have, or that users can’t figure out how to trigger. For example, an AI analytics tool might assume users want fully automated insights every morning—but maybe users actually want a better query interface and still prefer to explore data themselves. If the tool goes all-in on auto-generating reports (and then adds auto-scheduling those reports), it might miss the mark.
Good product design will involve iterative testing with real users: Do they understand what the AI is offering? Where do they get stuck? Often, it’s “little things”—perhaps the terminology is confusing (“What does ‘Knowledge Graph Suggestions’ mean?”), or a required input is too hard to find, causing users to drop off before they reach the AI’s wow moment. These are the things that builders might take for granted but trip up users early, preventing adoption.
In AI, there’s also the issue of overwhelming the user with too many possibilities (“It can do anything; what do you want it to do?”). Good design might instead present a few clear, contextual actions the AI can help with at each point in the user’s journey.
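As a sketch of that last idea, contextual surfacing can be as simple as a curated mapping from where the user is to two or three relevant AI actions. The contexts and actions below are hypothetical examples, not a product spec:

```python
# Instead of an open-ended "ask me anything" box, surface a few AI actions
# that fit the user's current screen. Curation replaces overwhelming choice.
CONTEXTUAL_ACTIONS = {
    "viewing_support_ticket": ["Summarize this thread", "Draft a reply", "Suggest a resolution"],
    "editing_report":         ["Tighten this section", "Generate an executive summary"],
    "reviewing_deal":         ["Find similar won deals", "Draft a follow-up email"],
}

def suggest_actions(context: str, max_actions: int = 3) -> list[str]:
    """Return a short, curated list of AI actions for the user's current context."""
    return CONTEXTUAL_ACTIONS.get(context, [])[:max_actions]

print(suggest_actions("editing_report"))
# ['Tighten this section', 'Generate an executive summary']
```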
Delivering Real ROI (Short-Term and Long-Term): Enterprises right now are both excited and a bit anxious about AI. They are tightening overall software spending in many cases (given economic uncertainties), yet increasing spend on AI – implying they expect AI to deliver higher ROI than other areas [12], [14]. This means the bar is high: after the initial honeymoon, AI solutions will need to show real productivity gains or business value.
In the short term, that might yield modest efficiencies—e.g., an AI system that reduces customer support handling time by 20% or an AI-enhanced CRM that leads to a 10% faster sales cycle. In the long term, the promise is larger—maybe AI systems will enable entirely new revenue streams or dramatically lower operating costs in certain functions. But getting there will require focusing on the application layer.
For all the talk of “AI will disrupt every industry,” it will only happen if the technology is applied via products that industry professionals can practically use. Every SaaS vertical—from HR software to supply chain management—is now seeing AI-first challengers emerge. They often integrate features like natural language query (“Ask the software in plain English for any report”) or AI predictions (“get AI to forecast your inventory needs”). Incumbents, in turn, are racing to bolt on AI features to their established products.
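To illustrate the natural-language-query pattern mentioned above, here is a minimal sketch using the OpenAI Python SDK (openai>=1.0, with an OPENAI_API_KEY in the environment). The model name, the toy schema, and the SELECT-only guardrail are my assumptions for illustration; a production system would add schema-aware validation and let the user inspect the query before it runs:

```python
from openai import OpenAI

SCHEMA = """
CREATE TABLE orders (id INT, customer_id INT, total_usd DECIMAL, created_at DATE);
CREATE TABLE customers (id INT, name TEXT, region TEXT);
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def question_to_sql(question: str) -> str:
    """Translate a plain-English question into one read-only SQL query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your stack provides
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single read-only SQL "
                        "SELECT for this schema. Return SQL only, no prose.\n" + SCHEMA},
            {"role": "user", "content": question},
        ],
    )
    # Strip any markdown fencing the model wraps around its answer.
    sql = resp.choices[0].message.content.strip().strip("`").removeprefix("sql").strip()
    # Guardrail: refuse anything that isn't a SELECT the user can inspect first.
    if not sql.lower().startswith("select"):
        raise ValueError(f"Refusing non-SELECT statement: {sql!r}")
    return sql

print(question_to_sql("Total sales by region last quarter"))
```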
The competitive landscape will be intense, and design/UX could be a differentiator. Users will choose the tools that not only have AI, but have AI that makes their day easier. We’re already observing that, after a first wave of incumbent products labeling themselves as “AI,” many enterprises are not fully satisfied—about 40% of decision-makers in one survey felt that their current software vendors’ AI features didn’t truly meet their needs [10]. This opens the door for well-designed AI-native apps to gain ground.
As a designer, my advice is that simply layering AI on top of an old workflow is often less effective than rethinking the workflow with AI at the center.
Will AI’s Boom Pay Off?
The pattern of boom-then-bust-then-real-impact appears durable. We likely are in an AI bubble of sorts – meaning a phase where investment outpaces immediately justifiable returns. But “bubble” isn’t necessarily bad; it can be the incubation phase for something great as long as one survives the shakeout. The critical question is how long the bubble lasts and how sharp the bust will be if it comes.
Policymakers and business leaders should learn from the past: when the dot-com bubble burst, it was painful but short-lived, and the excess capacity built turned out to be a boon. Similarly, if AI spending overshoots, the silver lining is that we’ll have advanced the field dramatically—encompassing more powerful models, better infrastructure, and broader awareness—which paves the way for the productive phase. A bubble can effectively subsidize innovation and deployment that might not happen under a more cautious approach.
For individual businesses, the lesson is to be neither blindly euphoric nor cynically dismissive. Those who ignored the internet in 1999 because “it’s a bubble” found themselves behind competitors by 2005. Those who invested heavily in every dot-com idea without regard for business fundamentals often went bankrupt by 2001. The winners were those who invested strategically, adapted to new tech, but also managed risks.
In AI, that means embracing it where it can enhance your operations or product but demanding evidence of value. It means running pilot projects to learn (yes, you might waste a bit of money on some experiments—consider it tuition) but being willing to pivot or scrap what doesn’t work. It also means investing in the human side – training your workforce to use AI tools, updating processes, and addressing cultural resistance – because productivity gains only come when people use the tech effectively.
From a macro perspective, are AI investments justified? In the short term, probably only a small fraction are yielding positive ROI. Many companies are spending on AI out of fear of missing out or a belief that “we’ll find the benefit later.” This is normal in the early stages—think of it as planting seeds. Not all seeds will sprout, but you plant anyway because the harvest could be valuable.
In the long term, if AI fulfills even a portion of its potential (in drug discovery, climate modeling, education, and efficiency improvements across industries), the returns to society will be enormous—new lifesaving medicines, more sustainable industries, personalized learning, and productivity growth that lifts incomes. Goldman Sachs research suggests AI could raise global GDP by trillions of dollars over the next decade or two [13]. Those are the carrots driving this spending.
However, to reach that promised land, we must navigate the cycle wisely. Are we at the beginning, middle, or end? I’d say we are at the end of the beginning. The concept of modern AI (deep learning, etc.) has proven itself enough to ignite serious investment—that was the beginning. We are now entering the phase of building it out and figuring it out—a middle phase where a lot of experimentation occurs, consolidation will follow, and winners and losers are sorted. The end of the cycle is still ahead – when AI is just a normal part of every piece of software and every business process, much like electricity has become ubiquitous or the internet has become routine.
By the end of this decade, we’ll likely either be in that mature phase or recovering from a popped bubble (or both in sequence). In either case, AI will continue to be a driving force.
For designers and builders, the call to action is to focus on the human application. The technology is amazing, but our job is to make it usable, useful, user-friendly, and beautiful. If we do that, AI truly can deliver “massive productivity gains in the application layer,” fulfilling the lofty expectations. If we don’t, all the billions spent on chips and models might languish unused—a graveyard of good technology that didn’t find a product-market fit.
To conclude, the AI investment boom shows all the classic signs of a transformative tech cycle: huge promise, huge spending, skepticism, optimism, and the certainty of unintended consequences. History doesn’t repeat, but it rhymes. Right now, AI’s story is rhyming with railroads in the 1840s, with electricity in the 1920s, and with dot-coms in the 1990s. In each case, those who navigated the rhymes—understanding what would be different this time and what eternal truths still applied—came out on top. AI may indeed be the biggest boom yet, but size alone doesn’t ensure success. We must pair this unprecedented investment with wisdom from past booms and a relentless focus on real-world impact.
If we do that, the AI revolution won’t be remembered as “too much spending, too little benefit” but rather as the moment we laid the foundation for a generational leap in productivity and human capability. The rails have been laid, and the power lines strung—now it’s on us to build the vehicles and appliances that run on them and to train the conductors and users to leverage these new tools. That is how we’ll ensure this boom ends not in a bust but in a bright new era of innovation.