Amidst intense debate, Goldman Sachs has delivered a complex verdict on the AI market: it’s not a bubble, but a burgeoning revolution with trillions at stake. While projections point to immense economic upside and productivity gains, investors must contend with critical challenges, including energy demands, chip supply limitations, and the nuanced race to define truly transformative use cases, suggesting a cautious, long-term strategic approach is paramount.
The rise of Artificial Intelligence (AI), particularly Generative AI tools like ChatGPT, has triggered a wave of both excitement and apprehension across global financial markets. The central question for investors remains: are we witnessing the formation of an unprecedented bubble, reminiscent of the dot-com era, or are we on the cusp of a profound technological revolution? Goldman Sachs, a prominent voice in the financial sector, has weighed in, offering a multifaceted perspective that suggests the latter, albeit with significant caveats for the discerning investor.
The Early Stages of a Revolution: Goldman Sachs’ Optimistic Forecasts
Initially, Goldman Sachs was among the strongest proponents of AI’s transformative power. Back in Spring 2023, the firm projected that the widespread adoption of Generative AI could boost global GDP by as much as 7%, or nearly $7 trillion, over a ten-year period, as reported by Bloomberg. This optimism stemmed from AI’s ability to automate a larger share of work tasks, leading to substantial labor cost savings and enhanced productivity.
More recently, in an October 2025 note, Goldman reiterated its stance that the AI boom is far from a bubble; in fact, it has “barely begun.” The firm projects that the long-term value generated by AI productivity could add a staggering $20 trillion to the US economy, with approximately $8 trillion flowing to companies as capital income. Joseph Briggs, Senior Global Economist at Goldman Sachs, estimates a baseline of 15% cumulative gross upside to US labor productivity and GDP following widespread adoption, which the firm anticipates unfolding over a decade.
Furthermore, Goldman analysts argue that current AI investment levels are modest by historical standards. While billions are being poured into chips, servers, and data centers, AI-related investment in the US remains under 1% of GDP. This contrasts sharply with past technological revolutions like railroad expansion, the electrification wave of the 1920s, and the dot-com era, which saw investments ranging from 2% to 5% of GDP, as highlighted in a Business Insider report citing Goldman Sachs. This comparison suggests that the current spending, estimated at $300 billion annually in 2025, is appropriate given the technology’s long-term potential returns.
The Complicated Reality: Skepticism and Significant Hurdles
Despite the immense optimism, Goldman Sachs and other experts acknowledge substantial hurdles, making the relationship with Generative AI “complicated.” Critics have identified what some call the “4 Crises” facing developers:
- The Data Crisis: The vast troves of data required to train large language models (LLMs) are diminishing in value as publishers and platforms restrict access.
- The Compute Crisis: The insatiable demand for high-performance Graphics Processing Units (GPUs) creates bottlenecks in chip supply, echoing the semiconductor shortages of the pandemic era.
- The Power Crisis: Companies developing LLMs consume increasing amounts of energy, straining existing energy infrastructure. Carly Davenport, Senior US Utilities Equity Research Analyst at Goldman Sachs, forecasts that data centers could double their electricity usage by 2030, increasing their share of total US power demand from roughly 3% to 8% by then.
- The Use Case Crisis: Despite widespread applications, some pessimists question if Generative AI has found its “killer app” in the enterprise context, moving beyond what they term “parlor trick” status.
Daron Acemoglu, Institute Professor at MIT, forecasts muted impacts on labor productivity and GDP in the next decade due to a low likelihood of “truly transformative changes” and concerns about premature automation. Jim Covello, Head of Global Equity Research at Goldman Sachs, questions what $1 trillion problem AI will solve, noting that replacing low-wage jobs with tremendously costly technology is the opposite of prior tech transitions he’s witnessed.
Navigating the AI Investment Landscape: Beyond First Movers
A crucial point of caution from Goldman Sachs revolves around the potential winners. While significant capital expenditure is flowing into AI, history suggests that “first movers” in infrastructure buildouts, like railroads and telecommunications, often do not become the biggest long-term winners. Later entrants, who acquire assets cheaply after early overbuilds, sometimes capture better returns. This dynamic could repeat in the AI era, especially given hardware’s rapid depreciation.
The lack of clarity in the current AI market structure raises questions about whether today’s leaders will be tomorrow’s long-run champions. Early adopters, for instance, are already hedging their bets by using multiple AI models rather than committing to a single ecosystem, potentially weakening the advantages of current incumbents.
Solutions and Investor Strategies: Thinking Smaller and Smarter
Experts from IBM, like Distinguished Engineer Chris Hay and Global Head of Tech, Data and AI Strategy Brent Smolinski, believe that solutions to these crises are emerging. When faced with constraints in data, compute, or power, engineers become more creative:
- Synthetic Data: Algorithmically created data can supplement real-world data, addressing the data crisis.
- Efficient Models: The focus is shifting to smaller, more powerful models. SSM-based models (e.g., Mistral’s Codestral Mamba, Jamba 1.5, Falcon Mamba) and hybrid architectures are gaining traction due to greater efficiency and longer context lengths. Techniques like quantization and fine-tuning also make models less power-hungry and more adaptable.
- Agentic Workflows: This multi-step approach leverages LLMs to orchestrate specialized AI agents for specific tasks. For instance, in customer support, an LLM might categorize an inquiry and then trigger agents for account verification, problem diagnosis, solution formulation, and feedback collection. This reduces reliance on a single, monolithic LLM, making processes more efficient and less prone to hallucinations.
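The customer-support example above can be sketched in code. This is a minimal illustration, not any vendor’s actual implementation: the function names are hypothetical, and simple stubs stand in for the LLM router and the specialized agents, which in practice would each wrap a model or API call.

```python
# Hypothetical agentic workflow: an orchestrator routes an inquiry to
# small, specialized "agents" instead of one monolithic LLM call.

def categorize(inquiry: str) -> str:
    """Stub for an LLM router that labels the inquiry by topic."""
    return "billing" if "charge" in inquiry.lower() else "technical"

def verify_account(inquiry: str) -> dict:
    """Stub agent: would check the customer's account status."""
    return {"verified": True}

def diagnose(inquiry: str, category: str) -> str:
    """Stub agent: would narrow the inquiry down to a concrete problem."""
    return f"{category} issue detected"

def formulate_solution(diagnosis: str) -> str:
    """Stub agent: would draft a resolution for the diagnosed problem."""
    return f"Proposed fix for: {diagnosis}"

def handle_inquiry(inquiry: str) -> dict:
    """Orchestrator: each step delegates to a specialized agent."""
    category = categorize(inquiry)
    account = verify_account(inquiry)
    diagnosis = diagnose(inquiry, category)
    solution = formulate_solution(diagnosis)
    return {
        "category": category,
        "verified": account["verified"],
        "solution": solution,
    }

result = handle_inquiry("I was charged twice this month")
print(result["category"])  # billing
print(result["solution"])
```

Because each agent handles a narrow, verifiable task, errors are easier to catch at each step than in a single end-to-end generation, which is the efficiency and reliability argument the workflow rests on.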
These developments emphasize that while LLMs are powerful, they are often just one tool in a broader AI toolkit. For investors, this implies that focusing solely on the “biggest” or “most hyped” models might be short-sighted. Instead, understanding the underlying infrastructure, the development of more efficient technologies, and the practical application of AI in enterprise contexts will be key.
Goldman Sachs’ Peter Oppenheimer suggests investors adopt the PEARL framework for a cautious approach. This involves diversifying portfolios across five groups:
- Pioneers: The innovators driving new AI technologies.
- Enablers: Companies commercializing foundational AI technologies.
- Adapters: Businesses changing their models to integrate AI.
- Reformers: New market entrants leveraging AI to create new industries.
- Laggards: Established companies that don’t require significant immediate changes but will eventually integrate AI.
The Investor’s Path Forward
Ultimately, Goldman Sachs concludes that the AI theme still has “room to run,” either because the technology will deliver on its vast promise or because bubbles historically take a long time to burst. For members of the onlytrustedinfo.com community, this complex outlook underscores the importance of thorough research, diversification, and a long-term investment horizon.
The choice to invest in AI must align with individual investment goals and risk tolerance. While the potential for trillions in economic value is compelling, the path forward is marked by both revolutionary breakthroughs and significant challenges that demand a nuanced understanding rather than blind enthusiasm or paralyzing skepticism. Focusing on companies addressing the core infrastructure limitations, developing efficient models, and demonstrating clear, valuable enterprise use cases will be crucial for navigating this evolving landscape.