Just as the social media revolution promised connection but delivered polarization and misinformation, the burgeoning AI era risks repeating those foundational mistakes on an exponentially larger scale. Driven by the same ‘move fast and break things’ mantra, AI now touches everything from energy infrastructure to global governance, presenting unique challenges for discerning investors.
For over a decade, the social media revolution transformed global affairs, economies, and our daily lives. Initially seen as an intoxicating wave of innovation, it quickly became a breeding ground for unintended consequences: misinformation, algorithmic bias, political manipulation, and societal polarization. Today, as Artificial Intelligence (AI) reshapes industries and governance at an even faster pace, a critical question looms for investors and citizens alike: Are we on the verge of repeating the same mistakes, but with exponentially greater stakes?
The Social Media Playbook: Innovation, Growth, and Unforeseen Consequences
In its nascent stages, the internet promised a decentralized, community-driven space. Early social platforms were viewed as tools for activism, creativity, and engagement. However, the introduction of substantial capital and advertising revenue drastically altered this dynamic. As highlighted by Sean Evins, who witnessed the social media revolution firsthand in roles at Twitter and Meta, the initial intoxication gave way to serious problems.
Social media companies, driven by ad revenue, designed platforms to maximize “user engagement.” This often meant prioritizing sensational, polarizing, or emotionally charged content, which led to a rapid rise in:
- Misinformation and Disinformation: Algorithms amplified false narratives, making it difficult to discern truth from fiction.
- Algorithmic Bias: Models trained on existing data inadvertently reinforced societal biases.
- Polarization: Echo chambers formed, solidifying existing viewpoints and making nuanced debate nearly impossible.
- Political Manipulation: Foreign and domestic actors exploited platforms to influence public opinion and election outcomes.
By the time institutions attempted to regulate these platforms, they were already too big, too embedded, and too essential, rendering control difficult to reclaim. The critical lesson learned, according to Evins, is that waiting for a technology to become ubiquitous before addressing safety, governance, and trust means you’ve already lost control, a point echoed by a contributing piece on Fortune.com.
AI: A Public Utility with Exponential Risks
The parallels with AI are striking and concerning. AI is no longer just a tech issue; it is rapidly becoming the “substrate for everything from energy to defence.” The underlying models are improving, deployment costs are plummeting, and the stakes are rising exponentially. The same mantras—“build fast, launch early, scale aggressively, win the race”—are driving innovation, but this time, we’re not just disrupting media; we’re reinventing society’s core infrastructure.
AI is emerging as a public utility, influencing resource allocation, decision-making, and institutional functions. The consequences of missteps are far greater than with social media. Familiar risks are reappearing:
- Models trained on opaque data lack external oversight.
- Algorithms are optimized for performance over safety.
- Closed systems make decisions that are not fully understood.
- Global governance struggles to keep pace with rapid capital flows and technological advancement.
The dominant narrative, regrettably, remains “We’ll figure it out as we go,” a strategy that proved disastrous for social media.
Energy as a Critical Case Study
The energy sector provides a stark illustration of AI’s profound impact. AI data centers can consume 10 to 50 times more power than traditional computing facilities, and training a single large model can use as much electricity as roughly 120 homes consume in a year. Projections indicate AI workloads could drive a 2-3x increase in global data center electricity demand by 2030. AI is already being integrated into systems for optimizing grids, forecasting outages, and managing renewables.
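The “120 homes” figure is easy to sanity-check with back-of-envelope arithmetic. The inputs below are public estimates, not figures from this article: one widely cited estimate puts GPT-3’s training run at roughly 1,287 MWh, and an average U.S. household uses about 10.7 MWh of electricity per year.

```python
# Back-of-envelope check on the "120 homes" comparison.
# Assumptions (public estimates, not from this article):
#   - Training one large model (GPT-3-class) ~ 1,287 MWh.
#   - Average U.S. household electricity use ~ 10.7 MWh/year.
TRAINING_MWH = 1287
HOUSEHOLD_MWH_PER_YEAR = 10.7

homes_equivalent = TRAINING_MWH / HOUSEHOLD_MWH_PER_YEAR
print(f"One training run ~ annual electricity of {homes_equivalent:.0f} homes")
```

Dividing the two estimates lands at roughly 120 households, consistent with the figure above; actual consumption varies widely by model size, hardware, and grid mix.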
Without proper oversight, this integration poses serious risks:
- AI systems could prioritize industrial customers over residential areas during peak demand, potentially leading to social inequality.
- During crises, AI could make rapid, inscrutable decisions, leaving entire regions without power and without human accountability or override mechanisms.
This isn’t about choosing sides, but about designing systems that are safe, transparent, and work for society as a whole.
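One concrete form of the “human override” safeguard described above is a hold-for-approval gate: automated actions beyond a set impact threshold are escalated to an operator instead of executing immediately. The sketch below is purely hypothetical (the `SheddingPlan` type and the 50 MW threshold are illustrative assumptions, not any real grid operator’s interface).

```python
from dataclasses import dataclass

@dataclass
class SheddingPlan:
    """Hypothetical automated load-shedding proposal."""
    region: str
    megawatts_cut: float
    rationale: str  # the system must emit a human-readable justification

# Illustrative policy: any cut at or above this size needs explicit sign-off.
APPROVAL_THRESHOLD_MW = 50.0

def requires_human_approval(plan: SheddingPlan) -> bool:
    # Large cuts are held for an operator; small ones may proceed automatically.
    return plan.megawatts_cut >= APPROVAL_THRESHOLD_MW

plan = SheddingPlan("residential-north", 120.0, "peak-demand rebalancing")
print(requires_human_approval(plan))  # True: escalate to a human operator
```

The design point is accountability, not intelligence: the model proposes, but consequential decisions carry a rationale and pass through a human checkpoint before taking effect.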
Lessons for a New Playbook: Mitigating Investment Risk
The prevailing “move fast, ask forgiveness, resist oversight” playbook of the social media era is not sustainable for AI. We still have a window to shape the governance of this technology, but it is rapidly closing. For investors, understanding and advocating for responsible AI development is not just ethical, but crucial for long-term value creation and mitigating systemic risks.
Key principles for a new approach, as outlined in recent discussions, include:
- Govern from the Beginning: Regulation must be a design principle, not a retroactive fix. This includes incentive structures that prioritize safeguards over raw efficiency to prevent bias and system failures.
- Treat Infrastructure as Infrastructure: Energy, compute, and data centers—the foundational components of AI—must be built with long-term governance and resilience in mind, not merely short-term optimization.
- Robust Testing and Auditing: Critical AI systems require rigorous testing, “red teaming,” and independent audits before widespread deployment. Reversing harmful design choices once embedded at scale is nearly impossible.
- Cross-Sector Alignment: Public, private, and global actors must collaborate on shared standards and interoperable systems. Events like ADIPEC, which convene diverse stakeholders, are vital for debating and shaping the future of AI in critical sectors.
- Align Recommendation Systems with Societal Values: As discussed by the Brookings Institution regarding Elon Musk’s Twitter takeover, the core issue with social media was its AI-based recommendation systems prioritizing user engagement (and thus ad revenue) over public good. AI systems must be programmed with broader social objectives, not just short-term metrics. This means fostering content moderation beyond mere “red lines” and giving users more control over the information they receive.
- Prioritize AI Literacy: For both policymakers and ordinary citizens, improving AI literacy is crucial. As we learned with social media, understanding how these tools work, their limits, and their potential for manipulation allows individuals to be “agents who use the tools, rather than the subjects who get manipulated.”
The investment landscape for AI will be profoundly shaped by how these governance challenges are addressed. Companies that proactively integrate safety, ethical design, and transparent governance into their AI development may ultimately build more resilient, trustworthy, and valuable platforms. Conversely, those that cling to the “move fast and break things” mentality risk not only societal backlash but also significant financial and reputational damage as regulatory scrutiny inevitably increases.
We stand at a pivotal moment. The social media revolution demonstrated the immense power and pitfalls of rapidly scaled technology. With AI, we have the opportunity to choose a different path—one that moves fast, but does so with foresight, integrity, and a commitment to building the next foundation of the modern world safely and beneficially.