OpenAI, a leading force in artificial intelligence, is facing public accusations of using intimidation tactics against non-profits advocating for stronger AI safety regulation, most notably around California's landmark SB 53. The allegations, which include issuing broad subpoenas under the guise of its legal battle with Elon Musk, underscore how contentious AI governance has become and could signal significant regulatory and reputational risks for the company and the broader AI investment landscape.
The artificial intelligence world is buzzing with fresh controversy as OpenAI, the developer behind ChatGPT, stands accused of employing intimidation tactics against a small policy non-profit. The accusations center on efforts to undermine California's SB 53, formally the Transparency in Frontier Artificial Intelligence Act, a pivotal piece of AI safety legislation.
At the heart of the matter is Nathan Calvin, the 29-year-old general counsel of Encode, a lean three-person non-profit. Calvin ignited a widespread discussion with a viral thread on X, alleging that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode itself, by implying they were secretly funded by Musk.
The Allegations: Subpoenas and Legislative Interference
Calvin’s claims detail a jarring experience: he was served with a subpoena from OpenAI in August, delivered by a sheriff’s deputy while he and his wife were at dinner. Encode was also served. The subpoena demanded extensive personal communications related to SB 53, along with documents pertaining to OpenAI’s governance and investors, in the “broadest sense permitted.”
Another AI watchdog, Tyler Johnston, founder of the Midas Project, echoed Calvin's experience, stating he received a similar demand for a vast array of communications. Johnston highlighted the overreach, noting it "asked for what was, practically speaking, a list of every journalist, congressional office, partner organization, former employee, and member of the public we'd spoken to about their restructuring." This suggests a systematic effort by OpenAI to scrutinize its critics.
Beyond the alleged intimidation, Calvin claims that OpenAI sought to weaken SB 53's core requirements. He points to a letter OpenAI sent to Governor Newsom's office during bill negotiations, urging California to deem companies compliant with state rules if they had already signed safety agreements with federal agencies or joined international frameworks like the EU's AI Code of Practice. Such a provision, Calvin argues, could have significantly narrowed the law's reach, potentially exempting major AI developers, OpenAI among them, from key safety and transparency mandates.
Why Speak Out Now?
Calvin explained to Fortune that he initially withheld details to avoid distracting from the merits of SB 53 while negotiations were ongoing. He decided to speak out after the bill was signed into law by Governor Gavin Newsom at the end of September. A recent LinkedIn post by Chris Lehane, OpenAI’s head of global affairs, claiming the company “worked to improve” SB 53, was the final push, as Calvin found this characterization deeply inconsistent with his experience.
OpenAI’s Stance and Internal Reactions
While OpenAI has not directly responded to recent requests for comment, Chief Strategy Officer Jason Kwon addressed the allegations on X. Kwon questioned Encode’s funding, hinting at Elon Musk’s involvement, and stated that OpenAI wanted to know “whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI.” He characterized subpoenas as a standard legal tool.
However, the accusations have resonated deeply within OpenAI itself, drawing significant concern from both current and former employees. Joshua Achiam, OpenAI’s head of mission alignment, publicly expressed his unease on X, stating, “At what is possibly a risk to my whole career I will say: this doesn’t seem great.” He urged OpenAI to engage more constructively with its critics and to avoid becoming a “frightening power instead of a virtuous one.”
Adding to the internal dissent, Helen Toner, a former OpenAI board member who resigned after the failed 2023 attempt to oust Sam Altman, wrote that while OpenAI does great things, “the dishonesty & intimidation tactics in their policy work are really not.” These internal criticisms underscore a potential disconnect between OpenAI’s stated mission and its operational practices.
The Broader Implications for AI Investment and Regulation
This incident is more than just a public relations skirmish; it has profound implications for investors in the rapidly evolving AI sector. OpenAI’s alleged tactics highlight several key areas of concern:
- Regulatory Risk and Lobbying Intensity: The aggressive stance against SB 53 suggests that major AI developers are prepared to exert significant pressure to shape regulatory frameworks in their favor. This indicates a high-stakes environment where regulatory outcomes could dramatically impact business models and profitability. Investors should factor in potential lobbying costs and the risk of public backlash against companies perceived as undermining safety efforts.
- Governance and Ethical Questions: Allegations that the company intimidated a small non-profit, coupled with dissent from senior OpenAI figures, raise serious questions about its commitment to the founding non-profit mission of ensuring that AGI benefits humanity. For an organization built on public trust and ethical development, these claims could erode its standing and long-term valuation.
- Competitive Landscape and Paranoia: The explicit link to the Elon Musk lawsuit underscores the intense competitive pressures and perhaps paranoia within the frontier AI space. OpenAI’s justification for the subpoenas — to uncover potential funding from commercial competitors — reveals a defensive posture that could hinder collaborative safety efforts crucial for the industry’s sustainable growth. As reported by the San Francisco Standard, OpenAI appeared concerned about Musk’s funding of critics despite little evidence.
- Transparency vs. Control: While OpenAI's lawyer framed the subpoenas as a matter of "transparency in terms of who funded these organizations," critics see them as an attempt to silence dissent. This tension between corporate interests and independent oversight is critical for investor assessment. Companies that alienate the very communities advocating for responsible development may face greater scrutiny and resistance in the long run.
Encode, for its part, has formally responded to OpenAI’s subpoena, asserting that it would not turn over documents as it is not funded by Elon Musk. According to Calvin, OpenAI has not responded since. This standoff could set a precedent for how powerful AI companies interact with public policy advocates.
For investors, these events serve as a potent reminder that the growth of AI is not solely driven by technological breakthroughs. The ethical, regulatory, and governance landscapes are equally vital. Companies that navigate these challenges with integrity and transparency are more likely to build sustainable value, while those perceived as resorting to aggressive tactics may face significant headwinds in public trust and regulatory compliance.