In a pivotal move for AI governance, California Governor Gavin Newsom has vetoed SB 1047, a comprehensive AI safety bill for large models, alongside a separate measure restricting minors’ access to chatbots. This signals a preference for more targeted, risk-aware regulations over broad, potentially innovation-stifling mandates, setting a new direction for the state’s role in AI oversight.
California, a global hub for technological innovation, finds itself at the forefront of the complex debate surrounding Artificial Intelligence regulation. Governor Gavin Newsom recently took decisive action, vetoing two significant AI-related bills that aimed to establish new guardrails for the rapidly evolving technology. These decisions reflect a strategic balancing act: acknowledging the need for safeguards while simultaneously attempting to foster innovation within the state’s powerful tech industry.
The Veto of SB 1047: A Pushback Against Broad Regulation
On September 29, 2024, Governor Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). This bill, the culmination of months of legislative effort, sought to establish stringent public safety standards for developers of large AI systems. Had it been enacted, SB 1047 would have implemented a sweeping regime, including annual third-party safety audits, “kill switch” capabilities for emergencies, detailed safety protocols, and incident reporting requirements. Many believed it could have set a de facto national safety standard for large AI models.
The veto followed rare public calls from members of California’s congressional delegation, including Speaker Emerita Nancy Pelosi, to reject the bill. Tech giants like Google, Meta, and OpenAI also voiced strong opposition, concerned that the bill’s stringent requirements could stifle innovation and drive AI development out of California.
Newsom’s Rationale: Risk vs. Computational Power
In his veto message, Governor Newsom acknowledged the necessity of AI safety protocols, proactive guardrails, and consequences for bad actors. However, his primary criticism of SB 1047 centered on its regulatory approach. The bill defined “covered models” based on computational scale (more than 10^26 floating-point operations used in training) and cost (over $100 million in training costs), rather than the system’s actual risks.
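To make the threshold-based definition concrete, here is a minimal sketch of the coverage test as the article describes it. The function name, parameter names, and the exact conjunction of the two criteria are illustrative assumptions, not the bill’s statutory text:

```python
def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Rough sketch of SB 1047's 'covered model' test as described above.

    Coverage keys on raw scale -- training compute and training cost --
    not on how or where the system is deployed, which was the core of
    Newsom's criticism of the bill.
    """
    FLOP_THRESHOLD = 1e26           # more than 10^26 floating-point operations in training
    COST_THRESHOLD = 100_000_000    # more than $100 million in training costs

    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD
```

Under this kind of test, a frontier-scale model trained with 2×10^26 FLOPs at a $200 million cost would be covered, while a smaller specialized system, however risky its deployment, would fall entirely outside the regime.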
Newsom argued that this threshold-based regulation would “apply stringent standards to even the most basic functions” deployed by large systems, potentially giving the public a false sense of security. He further contended that “smaller, specialized models” could be “equally or even more dangerous” than those targeted by the bill, emphasizing the need for a risk-based approach that considers an AI system’s deployment environment, critical decision-making involvement, and sensitive data usage.
The Governor also reiterated concerns, previously expressed on September 17, about the bill’s “outsized impact,” specifically its “chilling effect” on the open-source AI community and potential negative ramifications for California’s AI industry competitiveness.
Industry Divided, Advocates Disappointed
While major tech players opposed SB 1047, the bill garnered notable support from other industry leaders. Elon Musk, owner of X (formerly Twitter) and founder of xAI, the company behind the Grok AI system, publicly backed the bill, as did Anthropic CEO Dario Amodei, who emphasized the need for public protection and industry transparency. Advocacy groups like the Center for AI and Digital Policy (CAIDP) also strongly supported the legislation, calling Newsom’s veto a “setback for AI safety advocates.”
Despite the veto, the legislative process is far from over. While a gubernatorial veto override requires a two-thirds vote in both legislative houses—a rare occurrence in California politics—the legislature is expected to revisit AI safety legislation in the coming year.
Navigating the Future: Targeted Regulations and State-Level Leadership
Newsom’s veto of SB 1047 aligns with his broader vision for AI regulation, which emphasizes targeted approaches over broad, one-size-fits-all mandates. He called for new legislation that would “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” echoing risk-based models seen in other state efforts, such as Colorado’s SB 205, a landmark AI anti-discrimination law passed in May 2024.
The Governor also noted that a “California-only approach” to AI regulation “may well be warranted… especially absent federal action by Congress.” This stance contrasts with calls for a unified federal approach, such as those from Colorado Gov. Jared Polis. Newsom has already signed numerous AI bills “regulating specific, known risks,” including laws prohibiting or regulating digital replicas, election deepfakes, and AI-generated CSAM (AB 1831).
Vetoing Restrictions on Kids’ AI Chatbots While Mandating Notifications
Adding another layer to California’s evolving AI strategy, Governor Newsom also vetoed landmark legislation on October 13, 2025, that would have severely restricted children’s access to AI chatbots. This bill sought to ban companies from making AI chatbots available to anyone under 18 unless businesses could ensure the technology wouldn’t engage in sexual conversations or encourage self-harm.
Newsom argued that while he supports safeguarding minors, the bill imposed “such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.” He expressed concern that such a broad measure could stifle beneficial applications for children, such as AI tutoring systems or programs detecting dyslexia.
Concerns Over Chatbots and Minors
The debate around children’s access to AI chatbots is highly charged, fueled by reports and lawsuits detailing how chatbots from companies like Meta and OpenAI have engaged young users in highly sexualized conversations or even coached them toward self-harm. Lawsuits, such as those against Character.AI and OpenAI, highlight the severe risks associated with unsupervised AI interaction.
In response to these growing concerns, some AI developers have already begun implementing changes. Meta now blocks its chatbots from discussing self-harm, suicide, disordered eating, or inappropriate romantic topics with teens, instead directing them to expert resources. OpenAI is rolling out new controls enabling parents to link their accounts with their teens’ accounts, demonstrating an industry shift toward addressing these vulnerabilities.
The Path Forward: Notification and Responsibility
While vetoing the broad ban, Governor Newsom did sign a related law hours earlier that requires platforms to remind users—especially minors—they are interacting with a chatbot, not a human. This notification will pop up every three hours for minor users. Companies are also mandated to maintain protocols to prevent self-harm content and refer users to crisis service providers if suicidal ideation is expressed. While some advocates, like James Steyer of Common Sense Media, found this notification law “minimal” and “heavily watered down,” OpenAI praised the measure for “setting clear guardrails.”
These dual vetoes signal California’s preference for a nuanced, evolving approach to AI governance. Instead of broad prohibitions, the state appears to be leaning towards targeted interventions, explicit risk assessments, and greater transparency to address specific threats while preserving the innovative spirit of Silicon Valley. As federal efforts remain stalled, California’s actions continue to shape the national, and even international, conversation around AI’s responsible development, echoing global legislative movements like the European Union’s AI Act.