Acting on AI: California Targets Tech Transparency with New Bill
1 October 2025
California Governor Gavin Newsom has signed a bill that aims to “enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence (AI) models”.
Signed into law this week, Senate Bill 53 (SB 53) – also known as the Transparency in Frontier Artificial Intelligence Act – looks to “build public trust while also continuing to spur innovation in these new technologies”, as well as “advance California’s position as a national leader in responsible and ethical AI”.
Under the bill, major AI companies will need to disclose more about their safety protocols for the technology. The law was signed despite reported lobbying efforts from big tech companies such as Meta, the parent company of Facebook, and OpenAI, the creator of ChatGPT.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance,” said Newsom. “AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”
AI is a key priority for investors, politicians and regulators alike, with its rapid growth and ever-increasing prevalence creating risks and opportunities that are challenging to balance and navigate. Shareholders are also increasingly pressing companies on various elements of AI, including governance, as reported by Minerva Analytics, whose Shareholder Proposal Voting Trends Report 2025, published this week, spotlights such resolutions during the early months of 2025.
SB 53 builds on recommendations made in California’s report on “sensible AI guardrails” released in March, which was created by a group of world-leading AI academics and experts convened at Newsom’s request. The report, which targeted guidelines based on “an empirical, science-based analysis of the capabilities and attendant risks of frontier models”, included recommendations on ensuring evidence-based policymaking, balancing the need for transparency with considerations such as security risks, and determining the “appropriate level of regulation in this fast-evolving field”.
“SB 53 is responsive to the recommendations in the report — and will help ensure California’s position as an AI leader,” read a statement on Newsom’s website. “This legislation is particularly important given the failure of the federal government to enact comprehensive, sensible AI policy. SB 53 fills this gap and presents a model for the nation to follow.”
The bill sets requirements for frontier AI developers to strengthen five key elements related to the technology: accountability, innovation, responsiveness, safety and transparency.
Accountability-focused requirements aim to protect whistleblowers who disclose significant health and safety risks posed by frontier models, and create a civil penalty for noncompliance, enforceable by the California Attorney General’s office.
Innovation-minded requirements establish a new consortium within the Government Operations Agency, known as CalCompute, to develop a framework for creating a public computing cluster. The consortium will advance the development and deployment of artificial intelligence that is safe, ethical, equitable and sustainable, by fostering research and innovation.
The responsiveness-related requirement directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments and international standards. The safety-centred requirement creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.
The transparency-focused requirements mean that large frontier developers must publish a framework on their websites describing how they have incorporated national standards, international standards and industry-consensus best practices into their frontier AI frameworks.
Newsom’s statement highlighted that 32 of the 50 top AI companies worldwide are California-based, and that the state accounted for almost 16% of all US AI job postings in 2024, ahead of almost 9% in Texas and 6% in New York, according to the 2025 Stanford AI Index. Additionally, more than half of global venture capital funding went to AI and machine learning startups in 2024, while three of the four companies to have passed the US$3 trillion valuation mark – Google, Apple and Nvidia – are California-based tech companies involved in AI that “have created hundreds of thousands of jobs”, according to the statement.
Newsom also said that SB 53 “recognises that meaningful oversight of AI safety, particularly as it relates to matters of national security, involves joint work with the federal government. Should the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks – ensuring businesses are not subject to duplicative or conflicting requirements across jurisdictions.” He added that SB 53 “fulfils this obligation by authorising a compliance pathway for critical incident-reporting requirements”.
Alongside California, three other states – Colorado, Utah and Texas – have enacted AI governance legislation, with Utah’s having already come into effect and Colorado’s and Texas’ taking effect on 1 January 2026. New York, meanwhile, has an item of AI governance legislation in committee.
Separately, two senators – one Democrat and one Republican – have this week reportedly introduced an AI risk evaluation bill which would create an evaluation programme at the Department of Energy for advanced AI systems to “collect data on the likelihood of adverse AI incidents, such as loss-of-control scenarios and weaponization by adversaries”.
Addressing AI risks is also a key priority beyond US borders: the EU AI Act came into force in August 2024 with the objective of “foster[ing] responsible AI development and deployment” in the bloc. According to the EU, the AI Act “addresses potential risks to citizens’ health, safety, and fundamental rights”, as well as providing developers and deployers with “clear requirements and obligations regarding specific uses of AI while reducing administrative and financial burdens for businesses”.
AI risk is a key priority for companies too: 60% of S&P 500 companies have identified material AI risks to their business, and UK pension giant Railpen created its own AI Governance Framework in August, which aims to translate responsible AI principles into actionable practices. As reported by Minerva Analytics, asset owners and service providers are therefore stepping up with new research and monitoring frameworks designed to help investors and companies navigate this evolving landscape.
Responding to rising interest in this area, Minerva Analytics rolled out additional research and voting guidelines at the start of peak season 2025 to evaluate corporate disclosures against globally recognised AI governance standards such as the OECD AI Principles and the G7 Hiroshima AI Process. These new guidelines supplemented Minerva’s existing cyber-governance questions, first adopted in 2016, offering investors a robust lens through which to assess board readiness, with a clear focus on governance and disclosure quality in key regulatory disclosures such as annual reports and CSR disclosures, as well as corporate websites.
Custom Voting Policy
Minerva is currently offering a complimentary Voting Policy Audit – vote your values and align every vote with your ESG and stewardship goals, including AI guidelines.
Last Updated: 1 October 2025