OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
Overview
On April 10, 2026, Wired reported that OpenAI is actively supporting Illinois state bill SB 3444, a legislative measure designed to shield AI laboratories from liability in catastrophic scenarios involving their models. This marks a strategic pivot for OpenAI, moving from opposing liability bills to backing specific exemptions for "critical harms," defined as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. The move aligns with broader Silicon Valley efforts to prevent fragmented state-level regulations while securing legal protections for frontier model deployment.
Key Highlights
- Liability Shield Conditions: AI labs are exempt from liability for critical harms if they did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports.
- Frontier Model Definition: The bill defines frontier models as any AI system trained using more than $100 million in computational costs, targeting major labs like OpenAI, Google, xAI, Anthropic, and Meta.
- Critical Harm Scope: Includes bad actors using AI to create chemical, biological, radiological, or nuclear weapons, or AI autonomously committing criminal offenses leading to extreme outcomes.
- OpenAI Stance: Spokesperson Jamie Radice stated, "We support approaches like this because they focus on what matters most: Reducing the risk of serious harm... while still allowing this technology to get into the hands of the people and businesses."
- Federal Harmonization: OpenAI's Caitlin Niedermeyer testified in favor of a federal framework to avoid "a patchwork of inconsistent state requirements," echoing the Trump administration's crackdown on state AI safety laws.
- Public Opposition: Scott Wisor of the Secure AI project noted that 90% of polled Illinois residents oppose exempting AI companies from liability, pointing instead to other pending bills that would increase liability.
- Strategic Shift: Experts note that SB 3444 goes further than previous bills OpenAI supported, signaling a hardened legislative strategy amid growing safety concerns like those raised by Anthropic's Claude Mythos.
Technical Details
While primarily legislative, the bill establishes technical thresholds for regulatory applicability. The $100 million compute cost threshold serves as a proxy for model capability, effectively creating a regulatory moat that applies only to large-scale frontier developers. Compliance requires the publication of specific documentation: safety reports, security audits, and transparency disclosures hosted on the developer's website. The legislation distinguishes between intentional misconduct and autonomous model behavior, granting immunity for the latter provided documentation standards are met. This creates a compliance-based safe harbor rather than a performance-based safety guarantee.
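The applicability and safe-harbor conditions described above can be sketched as a simple decision rule. This is purely illustrative of the bill's reported logic, not legal analysis; all class and function names are hypothetical, and the three report categories are taken from the article's description.

```python
from dataclasses import dataclass, field

# Proxy threshold from the bill: >$100M in training compute cost.
FRONTIER_COMPUTE_THRESHOLD_USD = 100_000_000

# Documentation categories the bill reportedly requires to be published.
REQUIRED_REPORTS = {"safety", "security", "transparency"}


@dataclass
class ModelProfile:
    """Hypothetical record of a developer's model for this sketch."""
    training_compute_cost_usd: float
    published_reports: set = field(default_factory=set)
    caused_harm_intentionally_or_recklessly: bool = False


def is_frontier_model(m: ModelProfile) -> bool:
    """The bill's definitions apply only above the compute-cost threshold."""
    return m.training_compute_cost_usd > FRONTIER_COMPUTE_THRESHOLD_USD


def qualifies_for_safe_harbor(m: ModelProfile) -> bool:
    """Compliance-based shield: no intentional or reckless conduct,
    plus all required reports published."""
    return (
        not m.caused_harm_intentionally_or_recklessly
        and REQUIRED_REPORTS <= m.published_reports
    )
```

Note that, as the section observes, this is a compliance-based safe harbor: the rule checks whether documentation exists, not whether the model is actually safe.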
Impact & Significance
This development signals a critical juncture in AI governance, where industry leaders seek to codify liability limits before catastrophic incidents occur. For developers, it suggests that transparency reporting may become a legal shield rather than purely a safety mechanism. The push for federal harmonization challenges state-level innovation in AI safety, potentially stifling stricter local regulations in favor of industry-friendly national standards. If passed, SB 3444 could set a precedent for other states, fundamentally altering the legal risk profile for deploying high-capability AI systems in high-stakes environments. The tension between safety advocacy groups and major labs highlights the growing friction over who bears the cost of AI failures. Ultimately, this bill could cement the dominance of well-capitalized labs capable of meeting the $100 million compute threshold while shielding them from the consequences of autonomous model actions.