AI Safety Index Reveals Gaps in Company Practices and Commitments

Global AI Watch · Editorial Team · 5 min read
Editorial Perspective

This marks a notable step toward transparency, but it exposes a wide execution gap that policy must address by 2027.

What Changed

The winter 2025 edition of the AI Safety Index, released by the Future of Life Institute, graded major AI companies, including Anthropic, OpenAI, and Google DeepMind, on criteria such as risk assessment and governance. Five firms participated in the scorecard for the first time, a gain in transparency that nonetheless exposed deficiencies in their safety practices, which still lag markedly behind global standards.

Strategic Implications

Participation by leading companies signals a shift toward greater transparency, yet it also exposes structural weaknesses in their safety practices. The index underscores a persistent regulatory void: US AI firms operate under fewer constraints than industries such as food service. That mismatch grows more alarming as AI capabilities advance and the attendant risks increase.

What Happens Next

As AI firms lobby against tighter regulations, advocacy for stricter safety standards may intensify, shaping policy decisions through 2027. Companies that fail to meet safety benchmarks could face heightened scrutiny and potential sanctions, pushing some toward rapid compliance upgrades.

Second-Order Effects

A lack of preparedness may ripple through AI supply chains, with downstream impacts on enterprises that depend on secure AI integrations. Such vulnerabilities could also prompt global regulatory spillover, influencing standards in adjacent markets.
