
AI Executives Shift Focus to Future Risks, Influence AI Act

Global AI Watch · Editorial Team · 4 min read
Editorial Perspective

This marks the first major instance of AI companies leveraging existential-risk narratives to shape the regulatory landscape, with effects likely to play out through 2026.

What Changed

In March 2023, technology executives and researchers, including Elon Musk, signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. The episode marked the first notable instance of AI companies pivoting from present-day harms to hypothetical future risks. The tactic echoes Big Tobacco's 1950s playbook of steering public debate, but with a key difference: rather than downplaying documented, immediate harms, as seen in industries like nuclear energy, it foregrounds speculative existential threats.

Strategic Implications

By positioning themselves as custodians of existential risk, AI companies could significantly expand their influence over regulatory debates, potentially blunting stringent measures such as the EU AI Act. Unless policymakers counterbalance this rhetoric with documented present-day harms, such as misinformation and privacy violations, the narrative risks eroding their leverage.

What Happens Next

Expect intensified debate over AI governance frameworks through 2026. Key actors such as the European Commission and U.S. lawmakers are likely to enact regulations by late 2026 that either endorse or challenge this existential framing, forcing policymakers to weigh immediate regulatory demands against the allure of long-term existential-risk mitigation.

Second-Order Effects

Should AI companies succeed in dominating the narrative, regulation may drift toward voluntary compliance frameworks. That shift would ripple into adjacent markets such as cybersecurity and data privacy, where rules could either tighten or be deprioritized, and could deepen dependence on proprietary governance tools offered by the largest AI firms.
