AI Sector Shifts Focus to Existential Risk Strategy

In a notable strategic shift, leaders in the AI sector have begun emphasizing 'existential risk', steering attention away from documented present-day harms such as discrimination and misinformation and toward hypothetical future dangers. The approach mirrors tactics used by other industries, notably Big Tobacco, which deflected existing evidence with calls for further study. A recent example is the call by AI executives for a moratorium on new language models, which also drew attention away from ongoing legislative efforts such as the AI Act.
This framing serves two purposes: it minimizes current AI harms while positioning AI developers as the only parties capable of guarding against the looming threats they describe. The implicit claim that only they can address these risks reshapes the regulatory debate in their favor. By defining AI's dangers this way, the industry can blunt critical regulatory initiatives and deepen dependence on the very companies claiming to manage the risks, echoing earlier corporate maneuvers to evade regulation.