U.S. Public Demands Regulation of Superhuman AI Development

Key Points
- Majority favor strong AI regulations akin to pharmaceuticals.
- Call for a pause on AI until proven safe and controllable.
- Concerns over public safety increase demand for oversight.
A national survey conducted by the Future of Life Institute between September 29 and October 5, 2025, found that 73% of U.S. adults support robust regulations on AI development, particularly for advanced systems such as expert-level and superhuman AI. The poll revealed significant public anxiety about the rapid pace of AI technologies, with many respondents advocating regulatory frameworks that ensure safety and control before development proceeds. Moreover, 64% of participants favored a complete halt on developing superhuman AI until its safety can be established.
This strong support for regulatory measures signals a shift in U.S. public sentiment toward AI development. The call for a pause reflects growing concern about risks associated with unchecked AI advancement, including concentration of power and broader societal impacts. With more than half of those surveyed saying the government must strengthen regulation, companies may face a landscape shaped by stricter oversight, potentially influencing national AI strategies to lean on regulatory measures rather than technological innovation alone.