Leading AI researchers and CEOs, including Sam Altman and Demis Hassabis, express concerns about AI risks and call for global attention.
AI industry leaders express concerns over AI's potential to cause extinction
Top AI experts, engineers, and CEOs recently raised serious concerns about the potential risks of artificial intelligence (AI).
They compare these risks to pandemics and nuclear war, stressing the importance of immediate global action.
Prominent figures including Sam Altman (OpenAI CEO), Demis Hassabis (Google DeepMind CEO), and esteemed AI researchers Geoffrey Hinton and Yoshua Bengio co-signed a statement stressing the crucial importance of preventing AI-related extinction, published by the San Francisco-based non-profit the Center for AI Safety.
"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," states the introduction to the concise 22-word statement. "Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."
The introduction acknowledges the ongoing discussion among AI experts, journalists, policymakers, and the public about the many risks AI poses, while recognising how difficult it can be to voice concerns about the most severe risks of advanced AI. The statement aims to overcome this obstacle by opening up dialogue and building common knowledge among the growing number of experts and public figures who take these risks seriously.
The recent statement adds to the ongoing and sometimes contentious conversation about AI safety.
Earlier this year, prominent figures such as Steve Wozniak, Elon Musk, and more than 1,000 others signed an open letter calling for a six-month "pause" in AI development.
Their letter expressed worries about the rapid advancement and deployment of powerful AI systems that lack transparency, predictability, and effective control.