Dozens of artificial intelligence (AI) experts, including the CEOs of OpenAI, Google DeepMind and Anthropic, recently signed an open statement published by the Center for AI Safety (CAIS).
We just put out a statement:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://t.co/N9f6hs4bpa

— Dan Hendrycks (@DanHendrycks) May 30, 2023
The statement contains a single sentence:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Among the document’s signatories are a veritable “who’s who” of AI luminaries, including the “Godfather” of AI, Geoffrey Hinton; University of California, Berkeley’s Stuart Russell; and Massachusetts Institute of Technology’s Lex Fridman. Musician Grimes is also a signatory, listed under the “other notable figures” category.
Related: Musician Grimes willing to ‘split 50% royalties’ with AI-generated music
While the statement may appear innocuous on the surface, the underlying message is a somewhat controversial one in the AI community.
A seemingly growing number of experts believe that current technologies may, or inevitably will, lead to the development of an AI system capable of posing an existential threat to the human species.
Their views, however, are countered by a contingent of experts who disagree.
Author: Tristan Greene