OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter



An AI-generated picture of “AI taking over the world.”

Stable Diffusion

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life’s work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of “a broad spectrum of important and urgent risks from AI,” reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.

The statement comes as Altman travels the globe, taking meetings with heads of state about AI and its potential risks. Earlier in May, Altman argued for regulation of his industry in front of the US Senate.

Considering its short length, the CAIS open letter is notable for what it doesn’t include. For example, it doesn’t specify exactly what it means by “AI,” considering that the term can apply to anything from ghost movements in Pac-Man to language models that can write sonnets in the style of a 1940s wise-guy gangster. Nor does the letter suggest how risks of extinction might be mitigated, only that mitigating them should be a “global priority.”

Still, in a related press release, CAIS says it wants to “put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” and likens warning about AI to J. Robert Oppenheimer warning about the potential effects of the atomic bomb.

AI ethics experts are not amused

An AI-generated picture of a globe that has stopped spinning.

Stable Diffusion

This isn’t the first open letter about hypothetical, world-ending AI dangers we’ve seen this year. In March, the Future of Life Institute released a more detailed statement signed by Elon Musk that advocated for a six-month pause in AI models “more powerful than GPT-4,” which received broad press coverage but was also met with a skeptical response from some in the machine-learning community.

Experts who often focus on AI ethics aren’t amused by this growing open-letter trend.

Dr. Sasha Luccioni, a machine-learning research scientist at Hugging Face, likens the new CAIS letter to sleight of hand: “First of all, mentioning the hypothetical existential risk of AI in the same breath as very tangible risks like pandemics and climate change, which are very fresh and visceral for the public, gives it more credibility,” she says. “It’s also misdirection, attracting public attention to one thing (future risks) so they don’t think of another (tangible current risks like bias, legal issues, and consent).”
