NEW YORK: Leading artificial intelligence (AI) executives and technologists have warned that AI could lead to the end of humanity.
According to media reports, dozens of people have endorsed a statement published on the Center for AI Safety’s webpage, which states that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Sam Altman, chief executive of OpenAI, the maker of ChatGPT, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have also signed the statement.
Similarly, Dr. Geoffrey Hinton, who has previously warned about the dangers of AI, has backed the Center for AI Safety’s statement. Earlier, in March 2023, experts including Tesla CEO Elon Musk signed an open letter calling for a pause on the development of advanced AI systems.
OpenAI recently suggested in a blog post that superintelligence could be regulated in a manner similar to nuclear energy, saying that we would likely need something like the International Atomic Energy Agency for superintelligence efforts.