Leading experts warn of a risk of extinction from AI
AI experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than humans, and it is time to impose limits to ensure they cannot take control or destroy the world.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a group of scientists and tech industry leaders said in a statement that was posted on the Center for AI Safety's website.
Sam Altman, CEO of OpenAI, the Microsoft-backed AI research lab behind ChatGPT, and Geoffrey Hinton, the so-called godfather of AI who recently left Google, were among the hundreds of leading figures who signed the statement.
The call for guardrails on AI systems has intensified in recent months as public institutions and profit-driven enterprises have embraced new generations of the technology.
In a separate statement published in March and since signed by more than 30,000 people, tech executives and researchers called for a six-month pause on the training of AI systems more powerful than GPT-4, the latest model behind the ChatGPT chatbot.
An open letter warned: "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.
"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.
Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."
He added that society should endeavor to address all of the risks posed by AI simultaneously. "Societies can manage multiple risks at once; it's not 'either/or' but 'yes/and,'" he said. "From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well."
NPR's Bobby Allyn contributed to this story.
Copyright 2023 NPR. To see more, visit https://www.npr.org.