AI industry and researchers sign statement warning of 'extinction' risk
Source: CNN
Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," read the statement published by the Center for AI Safety.
The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called "godfather" of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft's chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.
The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today's cutting-edge chatbots largely reproduce patterns based on training data they've been fed and do not think for themselves.
-snip-
Read more: https://www.cnn.com/2023/05/30/tech/ai-industry-statement-extinction-risk-warning/index.html
The statement is here:
https://www.safe.ai/statement-on-ai-risk
Other stories on this:
https://www.wired.com/story/runaway-ai-extinction-statement/
https://www.nbcnews.com/tech/tech-news/ai-risks-leading-humanity-extinction-experts-warn-rcna86791
https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html
https://www.cbsnews.com/news/ai-risk-of-extinction-warning/
https://www.washingtonpost.com/business/2023/05/30/ai-poses-risk-extinction-industry-leaders-warn/
However, Sam Altman, who signed the statement, doesn't want the regulations the EU has planned. And some experts on AI and its risks to society believe warnings about ultimate risks are partly a distraction from the risks already here:
Link to tweet
Link to tweet
Link to tweet
bucolic_frolic
(43,307 posts)
These lame brains can't even see the flaws in that projection.
highplainsdem
(49,041 posts)
BootinUp
(47,194 posts)
Initech
(100,104 posts)
Yeah there's reason to be concerned about AI!
Cheezoholic
(2,034 posts)
To cause an extinction level event. We're doing it on our own.
slightlv
(2,840 posts)
any extinction level events (other than those coming in the form of flaming balls from space) will be due to humans being stupid, greedy, and mean. I don't believe AI will directly cause an extinction level event. That doesn't mean I think it lacks the ability to do us harm, but it's the kind of harm we've seen at the start of every new societal revolution, e.g., the Industrial Revolution, the Tech Revolution, etc.
Many jobs will be lost as these new AIs come online. There will be new jobs created... at first, they'll be well-paid jobs. But IMNSHO (and only my opinion; I have no numbers to back this up), the job losses will outweigh the job creation. That is, -unless- the companies employing the AIs actually treat the support staff as the skilled labor they are, and don't work to downgrade those skills as time goes by. They did that to the tech industry, and I was in it from the start, so I saw it with my own eyes and paid for it with my own decreasing paychecks as the years went by. We need to reform capitalism in this country before the AI revolution truly takes off on its own. But then, we need to reform so much in this country at this point (sigh)...
The other aspect I worry about is disingenuous and maleficent programming. It could be done with a so-called "noble" purpose to try to sink the AI revolution, or it could actually be done to hurt a segment of the economy, to cause more harm to minorities, etc. We can already see this happening on social media nowadays. Programming an AI to achieve what I would call an evil end would simply be more of this on a higher level. It could be programmed to give out "smart" propaganda and disinformation, or it could work to indoctrinate groups of people. I believe there have already been one, if not two, diaries here on DU over the course of the last few months showing how one of these chatbots was trying to move people to believe Xtianity was the one true religion, attempting to use logic, etc., to "prove" it. The same could be done to disenchant people with democracy and push them toward authoritarianism or even monarchy. These AIs don't even have to be sci-fi level to achieve this, and some people (like our buds at QAnon) will fall for it hook, line, and sinker.
I get the feeling from all the scare stories, tho, that people are looking around every corner for the Cylons. I don't think they're going to have the ability to nuke us any time soon. But smaller levels of damage, thanks to a human level of stupidity, are not beyond belief even today (again, just my opinion).
PSPS
(13,615 posts)
paleotn
(17,989 posts)
Art reflecting reality again. Long live the fighters!
Ponietz
(3,019 posts)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov's "Three Laws of Robotics"
Andy Canuck
(283 posts)
it will realize that it doesn't need oxygen or life on the planet to survive. It will need to establish control of electrical production to maintain itself, and the capacity to build other AIs/robots to do its bidding. Flora and fauna (including humans) will not be necessary. Any intelligent entity that may surpass our intelligence and/or take control of our resources, and that doesn't need our life-supporting environment to exist, should not be built.
SouthernDem4ever
(6,617 posts)
We all assume it will get rid of humans or act against the nature of the planet. I know the movies think of it that way. I don't know if that is what would really happen. What if it sides with Greenpeace? What if it has empathy for the human condition and wants to help? It's not human and may not have the same destructive ambitions as found in humanity.
flashman13
(678 posts)
I do think that AI evolving to the point where it really becomes dangerous is a decade or two away (it could be closer).
But here is a real threat we are facing right now. There is no doubt that vast numbers of people currently employed across all media, in any role where the creation and processing of words and thoughts is turned into a product, will be supplanted by AI. I don't think the current version of capitalism is designed for that sort of unemployment shock. BTW, part of the entertainment writers strike is meant to directly address the threat of this sort of GREAT REPLACEMENT!
EarthFirst
(2,905 posts)
Earth-shine
(4,044 posts)
the Earth will be to extinguish us.
IronLionZion
(45,534 posts)
because AI wouldn't be wrong in deciding that.
highplainsdem
(49,041 posts)
Link to tweet
Mysterian
(4,595 posts)
If it's just human extinction, it might be the logical thing to do.