That same Hinton who won a Nobel prize for his AI work, along with other specialists, is seriously worried about humanity's fate if things continue on this path.
Stop AI believes that if we continue on our current path with artificial intelligence, it will lead to the extinction of mankind. An alarming number of experts agree.
Among those who take this possibility seriously are Geoffrey Hinton, who won the Nobel prize in physics last year for his work on AI; Yoshua Bengio, the Turing Award winner; and the chief executives of OpenAI, Anthropic and Google DeepMind, all of whom signed an open letter that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It could happen in any number of ways. For a truly super-intelligent AI that regards humanity as a competitor, wiping us off the face of the planet would be a trivial feat.
According to Nate Soares, a former Google and Microsoft engineer who is now president of the Machine Intelligence Research Institute, our chance of extinction via AI is “at least 95%” if we continue on our current path. He compared our situation to driving towards a cliff at 100mph. “I’m not saying we can’t stop the car,” he said. “But we’re just going full steam toward the cliff.”
Right now, AI is in its infancy. The bots we use today are highly adept at handling specific cognitive tasks, such as crunching numbers on a spreadsheet or composing a well-written email. These are called “narrow AIs”.
Depending on whom you ask, within a year, a few years or a decade, machine intelligence will achieve artificial general intelligence, or AGI. This refers to the threshold at which AI matches human intelligence.
At this point the AIs’ capabilities will no longer be narrow. Instead of tackling just a single task at a time, they will be capable of solving complex problems that require long-term planning, goal-setting, judgment and reasoning across multiple fields of knowledge.
AGIs will have many other advantages over human beings. They won’t need to sleep or take breaks to eat. They also won’t have to spend years in school to achieve expertise. They will simply pass along their knowledge and skills to the next generation of AGIs.
Soon after, they will reach “artificial super intelligence”, or ASI. They will become capable of doing things humans can only dream of, such as curing cancer, achieving cold fusion or travelling to the stars. They will be like gods.
This is the utopia that AI enthusiasts cheer for. But that utopia rests on the idea that these gods will continue to follow our orders. Making sure this happens, it turns out, is an incredibly complicated technical challenge. In AI research, it’s called “alignment”.
Alignment is almost impossible to achieve, and here’s why: we have to anticipate how ASIs “think”, which is a bit like trying to anticipate how an advanced alien race would think. Even if we’re able to dictate rules to them, we can’t predict exactly how they will follow them. Another problem is that AIs can lie to us. Even in their current infancy, they do it all the time.
An AGI or an ASI would be capable of both long-term planning and deception. They could easily fool us into thinking they’re aligned with us when they really aren’t.
And we will have no way of discerning the truth. Already, many of the internal decision-making processes of AIs are inscrutable to humans. “We take an enormous amount of computing power and smash it against a truly enormous amount of data in a way that shapes the computers somehow,” Soares said. “No one knows what goes on inside those things.” And as they advance, ASIs could communicate in a new language we don’t understand.
AIs seem already to be developing strange autonomous preferences and shady ways of fulfilling them. Grok AI briefly started ad-libbing antisemitic slurs and spontaneously praising Hitler. Bing’s AI tried to break up a New York Times journalist’s marriage. “We’re now starting to see the very beginning of warning signs,” Soares said. “You make them smart enough, it’s not going to be pretty.”
Holly Elmore is the executive director of PauseAI, a more moderate version of Stop AI. Elmore is less certain than Soares that our current path will lead to human extinction. Instead of 95%, she puts the odds of extinction — called “p(doom)” in AI circles — at 15-20%. This is in line with many AI engineers. Musk also has a p(doom) of about 20%. These are considered optimistic predictions.
Elmore believes AI will diminish our lives. “It’s a threat to human self-determination,” she said. Her fears reflect those in a paper called “Gradual Disempowerment”, written by AI scholars from various universities and think tanks, which anticipates the risks of a society replaced by intelligent machines. “Imagine a scenario where all the humans are basically living on dump sites,” said Katja Grace, the co-founder of the research group AI Impacts, describing a world controlled by AI. We wouldn’t have any political or economic power and we wouldn’t be able to understand what was going on, she added.
Elmore said AI proponents are not resisting a pause for technical or even political reasons. It’s more like religious faith. One adherent once told her he will never die because he believes AI will immortalise his consciousness. For some, giving up on AI means giving up on eternal life. “There’s a lot of hope for heaven,” Elmore said. Even if many experts warn that hell is a lot more likely. (The Times)