The day may come when the AI that scientists are working on will outsmart humans, and killer robots could enslave and then wipe out the entire human population, MIT Professor Max Tegmark warned at the TED 2018 conference.
No, this isn’t a script for a Terminator spinoff, though it very much sounds like one. Discussing AGI (Artificial General Intelligence) and the progress made in the field in recent years, Tegmark says humanity needs to consider the consequences of such a development before it actually happens.
To put it differently, before humanity creates a superintelligence, it needs to decide what that superintelligence should be like, what it is for, and how it should be used. Today, creating a superintelligence is not a matter of “if” but of “when.” If humanity enters this next stage of evolution unprepared, the result could resemble the Terminator films, with robots taking over from their less intelligent, weaker predecessors – humans.
Strangely enough, quite a few scientists would welcome this stage of evolution, even though it would imply the complete extinction of the human race. Others would want to use it to serve their own interests.
“One option my colleagues would like to do is build superintelligence, and keep it under human control, like an enslaved god. But you might worry that maybe we humans just aren't smart enough to handle that much power. Also, aside from any moral qualms you might have about enslaving superior minds, you should be more worried that maybe the superintelligence could outsmart us,” he says.
“They could break out and take over. I have colleagues who are fine with this, even if it causes human extinction, as long as they feel the AIs are our worthy descendants, like children. But how would we know the AIs have adopted our best values?” Tegmark adds.
And that is what scientists should focus on: the AI they are working on should share our values, not impose its own. AGI should serve to improve life for humanity, making everyone richer, healthier and happier, rather than follow an agenda of its own.
This is the key to being empowered rather than overpowered by technology, the professor says. Unless mankind prepares in advance for whatever developments may occur, the results could be catastrophic.