Why Elon Musk Is Sounding the Alarm on Artificial Intelligence
“AI is a fundamental risk to the existence of human civilization.”
KEVIN DRUM
Elon Musk is a household name. The South African-born billionaire can seemingly pioneer anything: PayPal, Tesla, SpaceX, and (maybe) the Hyperloop. He’s an engineer and a marketer, Steve Jobs and Steve Wozniak rolled into one. And he’s always great for a quote because he’s photogenic, telegenic, and technogenic.
But there’s one technology he’s deeply scared of: artificial intelligence. “AI is a fundamental risk to the existence of human civilization,” he warned a meeting of the nation’s governors earlier this year. “I have access to the very most cutting-edge AI, and I think people should be really concerned about it.” He has also warned that Google is creating “a fleet of artificial-intelligence-enhanced robots capable of destroying mankind.”
Musk is not alone. Bill Gates, Stephen Hawking, and various AI experts have also sounded the alarm.
Why are pillars of the tech community so concerned? Consider: If you truly believe that human-level AI is coming soon—as Musk does, and as you should, too—it’s pretty obvious what comes next: above-human-level AI. After all, why should progress stop just because we hit that arbitrary milestone? It won’t. Once AI reaches human level, it will start developing new improvements all on its own.
How would this work? Well, suppose we build a computer that plays chess—not just any old computer, but a superintelligent AI computer that learns as it plays and gets better and better. What would it do?
It would play chess, and its sole motivation would be improving its chess game. It wouldn’t hate humans. But neither would it love humans or feel any loyalty to them. It just wouldn’t care about us. All it would care about is playing better chess.
Very quickly it could decide that it needed to build a more powerful computer if it wanted to keep improving. So that’s what it would do. The entire planet would be nothing but raw material for building more and more computing power, and our chess bot would devour it. So much for the human race.
This sounds insane. But the chess thing is just a quirky way of explaining the broader problem: namely, that a digital superintelligence will inevitably develop a mind of its own. The chess bot wouldn’t mindlessly play chess forever. After all, it’s superintelligent. Like any other AI, no matter how we initially program it, it will pretty quickly figure out how to alter that programming and formulate its own goals. And while we’ll probably never know what those goals are—and couldn’t understand them if we did—they’re pretty likely to include a desire for more and more computing power. The end result for humanity is the same whether the goal is chess or unraveling the mysteries of the universe.
At its most extreme, this scenario devolves into what futurists call “the singularity.” Because computers are fundamentally faster than human brains, every new increase in AI capability will happen in less and less time, leading very quickly to AI that’s fantastically more intelligent than humans. At that point, AI will be as incomprehensible to us as an adult is to a one-year-old—and if it decides to do something that harms us, we’ll have as little chance of fighting back as a one-year-old would.
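If you want to see why the timeline compresses like that, here’s a toy back-of-the-envelope sketch in Python. This is purely my illustration, not anything from Musk or the AI literature, and every number in it is made up for the sake of argument. The one assumption doing the work: each doubling of capability takes half as long as the one before it.

```python
# Toy model of recursive self-improvement -- an illustration of the
# "each improvement comes faster than the last" claim, not a simulation
# of any real AI system. All numbers here are assumed.
capability = 1.0   # arbitrary units; 1.0 = human level
interval = 12.0    # months until the first doubling (assumed)
elapsed = 0.0

for step in range(1, 11):
    elapsed += interval    # wait out the current improvement cycle
    capability *= 2        # capability doubles
    interval /= 2          # the next cycle takes half as long
    print(f"step {step:2d}: {capability:5.0f}x human level at month {elapsed:.2f}")

# The waits form a geometric series (12 + 6 + 3 + ...), so even an
# unlimited number of doublings finishes before month 24.
```

Because the waiting times form a geometric series, the total stays under 24 months no matter how many doublings you pile on. That, in a nutshell, is the arithmetic intuition behind the singularity.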
This fear has prompted the famously libertarian Musk to do the unthinkable: support more government regulation. “I’m against overregulation, for sure,” Musk emphasized. “But man, I think we’ve got to get on that with AI, pronto.” He and Hawking also think we should start up colonies on other planets as a bolt-hole. But neither plan is likely to work. Regulation might slow things down, but someone, somewhere, is eventually going to build a superintelligent AI anyway. As for Mars, the technology for a self-sustaining colony is pretty far off. And wouldn’t a super-AI just follow us there?
Compared with this Terminator scenario, a few decades of mass unemployment and misery at the hands of AI robots and their zillionaire owners seem like small potatoes. The difference is that the AI jobocalypse is coming soon, and if we start now, we can keep greedy zillionaires from reaping all the rewards. By contrast, remorseless super-AI is still pretty speculative, and there’s not much we can do about it anyway. So as scary as Musk and Gates and Hawking find it, we’re probably still better off focusing on the end of work rather than the end of humanity.