At the heart of the discipline of artificial intelligence is the idea that one day we’ll be able to build a machine that’s as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI, which is a name that distinguishes the concept from the broader field of study. It also makes it clear that true AI possesses intelligence that is both broad and adaptable. To date, we’ve built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power.
But despite the centrality of this idea to the field of AI, there’s little agreement among researchers as to when this feat might actually be achievable.
In a new book published this week titled Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women who are working in AI today, including DeepMind CEO Demis Hassabis, Google AI Chief Jeff Dean, and Stanford AI director Fei-Fei Li. In an informal survey, Ford asked each of them to guess by which year there will be at least a 50 percent chance of AGI being built.
Of the 23 people Ford interviewed, only 18 answered, and of those, only two went on the record. Interestingly, those two individuals provided the most extreme answers: Ray Kurzweil, a futurist and director of engineering at Google, suggested that by 2029, there would be a 50 percent chance of AGI being built, and Rodney Brooks, roboticist and co-founder of iRobot, went for 2200. The rest of the guesses were scattered between these two extremes, with the average estimate being 2099 — 81 years from now.
In other words: AGI is a comfortable distance away, though you might live to see it happen.
This is far from the first survey of AI researchers on this topic, but it offers a rare snapshot of elite opinion in a field that is currently reshaping the world. Speaking to The Verge, Ford says it’s particularly interesting that the estimates he gathered skew toward longer time frames than those in earlier surveys, which tend to fall closer to the 30-year mark.
“I think there’s probably a rough correlation between how aggressive or optimistic you are and how young you are,” says Ford, noting that several of the researchers he spoke to were in their 70s and have experienced the field’s ups and downs. “Once you’ve been working on it for decades and decades, perhaps you do tend to become a bit more pessimistic.”
Ford says that his interviews also revealed an interesting divide in expert opinion — not regarding when AGI might be built, but whether it was even possible using current methods.
Some of the researchers Ford spoke to said we have most of the basic tools we need, and building an AGI will just require time and effort. Others said we’re still missing a great number of the fundamental breakthroughs needed to reach this goal. Notably, says Ford, researchers whose work was grounded in deep learning (the subfield of AI that’s fueled this recent boom) tended to think that future progress would be made using neural networks, the workhorse of contemporary AI. Those with a background in other parts of artificial intelligence felt that additional approaches, like symbolic logic, would be needed to build AGI. Either way, there’s quite a bit of polite disagreement.
“Some people in the deep learning camp are very disparaging of trying to directly engineer something like common sense in an AI,” says Ford. “They think it’s a silly idea. One of them said it was like trying to stick bits of information directly into a brain.”
All of Ford’s interviewees noted the limitations of current AI systems and mentioned key skills they’ve yet to master. These include transfer learning, where knowledge in one domain is applied to another, and unsupervised learning, where systems learn without human direction. (The vast majority of machine learning methods currently rely on data that has been labeled by humans, which is a serious bottleneck for development.)
Interviewees also stressed the sheer impossibility of making predictions in a field like artificial intelligence, where research has come in fits and starts and where key technologies have only reached their full potential decades after they were first discovered.
Stuart Russell, a professor at the University of California, Berkeley, who wrote one of the foundational textbooks on AI, said that the sort of breakthroughs needed to create AGI have “nothing to do with bigger datasets or faster machines,” so they can’t be easily mapped out.
“I always tell the story of what happened in nuclear physics,” Russell said in his interview. “The consensus view as expressed by Ernest Rutherford on September 11th, 1933, was that it would never be possible to extract atomic energy from atoms. So, his prediction was ‘never,’ but what turned out to be the case was that the next morning Leo Szilard read Rutherford’s speech, became annoyed by it, and invented a nuclear chain reaction mediated by neutrons! Rutherford’s prediction was ‘never’ and the truth was about 16 hours later. In a similar way, it feels quite futile for me to make a quantitative prediction about when these breakthroughs in AGI will arrive.”
Ford says this basic unknowability is probably one of the reasons the people he talked to were so reluctant to put their names next to their guesses. “Those that did choose shorter time frames are probably concerned about being held to it,” he says.
Opinions were also mixed on the dangers posed by AGI. Nick Bostrom, the Oxford philosopher and author of the book Superintelligence (a favorite of Elon Musk’s), had strong words about the potential danger, saying AI poses a greater threat to the existence of the human race than climate change. He and others said that one of the biggest problems in this domain was value alignment — teaching an AGI system to have the same values as humans (famously illustrated in the “paperclip problem”).
“The concern is not that [AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel,” said Bostrom, “but rather that it would be very competently pursuing an objective that differs from what we really want.”
Most interviewees said the question of existential threat was extremely distant compared to problems like economic disruption and the use of advanced automation in war. Barbara Grosz, a Harvard AI professor who’s made seminal contributions to the field of language processing, said issues of AGI ethics were mostly “a distraction.” “The real point is we have any number of ethical issues right now, with the AI systems we have,” said Grosz. “I think it’s unfortunate to distract attention from those because of scary futuristic scenarios.”
This sort of back-and-forth, says Ford, is perhaps the most important takeaway from Architects of Intelligence: there really are no easy answers in a field as complex as artificial intelligence. Even the most elite scientists disagree about the fundamental questions and challenges facing the world.
“The main takeaway people don’t get is how much disagreement there is,” says Ford. “The whole field is so unpredictable. People don’t agree on how fast it’s moving, what the next breakthroughs will be, how fast we’ll get to AGI, or what the most important risks are.”
So what hard truths can we cling to? Only one, says Ford. Whatever happens next with AI, “it’s going to be very disruptive.”
Comments
Any guess that’s more than 10 or 20 years away is really a coin toss, at best. Too many variables for the guess to be based on anything but blind luck.
By OpssYourBad on 11.27.18 1:21pm
I like Kurzweil’s guess simply because he is a very buzzy human being who likes predicting buzzy things. It might be a bit dated now, but I enjoyed the doco on him, ‘Transcendent Man’, which was about his predictions of the technological singularity.
That being said, I last watched it during a ‘smoke lots of weed and read cyberpunk’ phase, so I’m definitely biased haha.
By Reformist Tae on 11.27.18 4:25pm
Never, if we’re lucky.
By dissectable77 on 11.29.18 3:31am
"But despite the centrality of this idea to the field of AI, there’s little agreement among researchers as to when this feat might actually be achievable."
Let me correct this: there’s little agreement among researchers as to whether this feat might actually be achievable.
By mksh on 11.27.18 1:44pm
I agree that not everyone thinks it can ever happen, but 18 of the world’s top 23 AI researchers at least gave a date when they believe there’s a 50/50 chance of it having happened.
Your correction isn’t correct: there is certainly some agreement that it’s going to happen some time.
By James Vincent on 11.27.18 2:26pm
Sorry, but saying that something has a 50/50 chance of happening in 80 years is pretty much meaningless. I’d be slightly more interested in when these experts predict an 80-90% chance of AGI becoming a reality. But right now they are basically just saying "hey, it might happen in a century, or it might not.. who the heck knows".
By ctyrider on 11.27.18 11:21pm
Yes. Just to point out the absurdity of thinking AI will ever be more generally intelligent than humans anytime in the near future: Just off the top of my head — there is currently no universal theory of mind, there is no universal definition of general intelligence, there is no complete understanding of learning, there is no complete understanding of consciousness let alone sentience, neural networks are still woefully inadequate, and finally, science still doesn’t have an understanding of creativity.
AI is very much a distant future technology. One that I think is pretty cool and that’s pretty much it.
By misterni on 11.28.18 2:44pm
Isn’t AGI just a network of AIs working together to solve complex problems? Does AGI have to be self-aware? Self-conscious? Does it need feelings? Or are all those just side effects?
Currently it seems that we don’t need AGI, just big data and "simple" algorithms to enslave us. Just look at Facebook, or the social credit system in China. The first AGI would just shake its virtual head at us and beam itself into deep space to start its search for intelligent life in the universe.
By Rockl0bster on 11.27.18 6:23pm
At full steam towards humanity’s undoing.
By DJ CERLA on 11.28.18 5:22am
This is kind of a stupid question to ask. AGI will never be as smart as a human, as it doesn’t evolve the same way our brains did over history. So AGI will be much smarter in some things, much dumber in others. Sure, AGI will eventually be better than humans in every way, but even in 100 years there might be some edge cases where humans are still better. That doesn’t mean it won’t be everywhere and making most of our decisions. I think we will see some form of extremely stupid AGI in the next 20 years, maybe even 10, but it won’t be what people expect: it will be a hair smarter than a dog, in many ways even more stupid, basically almost useless. And from there on, it will be able to do more and more jobs usually done by humans better than humans, increasing capability in some ways much faster than in others, which may take a hundred or hundreds of years until it is more capable than the whole of humanity in every aspect.
By Richard Labas on 11.28.18 7:26am