The original vision of the pioneers of artificial intelligence was to create human-level AI, machines that could understand, reason, and act like humans. But six decades of research have proven that artificial general intelligence (AGI) is a very tough nut to crack.
The field has seen tremendous progress, especially with an explosion of innovation in deep learning and neural networks in recent years. But we also face many fundamental questions: What is the path towards AGI? What are the capabilities and limits of current AI technologies? How do we know that we’ve achieved AGI? And how far are we from human-level AI?
With all the hype and confusion surrounding AI, it’s difficult to answer those questions. Our AI models can beat human professionals at the most complicated games. But at the same time, they can’t replicate some of the simplest cognitive functions of humans.
Author and futurist Martin Ford has done a great job of answering these questions in his book Architects of Intelligence: The Truth About AI from the People Building It.
Ford’s book is a compilation of interviews with 23 leading AI scientists and experts. It discusses, among other things, the current state of AI and the path to artificial general intelligence.
Architects of Intelligence answers many of these fundamental questions and, like everything in AI, leaves us with many more.
Deep learning is here to stay
With deep learning being the cutting edge of AI, scientists are divided over the extent of its capabilities and limits. But Ford’s interviews show that AI experts agree that deep learning will be crucial to reach artificial general intelligence.
“The scientific concepts that are behind deep learning and the years of progress made in this field, means that for the most part, many of the concepts behind deep learning and neural networks are here to stay. Simply put, they are incredibly powerful. In fact, they are probably going to help us better understand how animal and human brains learn complex things,” says Yoshua Bengio, computer science professor at the University of Montreal.
Bengio’s remark is somewhat expected, given that he is one of the pioneers of deep learning. But deep learning also draws praise from its critics, such as neuroscientist and AI expert Gary Marcus.
“I see deep learning as a useful tool for doing pattern classification, which is one problem that any intelligent agent needs to do. We should either keep it around for that, or replace it with something that does similar work more efficiently, which I do think is possible,” Marcus says.
Architects of Intelligence also discusses many fields that have benefited from advances in deep learning, including computer vision and natural language processing.
The challenges of deep learning
Meanwhile, the interviewed scientists also acknowledge that current technologies have some hurdles to overcome if we want to achieve human-level AI.
“The success today of neural networks and deep learning mostly involve supervised pattern recognition, which means that it’s a very narrow sliver of capabilities compared to general human intelligence,” says Fei-Fei Li, professor of computer science at Stanford and chief scientist at Google Cloud. Other scientists interviewed in Architects of Intelligence echo those remarks, including Yann LeCun, another deep learning pioneer.
Supervised learning is the process of creating AI models by training them on large numbers of labeled examples. While supervised learning helps solve many problems in AI, it also poses challenges: in many domains, labeled data is scarce or requires extensive human effort to produce.
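To make this concrete, here is a minimal sketch of supervised learning, assuming scikit-learn and its bundled digits dataset purely for illustration (neither appears in the book). The point is that the model learns only because every training example carries a human-provided label:

```python
# Minimal supervised-learning sketch: the model learns a mapping from
# images to digits only because humans labeled every training example.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 1,797 images, each hand-labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)            # training requires the labels y_train
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Remove those labels and the procedure above has nothing to fit, which is exactly the bottleneck in domains where labeling is expensive.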
Many of Ford’s interviews discuss the challenges of current AI, including its confinement to narrow domains, its overreliance on data, and its limited grasp of the meaning of language. His interviews with Gary Marcus and Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, dig deep into these challenges and into what prevents deep learning from solving problems that are easy for a human child.
“I think the reality is that deep learning and neural networks are particularly nice tools in our toolbox, but it’s a tool that still leaves us with a number of problems like reasoning, background knowledge, common sense, and many others largely unsolved,” Etzioni says.
Is hybrid AI the right path to human-level intelligence?
Several of the experts Ford interviewed in Architects of Intelligence believe that the combination of neural networks and classic, rule-based AI will help overcome the limits of deep learning.
“On balance, there’s been a shift from traditional tools toward deep learning, especially when you have a lot of data, but there are still plenty of problems in the world where you have only small datasets, and then the skill is in designing the hybrid and getting the right mix of techniques,” says Andrew Ng, adjunct professor of computer science at Stanford University, co-founder of Google Brain, and former chief scientist at Baidu.
“Humans have all kinds of common-sense reasoning, and that has to be part of the solution. It’s not well captured by deep learning. In my view, we need to bring together symbol manipulation, which has a strong history in AI, with deep learning. They have been treated separately for too long, and it’s time to bring them together,” Marcus says.
And Joshua Tenenbaum, professor of computational cognitive science at MIT, posits that we must combine the achievements from symbolic AI, probabilistic and causal models, and neural networks to solve the challenges of deep learning.
“Each of these ideas has had their rise and fall, with each one contributing something, but neural networks have really had their biggest successes in the last few years. I’ve been interested in how we bring these ideas together. How do we combine the best of these ideas to build frameworks and languages for intelligent systems and for understanding human intelligence?” Tenenbaum says.
Tenenbaum recently headed a team of researchers who developed the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI model that combines neural networks and symbolic AI to solve problems. The researchers’ results show that NSCL can learn new tasks with much less data than pure neural network–based models require. Hybrid AI models are also explainable, in contrast to the opaque black boxes of pure neural networks.
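To give a flavor of how such a hybrid can be wired together, here is a toy sketch, not the NSCL itself: an untrained neural module maps raw images to discrete symbols, and an explicit, human-readable rule then reasons over those symbols. Every name, shape, and concept list here is a hypothetical stand-in:

```python
# Toy hybrid sketch (hypothetical, untrained): a neural module produces
# symbols, and a symbolic rule reasons over them.
import torch
import torch.nn as nn

perception = nn.Sequential(          # neural step: pixels -> concept scores
    nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3)
)
CONCEPTS = ["cube", "sphere", "cylinder"]

def perceive(image: torch.Tensor) -> str:
    """Neural module: map one 28x28 image to a discrete concept symbol."""
    scores = perception(image.unsqueeze(0))
    return CONCEPTS[scores.argmax(dim=1).item()]

def count_concept(symbols: list, query: str) -> int:
    """Symbolic module: answer a counting query with an explicit rule."""
    return sum(1 for s in symbols if s == query)

# Usage: perceive each object crop, then reason symbolically over the scene.
scene = [perceive(torch.rand(28, 28)) for _ in range(4)]
print(count_concept(scene, "cube"))  # "How many cubes are in the scene?"
```

Because the reasoning step is an ordinary, inspectable rule rather than learned weights, its answers can be traced step by step, which is the explainability advantage the hybrid camp points to.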
But not everyone is a fan of hybrid AI models.
“Note that your brain is all neural networks. We have to come up with different architectures and different training frameworks that can do the kinds of things that classical AI was trying to do, like reasoning, inferring an explanation for what you’re seeing and planning,” Bengio says.
Geoffrey Hinton, another deep learning pioneer, is also critical of hybrid approaches. In his interview with Ford, he compares hybrid AI to combining electric motors and internal combustion engines. “That’s how people in conventional AI are thinking. They have to admit that deep learning is doing amazing things, and they want to use deep learning as a kind of low-level servant to provide them with what they need to make their symbolic reasoning work,” Hinton says. “It’s just an attempt to hang on to the view they already have, without really comprehending that they’re being swept away.”
How do we know we’ve achieved general AI?
Since the time of Alan Turing, the father of modern computer science, the “imitation game,” which later became known as the Turing Test, has been the principal benchmark for determining whether we’ve developed “thinking machines.” The idea behind the Turing Test is that an AI, say a chatbot, must be able to fool humans into thinking it is human.
While there’s much debate over whether the Turing Test is a true measure of progress in AI, most scientists agree that language understanding is an essential part of any truly intelligent system.
“[Deep-learning based natural-language systems] are really good at statistical learning, pattern recognition and large-scale data analysis, but they don’t go below the surface,” says Barbara Grosz, Higgins Professor of Natural Sciences at Harvard University. “They can’t reason about the purposes behind what someone says. Put another way, they ignore the intentional structure component of dialogue. Deep-learning based systems more generally lack other hallmarks of intelligence: they cannot do counterfactual reasoning or common-sense reasoning.”
Etzioni says that one of the essential stepping stones toward artificial general intelligence would be to develop AI programs that can handle multiple tasks. “An AI program that’s able to both do language and vision, it’s able to play board games and cross the street, it’s able to walk and chew gum. Yes, that is a joke, but I think it is important for AI to have the ability to do much more complex things,” he says.
Presently, every task needs a separate AI model, and efforts to create generalized AI models have had limited success.
James Manyika, chairman and director of the McKinsey Global Institute, makes a fun proposition. “Until you get a system that can enter an average and previously unknown American home and somehow figure out how to make a cup of coffee, we’ve not solved AGI,” he says.
While it might sound silly, Manyika’s proposition is a serious test of the general problem-solving capabilities of AI. Marcus discusses why simple tasks such as Manyika’s coffee challenge are so hard for current AI systems.
And of course, most scientists agree that real artificial intelligence shouldn’t need so much labeled data to learn. “In order to get really highly effective machine intelligent systems, we also need algorithms that can make more use of unsupervised and unlabeled data. As humans, we tend to organize a lot of our world knowledge in causal terms, and that’s something that is not really done much by current neural networks,” says Nick Bostrom, professor at the University of Oxford and author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies.
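For contrast with the supervised sketch above, here is the simplest form of learning from unlabeled data, again assuming scikit-learn and the digits dataset purely for illustration; clustering covers only a narrow slice of the unsupervised and causal learning Bostrom has in mind:

```python
# Minimal unsupervised-learning sketch: k-means discovers cluster
# structure in the digit images without ever being shown a label.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)  # the labels are deliberately discarded
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])           # cluster ids learned with no supervision
```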
In the end, Architects of Intelligence reminds us that we still have a lot to learn about intelligence, many more questions to answer, and still other questions we have yet to discover.
To quote Etzioni: “People see these amazing achievements, like a program that beats people in Go and they say, ‘Wow! Intelligence must be around the corner.’ But when you get to these more nuanced things like natural language, or reasoning over knowledge, it turns out that we don’t even know, in some sense, the right questions to ask.”