Richard Bright: Can we begin by you saying something about your background?
Yoshua Bengio: I was trained as a computer scientist. I got my PhD in 1991. I did postdocs in the USA at MIT and Bell Labs, then I took up a position here at the University of Montreal in 1993. I’ve been building up a group here focussing on neural networks – now called Deep Learning – which has become the largest deep learning academic group in the world. Many things have happened over the last decade as this technology has unfolded.
RB: Your work includes research around deep learning architectures. What is the relationship between Machine Learning and Deep Learning?
YB: Actually, to answer this I am going to draw three circles. There is AI, inside of which is Machine Learning, inside of which is Deep Learning. So, AI is the quest for algorithms to build intelligent machines, and Machine Learning is a particular approach to doing that which relies on machines learning from data and examples, so that knowledge is acquired mostly by observing and interacting with the world. Deep Learning is a particular approach to Machine Learning which follows up on decades of work on neural nets, inspired by the things we know about brains. It gets its name from its focus on representation, and on multiple levels of representation, as a core ingredient in those learning systems.
RB: And common to both is experience?
YB: Common to both Deep Learning and Machine Learning is experience – observations or data that the machine is learning from. So, you have the word ‘learning’ because the machine is born with very little knowledge and then acquires more knowledge by observing, either passively or actively, as when you think about a robot which does things and observes the consequences.
RB: How can a machine ‘learn’ without human input?
YB: Well, just like a mouse learns without human input. Many animals learn in that way, with a culture which is very weak. We have a very rich culture as humans and so, of course, our learning is driven a lot by that. That said, a lot of the learning that we do in machine learning is also driven by human input. In fact, the most successful machine learning or deep learning to date is what is called ‘supervised learning’, where it is very much driven by human guidance. Imagine that you had to take your baby in hand and that for every decision or muscle movement the baby has to make, a parent would tell the baby what the right action would be. That’s supervised learning, and this is the way we are solving problems these days. Of course, it’s far from the human ability in terms of autonomy: humans, mammals and birds can learn in a much more autonomous way, they are very powerful learners.
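The supervised learning Bengio describes can be sketched in a few lines: every training example pairs an input with the "right answer" supplied by a human, and the model nudges its parameter to reduce its error on those answers. This is an illustrative toy (a one-parameter linear model fit by gradient descent), not anything from the interview itself.

```python
def train(examples, steps=200, lr=0.05):
    """Fit a one-parameter linear model w*x to labelled examples."""
    w = 0.0  # the model is "born with very little knowledge"
    for _ in range(steps):
        for x, y in examples:      # y is the human-provided label
            pred = w * x           # model's guess
            error = pred - y       # supervision: guess vs. right answer
            w -= lr * error * x    # gradient step on squared error
    return w

# Labelled data generated by the rule y = 2x: the "parent" telling the
# learner the right output for every input.
data = [(x, 2 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = train(data)
print(round(w, 2))
```

The learner recovers the underlying rule (w close to 2) purely from the labelled pairs; with no labels there would be no error signal to learn from, which is why supervision is such strong guidance.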
RB: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity (and maybe difficult to answer?), but how would you define intelligence?
YB: There’s a lot of confusion about the meaning of intelligence and thus the meaning of artificial intelligence. Intelligence, in technical circles, has to do with the ability to make good decisions, even in an environment which changes or of which we know very little at the outset. For this, we have to learn and adapt, but it’s about making decisions, and in order to make good decisions an intelligent agent needs knowledge. It can get knowledge in many different ways but, as I said earlier, machine learning is about acquiring knowledge. So, with that definition, what you see is that even an ant is intelligent, even a bee is intelligent. It’s just that the set of things it can do well is different and smaller than the set of things a human can do well. And so, we already have intelligent machines, they’re just not as intelligent as us. They can outperform us on a few things and are totally ignorant of most of the things that humans are able to do.
When we talk about human-level intelligence, in terms of AI and machines, we mean a level of intelligence that is comparable with that of humans both in its strength and in its scope. In other words, we can understand many different aspects of the world and then use that knowledge to do many different tasks across all those aspects of the world. Of course, we are far from that with AI, but we have actually made a lot of progress since the beginning of AI research.
RB: How does, or how can, machine intelligence enhance human intelligence?
YB: It’s already doing it. When you are using Google or other search engines this is extending your own intelligence. Everybody now who has access to the internet uses these tools multiple times a day, just to find information. Even just using a laptop to find information about all the exchanges you have had with other people by e-mail is extending your intelligence. There are many ways in which technology has been extending humans, what’s different with AI and computers is that it’s mostly extending our cognitive abilities, whereas previous industrial revolutions brought broad extensions of our speed, our muscle power, or our ability to fly when we couldn’t before. So, now it’s our ability to think, to solve problems at an intellectual level that computers and AI are extending.
RB: Do you think that, following the work being done with Evolutionary AI, our views on what is intelligence might change?
YB: You mean using evolutionary algorithms?
YB: So, that’s one approach, but it hasn’t been particularly successful. It falls under a more general pattern, which includes both evolution and learning, where there is gradual improvement. You have a candidate solution (a learning agent or a population) and it gradually gets better: it gets to know more things or to do things better in some sense. That is handled with the mathematics of optimisation, whether you’re dealing with evolutionary learning or with machine learning. That is very central in AI and in machine learning. The only problem with evolutionary methods is that they are very slow for the kind of computing resources we currently have. Of course, if you can have a billion individuals on the planet, each trying different configurations of genes, then that is very efficient, but if you consider a single computer, or even a single brain, then evolutionary methods don’t seem to be sufficiently efficient.
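The "gradual improvement" pattern Bengio describes can be sketched in its evolutionary form as a (1+1) evolution strategy: mutate a candidate solution at random and keep the mutant only if it scores better. This is an illustrative toy on a made-up fitness function, not anything from the interview.

```python
import random

def evolve(fitness, steps=2000, sigma=0.1, seed=0):
    """(1+1) evolution strategy: mutate, then select the fitter candidate."""
    rng = random.Random(seed)
    x = 0.0                               # initial candidate solution
    for _ in range(steps):
        child = x + rng.gauss(0, sigma)   # random mutation
        if fitness(child) >= fitness(x):  # selection: keep improvements
            x = child
    return x

# Maximise a simple fitness function peaked at x = 3.
best = evolve(lambda x: -(x - 3) ** 2)
print(round(best, 1))
```

The candidate climbs toward the optimum one accepted mutation at a time, with no gradient information. That blindness is also why, as Bengio notes, such methods are slow on a single machine: a large population evaluating many mutations in parallel is what makes evolution efficient.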
RB: So, do you think that with a single computer it would take a long time, but that if many computers were connected it would speed up the process?
YB: Yes, if you had a million computers connected then I think evolutionary methods would be an interesting tool, in the same way that we learn as individuals but also learn as a group. Actually, we learn in two ways as a group. We learn through the evolution of our genes – that’s very slow, but that’s what our species is doing, so the group of humans with all their genes is evolving, and there is an optimisation leading to better genes. A much faster kind of evolution is happening through cultural evolution, where we share what we learn through culture rather than through our genes. That’s also the process of science, by the way. All of these things are interesting, but unless you have access to a very large number of computing machines it might be a different kind of optimisation, more of the kind you have in your individual brain.
RB: How would you characterize your view on consciousness?
YB: One problem with this word is that it means different things to different people. I like the definition in Wikipedia which talks about different aspects of consciousness and one of the aspects I find most interesting is related to attention (“being aware of an external object or something within oneself”). So, there are things that we’re conscious of, that come to our mind. This is something I am studying in my research, how do we build neural networks that have the equivalent of this attentive consciousness that brings pieces of knowledge and pieces of our recent experience to a special place, which is our consciousness, so we can use those pieces in a privileged way in order to decide the next thing we’re going to be doing. So that’s one aspect of consciousness, another aspect is self-knowledge, or self-consciousness. You have different degrees of self-knowledge, even a very simple robot which knows its position has a self-consciousness. It is not necessarily something magical or new, we can build that in machines. There are other things that people associate with consciousness, such as qualia for example, which are about subjective impressions we get from our perceptions.
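The attentive mechanism Bengio alludes to – scoring pieces of stored knowledge for relevance and bringing a weighted blend of the most relevant pieces into a privileged workspace – can be sketched as softmax attention over key/value pairs. This is an illustrative toy, assuming nothing about his group's actual models.

```python
import math

def attend(query, memories):
    """Softmax attention: blend memory values by relevance to the query."""
    # Relevance score: dot product between the query and each memory key.
    scores = [sum(q * k for q, k in zip(query, key)) for key, _ in memories]
    # Softmax turns scores into positive weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted blend of the memories' values enters the "workspace".
    dim = len(memories[0][1])
    return [sum(w * val[i] for w, (_, val) in zip(weights, memories))
            for i in range(dim)]

memories = [([1.0, 0.0], [10.0, 0.0]),   # (key, value) pairs
            ([0.0, 1.0], [0.0, 10.0])]
out = attend([5.0, 0.0], memories)       # query strongly matches first key
print([round(v, 2) for v in out])
```

Because the query aligns with the first key, the output is dominated by the first memory's value: attention selects a few relevant pieces of experience and largely ignores the rest, which is the "privileged" access Bengio describes.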
RB: A super-intelligent AI could register information and solve problems and that would far exceed even the brightest human minds, but being made of a different substrate (eg. silicon), would it have conscious experience? Or would it only be able to attain certain aspects of consciousness, such as, for example, learning and memory?
YB: So, it’s going to be our design choice. If you think about a search engine as an example, you can imagine having a very intelligent search engine that knows all of the knowledge of the world and can give you answers to questions but doesn’t have any self-consciousness. It’s really a machine that mechanically gets information and can answer questions like an oracle. It’s totally conceivable to me that we could have AI without consciousness – that we will build machines that don’t have any more consciousness than a toaster and yet are very smart.
That’s the example of the search engine. The other example would be if we build a robot that has to function in the world, then it’s going to have to have some kind of self-consciousness. It needs such a self-consciousness in order to act in the world in a way that takes into account its state and role in its environment. It needs to know that it exists and can act. Presumably we will have machines that are intelligent and don’t have consciousness, as well as machines that are intelligent and do have consciousness, depending on their intended use.
RB: Given what we know about our own minds, can we expect to intentionally create artificial consciousness? Or will AI become conscious by itself?