AI and Machine Learning

Thomas Dietterich is one of the pioneers of the field of Machine Learning. His research is motivated by challenging real-world problems, with a special focus on ecological science, ecosystem management, and sustainable development. He is Past President of the Association for the Advancement of Artificial Intelligence, and he previously served as the founding president of the International Machine Learning Society. In this exclusive interview, he discusses his ideas and work.


Richard Bright: Can we begin by you saying something about your background?

Thomas Dietterich: I have worked on machine learning since the fall of 1977, when I started as a graduate student at the University of Illinois under the direction of Prof. Ryszard Michalski. Michalski, along with Tom Mitchell and Jaime Carbonell, launched the field of machine learning by organizing the first Machine Learning workshop in 1980 at Carnegie Mellon University. Michalski, Mitchell, and Carbonell published three books of collected papers in the subsequent decade. I’ve worked on a wide variety of machine learning problems and applications during my career, including drug design, text-to-speech generation, desktop AI assistants, cybersecurity, and many topics in ecology and ecosystem management such as modelling grasshopper infestations, invasive species, and wildfire. In addition to my research, I have worked hard to nurture the field of machine learning. I served six years as Executive Editor of the field’s first journal, Machine Learning, and I later co-founded the Journal of Machine Learning Research. I was the founding president of the International Machine Learning Society (which operates the International Conference on Machine Learning), and I recently completed a term as President of the Association for the Advancement of Artificial Intelligence.

RB: The February issue aims to explore what it means (and will mean) to be human in the age of Artificial Intelligence. It would be good to start with how we define intelligence, both in terms of human intelligence and our growing understanding of animal intelligence. It is obviously a complex, multi-faceted entity (and perhaps a difficult question to answer), but how would you define intelligence?

TD: In AI, we define intelligence along a scale from narrow/stupid to broad/knowledgeable. We describe a “system” (which could be a device, a computer, a person, an organization, etc.) as intelligent if it is useful to view the system as taking actions to achieve its goals (and succeeding most of the time). At the trivial end, the behaviour of a thermostat can be usefully predicted by ascribing to it the goal of keeping the temperature of the room near its set point. It achieves this goal by turning the furnace on and off. At the other end of the spectrum, it is useful to attribute to Google the goal of correctly answering the questions we pose to it (although it fails quite often). To be more precise, for most queries Google’s goal is to find a web page where we can read the answer. Humans have very broad intelligence, because they can successfully achieve an infinite variety of goals based on their knowledge of human beings, society, and the world. In comparison, while Google is quite broad, it is only marginally competent.

RB: How does, and how can, machine intelligence enhance human intelligence?

TD: Modern search engines enhance my intelligence. I can achieve many more goals if I have access to a search engine. I can access written information (e.g., Wikipedia) and “how-to” videos (e.g., YouTube), read newspaper stories in Chinese, and so on. I can also use machine intelligence as a personal assistant (e.g., to plan a driving route via Google Maps or to recommend a restaurant or hotel). I can also use computers as an extension of my memory, although this is currently very mundane. For example, I can tell Siri to remind me about an appointment, and I can look up the names of people in my Contacts list. These capabilities compensate for shortcomings in human intelligence, especially as I get older!

I would love to have augmented reality glasses that would remind me of people’s names by recognizing their faces and tell me if I owe them a reply to an email message they have sent me.

RB: Following the work being done with AI evolutionary algorithms, do you think our views on what intelligence is might change?

TD: Yes. Right now, we talk a lot about “Human-Level AI” as if it were the only kind of intelligence, and as if it were the pinnacle of intelligence. But AI systems already have strengths and weaknesses that are different from human strengths and weaknesses. There are theorem-proving AI systems that can prove mathematical theorems that people cannot. This is a form of superhuman intelligence (in a very narrow area). It raises a new challenge: how can we understand and trust these forms of intelligence? Chess players have learned a lot by studying the behaviour of chess programs, and Go players are now studying AlphaGo Zero and learning from it.

RB: How can we make computer systems that adapt and learn from their experience?

TD: We have two main ways of creating machine learning systems. The first is appropriate when we can view the computer as analysing some input (e.g., a sentence in Chinese) and producing some output (e.g., the translation of that sentence into English). We write a program (in the form of a deep neural network) that is controlled by numerical parameters. We feed the program input-output pairs (Chinese sentences and their translations) and define a “score” that evaluates how well the program outputs the right answers. A learning algorithm then adjusts the numerical parameters of the deep neural network to improve the score.

The second method is to define a mathematical model (usually a probabilistic model) that relates inputs and outputs. For example, in medical diagnosis, the physician makes a series of observations and lab tests and then tries to infer what disease is causing the patient’s symptoms and what therapy might resolve the problem. To teach a system to do disease diagnosis, we construct a mathematical model that relates all known diseases to all known symptoms (and lab tests). From historical data, the model can learn the probability of each disease (some are rare, some are common) and the probability of observing each symptom given the disease. Such a model can answer a wide variety of questions, including things like: “What is the probability of observing a fever greater than 105 degrees for a patient with measles?” “If I observe a fever of 106 degrees and red spots on the skin, what is the probability that the patient has measles?” It can even answer questions like “If I see red spots on the skin, what is the probability that the patient has a fever of at least 105 degrees?” All of these probabilities can be estimated from previous “experience” in the form of health records.
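The following minimal sketch in Python (a toy example, not taken from any system mentioned in the interview) shows how such a probabilistic model can answer a diagnostic query with Bayes’ rule. It assumes a simple naive Bayes structure, and every disease, symptom, and probability value is invented for illustration.

```python
# Toy probabilistic disease/symptom model (invented numbers, naive Bayes assumption).

# Prior probability of each disease, as might be estimated from health records.
prior = {"measles": 0.01, "flu": 0.10, "healthy": 0.89}

# P(symptom | disease); symptoms are assumed conditionally independent given the disease.
likelihood = {
    "measles": {"high_fever": 0.80, "red_spots": 0.90},
    "flu":     {"high_fever": 0.60, "red_spots": 0.05},
    "healthy": {"high_fever": 0.01, "red_spots": 0.01},
}

def posterior(observed_symptoms):
    """Return P(disease | observed symptoms) using Bayes' rule."""
    unnormalized = {}
    for disease, p in prior.items():
        for symptom in observed_symptoms:
            p *= likelihood[disease][symptom]
        unnormalized[disease] = p
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}

# "If I observe a high fever and red spots, what is the probability of measles?"
print(posterior(["high_fever", "red_spots"]))
```

Running this prints the posterior probability of each disease given the two observed symptoms; the same table of learned probabilities could also be read in the other direction, for example to look up the probability of a symptom given a disease.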

RB: Technical progress in machine learning is rapid, with ever more complex capabilities being developed. Deep learning has been a focus of much of the excitement in this field. How do you see machine learning progressing, and what do you consider to be its important applications?

TD: Most current applications of machine learning involve things like analysing purchase histories to recommend advertisements, analysing payment records to detect potential fraud, and so on. None of these use deep learning. Instead, they operate on traditional databases and use existing methods such as decision trees and support vector machines.

However, these existing methods do not work well on images, speech, music, language, and other kinds of perceptual data. This is where deep learning works well. We now have AI systems that can recognize objects and faces in images and video, understand and translate speech, recognize and synthesize music, and so on.

Another exciting direction in machine learning is the development of algorithms for sequential decision making. Detecting a face is a “one-shot” task: the computer inputs the image and draws a box around the face in the image. But driving a car, controlling a robot, or playing chess involves making a long sequence of decisions. We are seeing many exciting new developments in this area.
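To illustrate the contrast with a one-shot prediction, here is a toy agent-environment loop in Python. It is purely hypothetical: the state, actions, reward, and policy are invented placeholders rather than any real driving, robotics, or chess system, but it shows the basic shape of sequential decision making, where each decision changes the situation in which the next decision is made.

```python
# Toy sequential decision loop (illustrative placeholders only).
import random

def step(state, action):
    """Hypothetical environment: apply the action and return (next_state, reward)."""
    next_state = state + action
    reward = 1.0 if next_state == 0 else -0.1 * abs(next_state)
    return next_state, reward

def policy(state):
    """Placeholder policy that steers the state toward zero.
    A learned policy would instead be trained to maximize total reward."""
    if state > 0:
        return -1
    if state < 0:
        return 1
    return random.choice([-1, 1])

state, total_reward = 5, 0.0
for _ in range(20):                 # a long sequence of decisions, not a single prediction
    action = policy(state)
    state, reward = step(state, action)
    total_reward += reward
print("total reward:", total_reward)
```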

RB: A super-intelligent AI could register information and solve problems at a level that would far exceed even the brightest human minds, but being made of a different substrate (e.g., silicon), would it have conscious experience?

TD: This is a tricky question because we don’t really know the causal basis or the functional role of conscious experience. It is clear that AI systems benefit from having the ability to monitor their own behaviour to check, for example, whether actions have failed or had undesirable side-effects. Is this kind of self-awareness the same as conscious experience? I don’t know.

RB: Could we build a machine with consciousness?


Thomas Dietterich homepage

 
