About this Episode
Episode 90 of Voices in AI features Byron speaking with Norman Sadeh from Carnegie Mellon University about the nature of intelligence and how AI affects our privacy.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese, and today my guest is Norman Sadeh. He is a professor at the Carnegie Mellon School of Computer Science. He's affiliated with Cylab, which is well known for its seminal work in AI planning and scheduling, and he is an authority on computer privacy. Welcome to the show.
Carnegie Mellon has this amazing reputation in the AI world. It's arguably second to none. There are a few university campuses that seem to really… there's Toronto and MIT, and in Carnegie Mellon's case, how did AI become such a central focus?
Norman Sadeh: Well, this is one of the birthplaces of AI. The people who founded our computer science department included Herbert Simon and Allen Newell, who are viewed as two of the four founders of AI. They contributed to the early research in that space, they helped frame many of the problems that people are still working on today, and over the years they also helped recruit many more faculty who have contributed to making Carnegie Mellon the place that many people refer to as the number one place in AI here in the US.
Not to say that there are not many other good places out there, but CMU is clearly a place where a lot of the leading research has been conducted over the years, whether you are looking at autonomous vehicles, for instance. I remember when I came here to do my PhD back in 1997, there was research going on in autonomous vehicles. Obviously the vehicles were a lot clumsier than they are today, not moving quite as fast, but there's a very, very long history of AI research here at Carnegie Mellon. The same is true for language technology, the same is true for robotics, you name it. There are lots and lots of people here who are doing truly amazing things.
When I stop and think about it, 99.9% of the money spent in AI is for so-called Narrow AI: trying to solve a specific problem, often using machine learning. But the thing that gets written about and is shown in science fiction is "general intelligence," which is a much more problematic topic. And when I stop to think about who's actually working on general intelligence, I don't actually get too many names. There's OpenAI, Google, but I often hear you guys mentioned: Carnegie Mellon. Would you say there are people there thinking in a serious way about how to solve for general intelligence?
Absolutely. And so going back to our founders again, Allen Newell was one of the first people to develop what you referred to as a general theory of cognition, and obviously that theory has evolved quite a bit, and it didn't include anything like neural networks. But there's been a long history of work on general AI here at CMU.
And you're completely right that, as an applied [science] university also, we've learned that just working on these long-term goals is not necessarily the easiest way to secure funding, and that it really pays to also have shorter-term objectives along the way, accomplishments that can help motivate more funding coming your way. And so, it is absolutely correct that many of the AI efforts that you're going to find, and that's also true at Carnegie Mellon, will be focused on more narrow types of problems, problems where we're likely to be able to make a difference in the short to mid-term, rather than just focusing on these longer and loftier goals of building general AI. But we do have a lot of researchers also working on this broader vision of general AI.
And if you were a betting man and somebody said, "Do you believe that general intelligence is kind of an evolutionary [thing]… that basically the techniques we have for Narrow AI are going to get better and better and better, with bigger datasets, and we're going to get smarter, and that it's gradually going to become a general intelligence?"
Or are you of the opinion that general intelligence is something completely different from what we're doing now, and that what we're doing now is just simulated intelligence, where we just kind of fake it (because it's so narrow) on specific tasks? Do you think general AI is a completely different thing, or will we gradually get to it with the techniques we have?
So AI has become such a broad field that it's very hard to answer this question in one sentence. You have techniques that have come out under the umbrella of AI that are highly specialized and that are not terribly likely, I believe, to contribute to a general theory of AI. And then you have, I think, broader techniques that are more likely to contribute to developing this higher level of functionality that you might refer to as "general AI."
And so, I would certainly think that a lot of the work that has been done in deep learning and neural networks is likely, over time, and obviously with a number of additional developments and inventions that people still have to come up with, to have a much better chance of getting us there than perhaps more narrow, yet equally useful technologies that have been developed in fields like scheduling, and perhaps planning, and perhaps other areas of that type, where there have been amazing contributions, but it's not clear how those contributions will necessarily lead to a general AI over the years. So, a mixed answer, but hopefully…
You just made passing reference to the idea that "AI means so many things and it's such a broad term that it may not even be terribly useful," and that comes from the fact that intelligence is something that doesn't have a consensus definition. Nobody agrees on what intelligence is. Is that meaningful? Why is it that we don't even agree on what something so intrinsic to humans, intelligence, actually is? What does that mean to you?
Well, it's fascinating, isn't it? There used to be this joke, and maybe it's still around today, that AI was whatever it is that you could not solve, and as soon as you would solve it, it was no longer viewed as being AI. So in the '60s, for instance, there was this program that people still often talk about called Eliza…
Right, exactly, a simple Rogerian therapist, basically a collection of rules that was very good at sounding like a human being. Effectively, what it was doing was paraphrasing what you would tell it and saying, "Well, why do you think that?" And it was realistic enough to convince people that they were talking to a human being, while in fact they were just talking to a computer program. And so, if you had asked people who had been fooled by the system whether they were really dealing with AI, they would have told you, "Yes, this has to be AI."
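The rule-based paraphrasing Sadeh describes can be sketched in a few lines of Python. This is a hypothetical, minimal illustration; the patterns and responses are invented for the example and are not ELIZA's actual script.

```python
import re

# Each rule pairs a pattern with a reflective response template.
# These two rules are illustrative, not taken from the real ELIZA.
RULES = [
    (re.compile(r"i think (.*)", re.IGNORECASE), "Why do you think {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(statement: str) -> str:
    """Echo part of the user's statement back as a question."""
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Tell me more."  # fallback when no rule matches

print(respond("I think my job is stressful"))
# -> Why do you think my job is stressful?
```

There is no understanding anywhere in this loop: the program never models what a job or stress is, which is exactly why people being fooled by it says more about the conversation format than about the machine.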
Obviously, we no longer believe that today, and we place the bar a lot higher when it comes to AI. But there is still that tendency to think that somehow intelligence cannot be reproduced, and that surely, if you can get some kind of computer to emulate that sort of functionality and to produce that sort of functionality, then it cannot be intelligence, it's got to be some kind of a trick. But obviously, if you look over the years, we've gotten computers to do all sorts of tasks that we thought were perhaps going to be beyond their reach.
And so, I think we're making progress towards emulating many of the activities that would traditionally be viewed as being part of human intelligence. And yet, as you pointed out at the beginning, there is a lot more to be done. Common sense reasoning and general intelligence are the more elusive tasks, just because of the diversity of abilities that you need to exhibit in order to truly be able to reproduce that functionality in a scalable and general manner, and that's obviously the big challenge for research in AI over the years to come.
Are we going to get there or not? I think that eventually we will. How long it's going to take us to get there, I wouldn't dare to predict, but I think that at some point we will get there. At some point, and we've already done that in some fields, we will likely build functionality that exceeds the capability of human beings. We've done that with facial recognition, we've done that with chess, we've done that actually in a number of different sectors. We're not quite there yet, but we might very well at some point get there in the area of autonomous driving as well.
So you mentioned common sense, and it's true that with every Turing-test-capable chatbot I come across, I ask the same question, which is, "What's bigger, a nickel or the Sun?" And I've never had one that could answer it, because "nickel" is ambiguous… That seems to a human to be a very simple question, and yet it turns out it isn't. Why is that?
And I think at the Allen Institute they're working on common sense, trying to get AI to pass 5th grade science tests. But why is that? What is it that humans can do, that we haven't figured out how to get machines to do, that enables us to have common sense and them not to?
Right. So, amazingly enough, when people started working in AI, they thought that the toughest tasks for computers to solve would be tasks such as doing math or playing a game of chess, and that the easiest ones would be the sorts of things that kids, five-year-olds or seven-year-olds, are able to do. It turned out to be the opposite: the kinds of tasks that a five-year-old or a seven-year-old can do are still the tasks that are eluding computers today.
And a big part of that is common sense reasoning, and that's the state of the art today. We're very good at building computers that are going to be "one-track mind" types of computers, if you want. They're going to be very good at solving these very specialized tasks, and as long as you keep on giving them problems of the same type, they're going to continue to do extremely well, and actually better than human beings.
But as soon as you fall outside of that sort of well-defined space, and you open up the set of contexts and the set of problems that you're going to be presenting to computers, then you find that it's a lot more challenging to build a program that's always capable of landing back on its feet. That's really what we're dealing with today.
Well, you know, people do transfer learning very well, we take the stuff that we…
With occasional mistakes too, we are not perfect.
No, but if I told you to picture two fish: one is swimming in the ocean, and one is the same fish in formaldehyde in a laboratory. It's safe to say you don't sit around thinking about that all day. And then I say, "Are they at the same temperature?" You would probably say no. "Do they smell the same?" No. "Are they the same weight?" Yeah. And you can answer all these questions because you have this model, I guess, of how the world works.
And why are we not able yet to instantiate that into a machine, do you think? Is it that we don't know how, or we don't have the computers, or we don't have the data, or we don't know how to build an unsupervised learner, or what?
So there are multiple answers to this question. There are people who are of the view that it's just an engineering problem, and that if, in fact, you were to use the tools that we have available today, and you just used them to populate these massive knowledge bases with all the facts that are out there, you might be able to produce some of the intelligence that we are missing today in computers. There's been an effort like that called Cyc.
I don't know if you are familiar with Doug Lenat, but he's been doing this for I don't know how many years at this point, I'm thinking something like 30-plus years, and he's built a massive knowledge base, actually with some impressive results. And at the same time, I would argue that it's probably not enough. It's more than just having all the facts; it's also the ability to adapt and the ability to discover things that were not necessarily pre-programmed.
And that's where I think these more flexible ways of reasoning, which are also more approximate in nature and closer to the types of technologies that we've seen developed under the umbrella of neural networks and deep learning, that's where I think there's a lot of promise also. And so, ultimately, I think we're going to need to marry these two different approaches to eventually get to a point where we can start mimicking some of that common sense reasoning that we human beings tend to be pretty good at.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Source: Voices in AI – Episode 90: A Conversation with Norman Sadeh