A public policy blog from AEI
Daniel Kahneman is a Nobel laureate famous for his research into how cognitive biases or quirks lead us to make irrational decisions. Given that background, I was fascinated to run across a video (courtesy of economist Joshua Gans, who writes for the Digitopoly blog) of Kahneman speaking recently at a University of Toronto conference on the economics of artificial intelligence.
In this transcribed bit from that speech, Kahneman talks about what he thinks ever-advancing AI can ultimately do vs. human intelligence. Spoiler: He thinks AI will eventually be a fabulous decision maker, and in many ways already is. (By the way, my post yesterday on some tech forecasts by robotics expert Rodney Brooks makes for a great companion post to this one.) Kahneman:
One of the recurrent issues, both in talks and in conversations, was whether AI can eventually do whatever people can do. Will there be anything that is reserved for human beings? And frankly, I don’t see any reason to set limits on what AI can do.
We have in our heads a wonderful computer. It’s made of meat, but it’s a computer. It’s extremely noisy, it does parallel processing, it is extraordinarily efficient, but there is no magic there. And so it’s very difficult to imagine that, with sufficient data, there will remain things that only humans can do. Now the reason that we see so many limitations, I think, is that this field is really at the very beginning. The idea of deep learning is old, but the development took off eight years ago. So that’s the landmark date that people are mentioning. And that’s nothing.
You have to imagine what it might be like in 50 years, because the one thing that I find extraordinarily surprising and interesting in what is happening in AI these days is that everything is happening faster than was expected. So people were saying that it will take 10 years for AI to beat Go, and the interesting thing is it took eight months. So this excess of speed at which the thing is developing and accelerating I think is very remarkable. So setting limits is certainly premature.
One point that was made yesterday was about the uniqueness of humans when it comes to evaluations. It was called judgment here; in my jargon, it’s evaluation — evaluation of outcomes and basically the utility side of the decision function. And I really don’t see why that should be reserved to humans. On the contrary, I’d like to make the following argument. The main characteristic of people is that they’re very noisy. You show them the same stimulus twice, they don’t give you the same response twice. You show them the same choice twice, they don’t make the same choice twice. That’s why we have stochastic choice theory: because there is so much variability in people’s choices given the same stimuli.
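The stochastic-choice point can be sketched in a few lines. If the utilities a person assigns are perturbed by fresh random noise on every presentation, then identical stimuli produce different choices from trial to trial. The toy model below is a minimal illustration of that idea; the utility values, the Gaussian noise, and the noise level are all made-up assumptions for the example, not anything from the speech itself.

```python
import random

random.seed(0)

def noisy_choice(value_a, value_b, noise_sd=1.0):
    """A 'noisy' decision-maker: the utility of each option is perturbed
    by fresh random noise on every presentation, so the same pair of
    stimuli can yield different choices on different trials."""
    ua = value_a + random.gauss(0, noise_sd)
    ub = value_b + random.gauss(0, noise_sd)
    return "A" if ua > ub else "B"

# The same stimulus pair, presented 10 times: A is better on average
# (1.0 vs 0.8), but the response is not always the same.
responses = [noisy_choice(1.0, 0.8) for _ in range(10)]
print(responses)
```

Stochastic choice models describe exactly this: not which option a person picks, but the probability of picking each option given the same stimuli.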
Now what can be done with AI — it can be done even without AI — is a program that observes an individual’s choices. It will be better than the individual at a wide variety of things. In particular, it will make better choices for the individual because it will be noise-free. And we know this from the literature that Colin cited; there’s this interesting tidbit: if you take clinicians and have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better at predicting the outcome than the clinician does. That is fundamental; it tells you that one of the major limitations on human performance is not bias, it is just noise.
I’m maybe partly responsible for this, but when people now talk about error, they tend to think of bias as an explanation; that’s the first thing that comes to mind. People say, “this is a bias, it is an error,” when in fact most of the errors that people make are better viewed as random noise, and there’s an awful lot of it. And admitting the existence of noise has implications for practice. One implication is obvious: you should replace humans with algorithms whenever possible, and this is really happening. Even when the algorithms don’t do very well, humans do so poorly and are so noisy that just by removing the noise you can do better than people. The other is that when you can’t replace humans, you have humans simulate the algorithm: by enforcing regularity, process, and discipline on judgment and choice, you reduce the noise and improve performance, because noise is so poisonous.
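The claim that even a mediocre algorithm can beat a noisy human is just bias-variance arithmetic: squared error decomposes into squared bias plus variance, so a consistent rule with some bias can still have lower total error than an unbiased but noisy judge. Here is a minimal sketch under made-up assumptions (an unbiased human with large occasion-to-occasion noise versus a deliberately miscalibrated but perfectly consistent rule):

```python
import random
import statistics

random.seed(2)

n = 5000
signal = [random.gauss(0, 1) for _ in range(n)]
truth = signal  # the quantity to predict, in this toy setup

# Noisy human: uses the right weight on the signal (no bias),
# but adds a lot of occasion-to-occasion noise.
human = [s + random.gauss(0, 1.0) for s in signal]

# Crude but noise-free rule: a miscalibrated weight (0.6 instead of
# 1.0), applied with perfect consistency.
rule = [0.6 * s for s in signal]

def mse(pred, target):
    return statistics.fmean((p - t) ** 2 for p, t in zip(pred, target))

print(f"human MSE: {mse(human, truth):.2f}")  # expected near 1.0 (noise variance)
print(f"rule  MSE: {mse(rule, truth):.2f}")   # expected near 0.16 (0.4^2 bias term)
```

With these illustrative numbers the biased-but-consistent rule's error is a fraction of the human's: removing the noise more than pays for the miscalibration.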
Now [inaudible] said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face, an expressive face, a face that changes expressions, especially if it’s sort of baby shaped: there are cues that will make people feel very emotional. Robots will have these cues. Furthermore, it is already the case that AI reads faces better than people do, and it undoubtedly will be able to predict emotions, and developments in emotions, far better than people can. And I really can imagine that one of the major uses of robots will be taking care of the old, because I can imagine that many old people will prefer to be taken care of by robots, by friendly robots that have a name, that have a personality, that are always pleasant. They will prefer that to being taken care of by their children.
Now I want to end on a story. A well-known novelist — I’m not sure he would appreciate my giving his name — wrote me some time ago that he’s planning a novel. The novel is about a love triangle between two humans and a robot, and what he wanted to know is how the robot would be different from the individuals. I proposed three main differences. One is obvious: the robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The second is that the robot would have much higher emotional intelligence. And the third is that the robot would be wiser. Wisdom is breadth. Wisdom is not having a narrow view; that’s the essence of wisdom. It’s broad framing, and a robot will be endowed with broad framing. And I really do not see why, when it has learned enough, it will not be wiser than we are, because we don’t have broad framing. We’re narrow thinkers, we’re noisy thinkers; it’s very easy to improve upon us, and I don’t think there is very much that we can do that computers will not eventually be programmed to do.
© 2018 American Enterprise Institute