Nobel laureate Daniel Kahneman on AI: ‘It’s very difficult to imagine that with sufficient data there will remain things that only humans can do’
AEIdeas
Daniel Kahneman is a Nobel laureate famous for his research into how cognitive biases or quirks lead us to make irrational decisions. Given that background, I was fascinated to run across a video (courtesy of economist Joshua Gans, who writes for the Digitopoly blog) of Kahneman speaking recently at a University of Toronto conference on the economics of artificial intelligence.
In this transcribed bit from that speech, Kahneman talks about what he thinks ever-advancing AI can ultimately do vs. human intelligence. Spoiler: He thinks AI will eventually be a fabulous decision maker, and in many ways already is. (By the way, my post yesterday on some tech forecasts by robotics expert Rodney Brooks makes for a great companion post to this one.) Kahneman:
One of the recurrent issues, both in talks and in conversations, was whether AI can eventually do whatever people can do. Will there be anything that is reserved for human beings? And frankly, I don’t see any reason to set limits on what AI can do.
We have in our heads a wonderful computer. It’s made of meat, but it’s a computer. It’s extremely noisy — it does parallel processing, it is extraordinarily efficient — but there is no magic there. And so it’s very difficult to imagine that with sufficient data there will remain things that only humans can do. Now the reason that we see so many limitations, I think, is that this field is really at the very beginning. I mean we are talking about development that took off — I mean the idea of deep learning is old, but the development took off — eight years ago. So that’s the landmark date that people are mentioning. And that’s nothing.
You have to imagine what it might be like in 50 years, because the one thing that I find extraordinarily surprising and interesting in what is happening in AI these days is that everything is happening faster than was expected. So people were saying that it will take 10 years for AI to beat Go, and the interesting thing is it took eight months. So this excess of speed at which the thing is developing and accelerating I think is very remarkable. So setting limits is certainly premature.
One point that was made yesterday was about the uniqueness of humans when it comes to evaluations. It was called judgment here; in my jargon, it’s evaluation — evaluation of outcomes and basically the utility side of the decision function. And I really don’t see why that should be reserved to humans. On the contrary, I’d like to make the following argument. The main characteristic of people is that they’re very noisy. You show them the same stimulus twice, they don’t give you the same response twice. You show the same choice twice, I mean that’s why we had stochastic choice theory, because there is so much variability in people’s choices given the same stimuli.
Now what can be done with AI — it can be done even without AI — is a program that observes an individual’s choices. It will be better than the individual at a wide variety of things. In particular, it will make better choices for the individual because it will be noise-free. And, as we know from the literature that Colin cited, there’s this interesting tidbit: if you take clinicians and have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better at predicting the outcome than the clinician does. That is fundamental; it is telling you that one of the major limitations on human performance is not bias, it is just noise.
I’m maybe partly responsible for this, but people now when they talk about error tend to think of bias as an explanation. That’s the first thing that comes to mind. Well this is a bias and it is an error and in fact most of the errors that people make are better viewed as random noise, and there’s an awful lot of it. And admitting the essence of noise means something and it has implications for practice, and one implication is obvious: You should replace humans by algorithms whenever possible, and this is really happening. Even when the algorithms don’t do very well, humans do so poorly and are so noisy that just by removing the noise you can do better than people. And the other is that when you can’t do it you try to have humans simulate the algorithm. And that idea, by enforcing regularity and processes and discipline on judgment and on choice, you improve and you reduce the noise and you improve performance because noise is so poisonous.
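The clinician result Kahneman describes — a simple model of the judge outperforming the judge — is easy to see in a toy simulation. The sketch below is hypothetical (invented cue weights and noise levels, not data from the literature he cites): a criterion depends on two observable cues; the "clinician" weighs the cues roughly correctly but adds day-to-day judgment noise; a linear model fit to the clinician's own judgments keeps the policy and drops the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a criterion depends linearly on two observable
# cues, plus irreducible randomness.
n = 5000
cues = rng.normal(size=(n, 2))
criterion = cues @ np.array([0.7, 0.5]) + rng.normal(scale=0.5, size=n)

# The "clinician" uses roughly the right cue weights but adds judgment
# noise: the same case would get a different rating on a different day.
clinician = cues @ np.array([0.6, 0.6]) + rng.normal(scale=1.0, size=n)

# Fit a simple linear model of the clinician's own judgments
# (the "bootstrapped" judge).
w, *_ = np.linalg.lstsq(cues, clinician, rcond=None)
model_of_judge = cues @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("clinician vs criterion:      ", round(corr(clinician, criterion), 3))
print("model-of-judge vs criterion: ", round(corr(model_of_judge, criterion), 3))
```

The model correlates with the criterion better than the clinician it was fit to, even though it was never shown the criterion — exactly because it reproduces the clinician's cue weighting without the noise.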
Now [inaudible] said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face, an expressive face, a face that changes expressions, especially if it’s sort of baby shaped — I mean there are cues that will make people feel very emotional. Robots will have these cues. Furthermore it is already the case that AI reads faces better than people do and can and undoubtedly will be able to predict emotions and development in emotions far better than people can. And I really can imagine that one of the major uses of robots will be taking care of the old, because I can imagine that many old people will prefer to be taken care of by robots, by friendly robots, that have a name, that have a personality, that are always pleasant. They will prefer that to being taken care of by their children.
Now I want to end on a story. A well-known novelist — I’m not sure he would appreciate my giving his name — wrote me some time ago that he’s planning a novel. The novel is about a love triangle between two humans and a robot. And what he wanted to know is how the robot would be different from the individuals, and I proposed three main differences. Now one is obvious: The robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The other is that the robot would have much higher emotional intelligence. And the third is that the robot would be wiser. Wisdom is breadth. Wisdom is not having a narrow view; that’s the essence of wisdom. It’s broad framing, and a robot will be endowed with broad framing. And I really do not see why, when it has learned enough, it will not be wiser than we are, because we don’t have broad framing. We’re narrow thinking, we’re noisy thinkers, it’s very easy to improve upon us, and I don’t think that there is very much that we can do that computers will not eventually be programmed to do.

What binds men and women is children and sex, in addition to high-falutin notions. I suppose it would be possible to create AI with that capacity, but what would be the point?
“And so it’s very difficult to imagine that with sufficient data there will remain things that only humans can do.”
Of course, where humans excel is in making choices when the data is insufficient. Also, in following an instinct that, on its face and in the face of the data, is irrational. These irrational, against-the-data choices are what have advanced humanity by leaps. Sufficient data is okay for incremental advancement, until it is not. See communism.
“All mankind’s progress has been achieved as a result of the initiative of a small minority that began to deviate from the ideas and customs of the majority until their example finally moved the others to accept the innovation themselves. To give the majority the right to dictate to the minority what it is to think, to read, and to do is to put a stop to progress once and for all.”
Mises, Ludwig von (1927). Liberalism
All of mankind’s progress has been due to noisy thinkers.
I think by ‘noisy thinking’ he means allowing oneself to be biased by irrelevant information or distracted by pressures that should not influence a decision. The small minority you talk about is not irrational; its members are often the rational ones who look at what really matters while the majority has not yet recognized it. Ultimately, only decisions and actions based on correctly understood, sufficient data and on rationality will be adopted in the long run, because the other options will lead to failures.
His statement ‘Humans should be replaced by algorithms wherever possible’ may seem dramatic, but if you think of decision making as carefully considering possibilities, weighing pros and cons, and choosing rational actions based on known information, then that is ‘calculation’, so machines should be used wherever we want to ensure rational, unbiased calculation. Today no one would say that checkout registers should be replaced with humans who add and subtract.
In fact, mankind’s progress has been due to broad and wise thinkers (broad framing, as you can see in the last paragraph).
I think an AI can try “against the data” strategies and see if they work, just like humans can. Isn’t that just an exploration-vs-exploitation problem, as seen in reinforcement learning? These algorithms can recognize that optimizing a known strategy runs the risk of not finding a better one, account for that, and search accordingly. I’m no expert, though.
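The exploration-vs-exploitation point can be made concrete with the standard epsilon-greedy bandit. The sketch below is illustrative (the arms and payoff probabilities are invented): an agent that only exploits its early estimates can lock onto the worse arm forever, while one that spends a small fraction of its choices on "against the data" actions finds the better arm.

```python
import random

random.seed(42)
payoffs = [0.3, 0.7]  # true win probability of each arm (unknown to the agent)

def run(epsilon, steps=10000):
    """Play a two-armed bandit; explore with probability epsilon."""
    counts = [0, 0]
    values = [0.0, 0.0]  # running estimate of each arm's payoff
    total = 0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)                  # explore: ignore the data
        else:
            arm = 0 if values[0] >= values[1] else 1   # exploit: follow the data
        reward = 1 if random.random() < payoffs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / steps

exploit_only = run(epsilon=0.0)   # never deviates, can get stuck on arm 0
with_explore = run(epsilon=0.1)   # deviates 10% of the time
print("pure exploitation:", exploit_only)
print("10% exploration:  ", with_explore)
```

With epsilon = 0 the agent starts on arm 0, never samples arm 1, and averages roughly the worse arm's payoff; a little exploration is enough to discover and then mostly exploit the better arm.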
“And so it’s very difficult to imagine that with sufficient data there will remain things that only humans can do.” Said another way, the AI only knows what it knows from the data it has. Until we get to the Artificial Superintelligence (ASI) that can roam to acquire data freely, can learn, infer, and draw conclusions, the issue of sufficient data will be the limiting factor.