
AI and humanity’s ‘fourth age’: A long-read Q&A with technologist Byron Reese

AEIdeas

Byron Reese, author of the new book “The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity,” organizes human history so far by framing it around three main inventions. We entered the first age when we learned to harness fire and language, the second when we mastered agriculture, and the third came with the invention of writing and the wheel. Now we are on the verge of a fourth age, thanks to developments in artificial intelligence, and to hear Reese tell it, this fourth age promises to be just as transformative as its predecessors.

Byron Reese is the CEO and publisher of the technology research company Gigaom, and the founder of several high-tech companies. You can follow him on Twitter @ByronReese.

What follows is a lightly-edited transcript of our conversation. You can download the episode by clicking the link below, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.

PETHOKOUKIS: The obvious question people have picking up the book is, what are the three other ages and how does this fourth one fit in? If the first age, as you write, is about harnessing fire and language, and the second age is discovering agriculture, and the third age takes us up till now but began with the invention of the wheel and the invention of writing — all of these are pretty significant inventions and innovations and technologies. If the fourth age is AI and robots, can innovations there ever compare to those other three human advances? They are pretty monumental.

That is a fantastic question. You’re right. We’ve had technological innovation as long as we’ve been a species and I kind of arbitrarily divide that up into these ages. When a technology comes along, sometimes it just helps us out, like penicillin for example. But sometimes something happens that is so profound that it just changes the course of human history forever. Speech was the first one; it’s hard to imagine humans without speech. That’s a technology in its own class.

Now to your question: Is it fair to say AI and robots could be of that caliber? My thinking was that we are number one on this planet because we’re the smartest things on it. And AI is a technology that makes us smarter. And so if we all went to bed tonight and woke up tomorrow with 10 more IQ points or 20 more IQ points, would that alter human history? I think it would.

And I kind of think that because the internet came along and it was a really simple technology — all it did was say, wouldn't it be great if computers could talk to each other using common protocols? They're not smart. They can just communicate with each other, and just by connecting computers, that gave us something like $25 trillion in new wealth. It transformed society and all of the rest. The question is, what would happen if technology suddenly made us smarter? If we could outsource what our brains do to a machine, and what our bodies do to a machine through robotics, won't that alter the trajectory of the planet in a meaningful way? That's the proposition I explore.

Back in the early 2000s I went to a conference on transhumanism, the belief that people are going to incorporate technology to a greater extent into our bodies and we’re going to become something not even recognizably human; we will become post-human. They were very interested in artificial intelligence and nanotechnology. Listening to folks then at that conference, you would have thought this was all right around the corner and certainly by almost 2020 we would see a significant and profound change in our lives.

I don’t think what we’ve seen so far would qualify as the changes that they expected. And even when you read folks today like the economist Robert Gordon, they really push back at the idea that any of these technologies will be nearly as transformative as electrification or better public health or the internal combustion engine. All the big inventions have been invented. All the big discoveries have been discovered. And therefore we’re in for a period of slower economic growth and certainly not the kind of radical societal transformation that one might think if you’re viewing these technologies the way you would view the wheel for instance. So what are they getting wrong?

Well, I do disagree with that viewpoint. Start with the centrality of intelligence and say that everything we’ve accomplished has come from the fact we’re intelligent. We’re not the strongest. We’re not the fastest. We don’t have the best camouflage. We have almost nothing going for us as a species other than we happen to be really smart. Now imagine if all of a sudden you had a way to augment that — and I’m not even talking like science-fictiony augment, I just mean stair-step it up a notch. Our IQs max out at 200 or something and we managed to do everything we could do with that. Just imagine if you get some incremental increases. Or think of it this way: one person with a smartphone that has an AI electrician in it can be a pretty decent electrician, and somebody with an AI in their smartphone that can diagnose illness might not be the best doctor but they are all of a sudden empowered in a really profound way.

AI is a pretty simple technology philosophically. It says, let’s take a bunch of data about the past and let’s study it and make projections about the future. That’s all it is: you take a bunch of data, you study it, you make projections. So what does that mean? That means somehow we now have a collective memory for the planet. All of this data — your actions and my actions and everybody’s actions — are now being remembered.
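
That take-the-data-and-project idea can be made concrete with a toy example. The sketch below, with entirely invented numbers, fits a straight line to past observations using ordinary least squares and extrapolates one step ahead. It illustrates the principle Reese describes, not any particular system he mentions.

```python
# Toy illustration of "study the past, project the future":
# fit a line to past observations, then extrapolate.
# All numbers here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Past data: period number vs. observed demand (hypothetical).
past_periods = [1, 2, 3, 4, 5]
demand = [12, 15, 21, 24, 30]

a, b = fit_line(past_periods, demand)
prediction = a * 6 + b  # projection for the next period
```

Everything interesting in real machine learning is in making that "fit" step richer, but the shape of the task is the same: remembered data in, projection out.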

So for hundreds of thousands of years of human history, we learned something and forgot it, learned something and forgot it. Now what would happen if everybody's life became the data that made every future decision better? That's a big deal. It would be a big deal if all of a sudden we broke out of this learning-and-forgetting cycle and we learned and remembered. That's what cheap sensors and all of this digitization of our lives are creating. Everywhere I go and everything I do creates a digital echo that becomes the data which is used to make other people's lives better.

Just ask how optimal our lives are now. Somebody, maybe Henry Ford, said we don’t know one tenth of one percent of anything. It is not like we know 90 percent of it and so all the big stuff is behind us. We know nothing about nothing. Take this example: There was an antidepressant that some people began taking and they said their smoking cravings went down; that’s how we created a great smoking cessation drug. The point is we stumble through our lives like drunken sailors on shore leave. We make accidental discoveries. That’s how like 100 percent of all the progress we’ve made has happened, and all of a sudden we’re taking control of the data that our world creates; we’re learning how to study it and extract from it.

So the idea that all the good stuff has been invented is beyond farcical. It conveys such a lack of imagination. It is too moment-centric. I just don’t think that’s the case. I think the best days are yet to come.

There are different flavors of artificial intelligence. What are you talking about?

I use a reasonably constrained definition, but you’re right. It’s an unfortunate term because it means two completely different things. On the one hand what we call more formally narrow AI is a computer program that does just one thing. That would be like your spam filter in your inbox. Or what routes you through traffic. And that’s what we’re getting good at and that’s really what I’m talking about. The other kind of AI is what you see in science fiction: general intelligence, a creative computer and all of the rest. Nobody knows how to build that, or at least nobody has demonstrated they know how to build it. It may be centuries off. It may be impossible.
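
The spam filter is a good mental model for just how narrow these systems are. The toy below does exactly one job and nothing else; the word list and threshold are invented for illustration, and a real filter would learn statistical weights from labeled mail rather than use a hand-written list.

```python
# Toy "narrow AI": a program that does one job (flagging likely
# spam) and nothing else. Real filters learn weights from data;
# this word list and threshold are invented for illustration.

SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def looks_like_spam(message, threshold=2):
    """Count suspicious words; flag the message if too many appear."""
    words = message.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in SPAM_WORDS)
    return hits >= threshold

flagged = looks_like_spam("URGENT: claim your FREE prize now!")  # True
```

Ask this program to route you through traffic, or to do anything besides score words against its list, and it simply has no answer; that is the sense in which such systems are "narrow."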

But I don’t think you need that. I think just the simple idea is enough: that we as a species are learning how to save the data our lives generate, that we’re learning how to study that in a systematic way, and that we’re learning how to use that data to make better decisions. Better decisions are these amazing things because they compound in value over time. If every day I make a thousand decisions and I just do like 10 percent better than somebody else, over the course of time that compounds and you pull away exponentially eventually. That is what I think we can do with just simple AI.
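
The compounding claim is simple arithmetic. Under the purely illustrative assumption that better data gives you a 10 percent edge per period, the gap versus a baseline grows geometrically:

```python
# Compounding a small decision-making edge, per the argument above.
# The 10% figure and the period count are illustrative assumptions.
edge = 1.10        # outcomes 10% better each period
n_periods = 25

advantage = edge ** n_periods  # relative gap vs. a 1.0 baseline
# After 25 periods the cumulative gap is already more than tenfold.
```

That is the whole mechanism: no single decision matters much, but a persistent small edge, compounded, eventually pulls away from the baseline by any margin you like.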

So we don’t need to have science fiction AI. We just need to have better and better narrow AI being diffused more widely throughout our lives and the economy, and that alone will be enough to merit a society with its own name: the fourth age.

I believe so, or at least that’s the proposition I’m putting forward. I will point out that if all AI advances stop today, we probably have 20 years to catch up to just do what we know how to do — to, like you were just saying, apply it to more areas of our life. I mean if you just walk through your life and ask how data could inform every decision you make throughout the day, that collectively at a societal level will add up and amount to something.

And when you hear people who are concerned about it, that we’re going to lose control of it, that it’s going to end up killing us — Elon Musk for example is a pro-technology person but very worried about AI — are they only concerned about general intelligence AI or do some of these concerns also apply to the narrow application of AI?

Fantastic question. They apply to general intelligence AI, to this technology that nobody knows how to build. And the logic is simple. It begins with an assumption that not everyone holds: that humans are machines, and your brain is a machine, and your mind is a machine, and consciousness is mechanistic. It's a reductionist view of the universe that says if you could just break down our brains enough, you could build a mechanical version of them. And here's the logical leap. It says if you build a mechanical version, then it'll have an IQ of 100, then 200, then 1,000, and so on, until it won't even know we exist.

So that is the narrative that says this is an existential threat and that’s what people worry about. The one caveat I will say is there are a lot of people who are worried about narrow AI in a very narrow way, which is they’re worried about the job situation. Is it going to take away all the jobs? Nobody thinks your spam filter is going to go rogue and take over the world. So there are worries around narrow AI, but they’re very narrow in terms of employment, which by the way I don’t think is an issue.

Well, since you brought it up, let’s dig into it. It’s sort of amazing: We have moved rather quickly from being really enthusiastic about what’s happening in this field to quickly jumping to the extreme negative. Even if we don’t get the Terminator scenario, we’ve quickly jumped to the conclusion there will be widespread job loss and five people will own all the robots and have all the wealth. The job loss scenario seems to be very prevalent in people’s minds. I think people can very easily understand how they can lose their jobs. They find it much harder, I think, to understand the other scenario where they work with the robots and we enter a society of abundance. The society of scarcity I think just seems far more relevant and easier to imagine.

I think you're right. I think we have seen enough movies where what you just described happens — that people, including myself, do something known as reasoning from fictional evidence, and it's all very compelling. But let me paint you a not-even-rose-colored view of the past. Setting aside the Great Depression, which was not caused by technology, over the course of 250 years in this country, unemployment has always been between 4 and 10 percent. Now, I've tried very hard to figure out what the half-life of a job is, and I think it is 50 years: every 50 years, one of every two jobs is lost. From 1850 to 1900, half of all the jobs vanished; they were largely agricultural. From 1900 to 1950, same thing. From 1950 to 2000, half the jobs vanished again; a lot of those were manufacturing jobs.
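
The half-life claim is ordinary exponential decay, and the arithmetic is easy to check: if half of all job types vanish every 50 years, only an eighth of the jobs of 1850 should still exist by 2000.

```python
# Arithmetic behind the "half-life of a job is 50 years" claim.
def surviving_share(years, half_life=50):
    """Share of original job types remaining after `years`."""
    return 0.5 ** (years / half_life)

share_in_2000 = surviving_share(150)  # 1850 to 2000: one-eighth remain
```

The striking part of the argument is not the decay itself but that, as discussed below, employment and wages held up throughout it.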

So you have to ask, did we maintain full employment and see rising wages while losing half the jobs every 50 years? And I’ll push it further: If I gave you a graph of 250 years of unemployment data and I said look at that graph and find where the assembly line was invented, find where we replaced all animal power with steam power in 22 years, you can’t see it anywhere in the data. So what we know empirically is that we can have these amazing new technologies, we can destroy vast numbers of jobs, and it isn’t that we recover — it is that you cannot even see it happening.

Now, is AI somehow different? Most people agree technology is great. It makes awesome new jobs like a geneticist but then it destroys these low-end, low-skill jobs like order-taker at fast food restaurants. And then people say this: Do you really think that order-taker is going to become a geneticist? Do they really have the skills to do these new jobs? And the answer is no, not at all. What happens is a college biology professor becomes a geneticist, and the high school biology teacher gets the college job, then a substitute teacher gets hired on at the high school, all the way down the line.

The question isn’t if the displaced people do all the new jobs. The question is, can everybody do a job a little harder than the job they have today? That is 250 years of economic history in this country; technology makes great new jobs, and it destroys bad ones, and everybody shifts up one notch.

Now, AI is effectively a productivity tool. Like all technologies, it's a productivity tool, and productivity tools cannot destroy jobs. If you thought they could, you would propose legislation requiring people to work with one arm tied behind their back, because if you did that, productivity would decline: you would need a lot more people to grow the food, a lot more people to do everything, and you would create an enormous number of jobs.

Unfortunately wages would plummet because everybody’s productivity is down. Now, if that’s bad, then technology, AI included, is by definition good. It’s like adding a third arm. It makes people more productive. It increases productivity, which by definition will increase wages for everyone.

I’m pretty optimistic about technology and what it can do for our society. But I think something is missing — and that’s one reason I really like your book. I have a lot of books flowing through my office, and many of them are about how AI is going to take over all the missile systems and it’s going to launch a nuclear war, or about how there will be absolutely no jobs.

What's specifically missing is some sort of plausible story that people can understand, one showing that AI will not create a future in which everyone is either a geneticist or the geneticist's butler. There'll be a middle ground, and society in the future will still be understandable. Maybe the trend unemployment rate will be higher or lower, but it will be a recognizable society with a vast spectrum of jobs at different skill levels, and there will be a place for everybody. That's the concern.

Right. I mean, if you had gone back 25 years in time when the internet came out, and you had said, “Hey, this thing’s going to come out in 25 years, millions of people are going to use it. What’s that going to do to jobs?” You might have said, “You know, I think the stock brokers are in bad trouble, and the travel agents are in bad trouble, and the yellow pages are in bad trouble.” And you would have been right about everything. But what we all would have missed though are all the things that it made.

Nobody saw Etsy. Nobody saw eBay. Nobody saw Airbnb, Uber, Amazon, or Google or Baidu or Twitter or Facebook or anything like that, and all of the millions of jobs that came out of this technology. So I would say, if you can hold a tool in your hand that has AI in it, your productivity just went up, and your wages will go up with it. It's as simple as that. Anything that makes you more productive as a person is good for you, and it's good for wages.

Right, and if you were forward-thinking, you could point out some of these transformations in the job market. Of course, the classic example is bank tellers. You may have thought the bank teller could be replaced by the ATM and we'd have no bank tellers, but what happened is that banks could open a lot more branches. Maybe there are fewer bank tellers per branch, but there are more branches, so you did not see a collapse in the number of bank tellers. The labor effects don't necessarily work out the way people imagine. Sitting in 2018, we have an unemployment rate in this country of under 4 percent and an economy filled with jobs that people didn't know about years ago.

Maybe because we’ve gone through this terrible recession and slow recovery, I wonder if we’re a lot more risk-averse. My concern is that will then feed into policy, where people will be more reluctant to count on these new technologies bringing jobs, and people will be less tolerant of creative destruction — which is really at the heart of your book. You believe that there is creative destruction, and ultimately the gains will be a net positive even if we have a society that looks a lot different. I wonder if people will tolerate creative destruction in the future as they have in the past.

It’s a valid point. But I think we just need to remember anything that makes people more productive is, by definition, pro-human and pro-us. People embrace tools that make them more productive — it’s never the micro narrative that says, “I can give you some tools that will make you more productive. Would you like them?” Because the answer is obvious: “Yes, of course.” It’s only kind of this abstract, macro, what-would-happen-if scenario that gets posed. I think it’s all going to work out well.

I also read and write about autonomous cars, and I tend to focus on the positives — less time wasted just staring at a bumper 10 feet ahead of you on Interstate 95. You can use your time more productively or more interestingly. And for the tens of thousands of people who won't be dying out on the highways, it sounds great. Autonomous vehicles will be really great.

So I’ll write or tweet about autonomous vehicles, and people say, “Well, that’s just great. The government will be able to track you wherever you go, and pretty soon it’ll be illegal to drive your own car. I love driving, and the car culture is a big part of America. Now you want to take that away from us?”

Sometimes, the pushback isn’t what you would naturally expect, so I’m wondering what kind of feedback you’ve gotten on the book. Have you gotten a lot of pushback saying this is utopian thinking and there will be all these downsides to technological advances?

No, no. I think people, for the most part, are innately optimistic, and they’re surrounded by pessimism. Look, caution has served us well as a species. Somebody smarter than me said back in the day, it was far better to see a rock and think it was a bear and run away than to see a bear and say, “Eh, it’s just a rock,” and stay put. We became skittish by nature and that has served us well.

I will say that technology has always changed us, and there were people who mourned the passing of the horse culture when the car came along. And there were people who were against the car for all kinds of reasons — they’re noisy, and they cause pollution, and they’re death traps, and all of the rest. And, in the end, the technology persuades them or outlives people, and that’s kind of what happens.

Did you know that in ancient times our memories were much better than they are now? In a preliterate time, when you couldn't read anything, the only way to know anything was to remember it. And so people had some amazing memories. Then technology came along, and even Plato said writing is not a technology that's going to help you remember anything; it is only going to remind you of things you've forgotten. And yet our memories are much worse, and we're fine with it.

And, likewise, we know Augustine, in the 400s, was the first person on record to describe seeing anybody read quietly, read to themselves. Before that, everybody read out loud. The idea that words would come off a page and go through your eyeball into your brain would have just seemed like witchcraft.

We always had these growing pains with new technology, and then we wouldn’t trade it away for anything. The smartphone was the same way, as were video games, online dating, even buying stuff online with a credit card.

As we get to the end, let me ask you this, because this comes up a lot in Washington. There are a lot of policymakers who are very concerned that America is losing the AI race. China seems to be sinking all this money into research. What do you think is the right framing? Can we look at it as a race like the Space Race, that one country needs to become dominant in AI or else it will lag the other country and be less powerful? What do you think of that metaphor?

I don’t really buy that narrative, to start with.

The narrative that China is ahead?

Well, yeah. I would say it this way, which is: I don’t know that China is ahead of America in AI — it’s like, Google does AI, and Baidu does AI, and Facebook does AI, and to compartmentalize it by nation I think is just not really how it happens. It would be akin to saying, “Oh, China is ahead of America in electricity.” Everything’s going to be electrified. Everything is going to be AI. This is not going to be like a giant Space Race to Mars. Everything is going to be made smart, and it’s going to be made smart in a million different ways by a million different companies in all corners of the world.

Yes, it’s not as if you invent this technology, and then you keep the technology away from others; these technologies diffuse. It’s not as if there’s a wall around a country keeping all the good information in. Many people benefit, and most of the benefits go to people who didn’t invent the technology. Most of the benefits from building Google or Amazon go to the consumers, not necessarily to Jeff Bezos.

Yes, and to directly discuss the business environment in the US, I think if you’re going to start a business anywhere, this is still the best place in the world to do it. And so if you ask in the larger sense how the US is doing in terms of its economy, I think you only have to look at the internet and you have to say: Google, Amazon, Facebook, Twitter, eBay, Etsy — just rattle down all of them and each one of those companies is a testament to the amount of innovation that happens.

Now there are big, impressive companies all over the world and in China as well. And that’s a testament to them, but innovation is by no means somehow impaired or maimed in this country right now. There are so many things you can invent. You can take any business that exists on this Earth and say, “How can I apply AI to that?” And that’s it. That’s like a whole new industry, right there.

You do spend a considerable amount of time talking about the sci-fi AI, or superintelligence, or general AI — the kind of thing that a lot of people think about when they think about artificial intelligence — and what that means for us as humans. Do you think we will ever have general AI? And if so, when will that happen?

I could ask you three questions about very basic beliefs you have, and from those three questions, you can know. The first question is: Are you a machine? You have to start with that question.

I am not.

Well, if you're not a machine, then no machine can ever be made to do what you do, and therefore we will never make general intelligence. It all begins with the mechanistic view of the world. I would say that of all the people I've had on my AI podcast who are in the AI industry, 95 percent of them, when I ask, "Are you a machine?" go, "Well, of course! What else is there, without having to appeal to superstition?" They think the question is beyond debate. And yet, when I put the survey question up on my website, only 15 percent of people think that.

So not many of us think we’re machines. We think that we have souls.

So the argument is this. It says: You've got a brain, and we don't know how it works. And that's being generous, to say we don't know how it works; we don't even know how a single thought is encoded. Then you have a mind. Your brain does these things we don't understand, like creativity, imagination, and emotion, that your liver doesn't do. So how does your brain do these things?

And then you have consciousness. And consciousness is that you experience the world. You can feel warmth — a computer can only measure temperature. A computer can’t feel warmth.

And all three of those things we don't know how to explain scientifically. You can make a great case that it's hubris to say, "Oh, but we're going to build all that in silicon." You don't know that; you don't know how any of that stuff works. How can you say you're going to build it?

But then the answer is: We are just machines. That's all we are. Physics governs us, and if physics governs us, we will figure it out. So that's the debate at its core. If you are a machine, you'll eventually see the stuff from science fiction. If you're not a machine, it'll never happen.

Before we go, you mentioned you have a podcast. Where can people find it?

You can find me at ByronReese.com or follow me on Twitter @ByronReese. The podcast is called "Voices in AI." It's an hour long, and it's very dry. So if you're interested in AI, I would encourage you to listen to it. If you're not, avoid it like the plague, because it will be more interesting to watch paint dry than to listen to that podcast.

I’m both enticed and yet duly warned.