A public policy blog from AEI
Should we be so pessimistic about the role of technology in our lives? Has 5G been overhyped? And when will autonomous cars be a reality for consumers? On this episode, Andreessen Horowitz’s Benedict Evans discusses why we respond the way we do to technological change and where the tech industry is taking us next.
Benedict Evans is a partner at the Silicon Valley venture capital firm Andreessen Horowitz (‘a16z’) and also runs a popular email newsletter. What follows is a lightly edited transcript of our conversation. You can download the episode by clicking the link above, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.
PETHOKOUKIS: Maybe it’s my perception, and maybe it’s wrong, but it seems that we are in a period of techno-pessimism. It seems most of the things that I read are about the problems caused by advancing technology, whether it’s concerns about privacy, concerns about job loss, concerns about too much corporate power, concerns about tech’s impact on democracy, oligopoly, inequality. Your life is technology and I assume that you think it’s going to continue to make our lives better in the future, not that there aren’t trade-offs. So what is the techno-optimist’s case right now for where the technologies that you track will take us?
EVANS: There’s a bunch of different ways of answering that question. One of them is that when a technology is very new, people say it won’t work, and then it works and people say, ‘oh my God, this is amazing, look what I can do.’ Then over time you discover both the positive and the negative implications of that, and then a little bit further on things settle down and you just kind of work out how it is going to be. You could apply that to everything from railways, to aircraft, to cars, to television. There’s always a pendulum that swings back and forth a couple of times before you work out the equilibrium of how we understand these things and what the good things and the bad things are around them. We’re definitely in one of those kinds of pendulum swings right now. Everybody on Earth has got a smartphone. Boring. But have you seen this terrible thing that’s happening? We’ve had the good stuff and now we’re kind of working out what we think about some of the bad things that might come out of that. How optimistic is one? Well, I think we’re still in the pretty early stages of discovering what software can do, what the fact that everybody in the world is connected can do. We’re in the deployment phase as opposed to the creation phase. Now, as the Frenchman said, ‘hell is other people,’ so sometimes connecting everybody has some bad consequences, but over time one hopes that the consequences will be good.
Do you think there was too much optimism even about the technologies where they are today that we would be able to connect everybody, and have access to information, and be able to share our ideas and that would produce more fulfilling lives, a better society, better democracies? Do you think there was too much optimism about that? Certainly, when I pick up the newspaper, it seems like all this connectivity is having some huge externalities that perhaps we weren’t talking about a few years ago?
If you’re going back to the beginning, any kind of radical new thing tends to come with utopianism, because you have to be kind of utopian and crazy to believe in it at all. That’s what the internet looked like 25 years ago. People said it was going to end war and end conflict, and there would be universal perfect democracy, and there would never be any more hatred because everyone would understand everybody in the world. Obviously, that missed some pretty basic things about how the world works and about how people work. But people said the same thing about aircraft: there would be no war because of aircraft, because there are no borders in the sky.
There was a movie, I think from the 1930s, called “Things to Come,” based on H.G. Wells’s “The Shape of Things to Come,” which showed aircraft run by an elite bringing peace to a fallen world.
There’s always a little bit of utopianism in the creation of some radical new thing, because you have to be a bit utopian, or have a utopian mentality, even to believe that such things are possible and worth doing. Then you put it in the hands of billions of people, and not everybody on Earth is very nice, and not all of the dynamics of human interaction always produce positive outcomes. That gets translated into this new form, and sometimes that results in serious problems, sometimes it results in a moral panic. Generally, it produces both. Think about how people used to talk about television turning us all into zombies, or how novels were going to make young ladies lose any sense of morality, which was a common talking point toward the end of the 19th century. There’s always a pendulum swing. Part of this is a collision of the capability with humanity, and as I said, not everybody on Earth is nice and not all dynamics of human interaction necessarily produce positive outcomes, but I don’t think we’d like to get rid of television or aircraft or electricity. The long, general direction of humanity tends towards improvement.
We had the economist Robert Gordon on the show the year before last, and he’s been famously pessimistic about the broader impact of these new technologies, particularly artificial intelligence, as game-changing technologies that will really change our lives, that will produce a lot more economic growth, that will make us a lot more productive. As you look at these technologies today, what can we expect our lives to look like 10 years from now, 20 years from now? Will there be much faster growth? Are we all going to be more productive? Will there be a noticeable difference because of these technologies?
In this context, I will quite often refer to the Billy Wilder and Jack Lemmon movie, “The Apartment,” from 1960, where Jack Lemmon is a clerk in an insurance company and there are 10,000 or 15,000 people in this building, and everybody has a typewriter and a Rolodex and an electromechanical adding machine. Basically, everybody in that building is a cell in a spreadsheet, and Jack Lemmon’s character spends 20 years of his life doing something that today is done in two or three cells of a spreadsheet. Then they bought a mainframe in, say, 1965, and then we went through successive waves of automation after that. The thing about each wave of automation is that it gets rid of a bunch of jobs and creates a bunch of new ones. There’s quite often some pretty painful friction around that process, but I don’t think any of us would want to go back to a world in which 90% or 95% of us were agricultural laborers, or 80% or 90% of us were doing repetitive manual work in heavy industry, bashing pieces of metal with a hammer in our hands. That process is a conversation that an academic economist could talk about much more than me, but my point is I don’t think there’s anything specific to machine learning that is different from just that process happening over and over and over again. We no longer have lots of typesetters working in printing works and in newspapers, but we have a lot of graphic designers. Job creation continues, just in parallel with the destruction of jobs. I should point out, again going back to “The Apartment,” Shirley MacLaine plays an elevator attendant. Her entire job is to stand in the elevator, and someone gets in and says ‘floor 15, please,’ and she presses button 15. In the 1950s or ’60s, I think the US peaked at 80,000 elevator attendants. Those jobs all went away because of automation. I don’t think we want them back.
I’ve certainly heard some of these analogies before, and certainly the historical lesson, which is just to focus on jobs, jobs will be destroyed, jobs will be created, and we’ll have a much higher standard of living.
To go one level into this, there is one argument that says this is what always happens and there’s nothing new here; it’s just another piece of automation, like all the pieces of automation we’ve had before. The other narrative would say we’ve been automating progressively higher and higher level human functions. There’s a famous Russian painting by [Ilya] Repin called “Barge Haulers on the Volga,” and it’s a painting of 20 guys leaning against the cable, pulling a barge up the river, and just on the horizon you can see a steamship. This is human beings as beasts of burden, and those jobs went away, and they weren’t replaced by other jobs for human beings as beasts of burden. Then you kind of go up and up and up, and so you require, so to speak, more and more intelligence and more and more insight, or skill, or craft, or genius, or whatever it is, as you automate more and more lower-level capabilities. That theory would be: eventually we’ll get to the top, and we’ll have gone all the way up, and there won’t be any jobs for anybody who isn’t capable of getting a PhD, because any other job lower than that will be automated away, or there won’t be any job that doesn’t require an IQ of at least 160, because every job below that will be automated away, so we’ll get to the end. There are two problems with that. The first is, even presuming that this is the case, we are not five or 10 or 20 years away from this; we’re 50 or 100 or 200 years away from the actual ability to create computers that have that level of understanding. Very, very few people actually working on machine learning think that we are within decades of that. The most aggressive people would say that maybe in 20, 30, or 40 years we might have HAL 9000, maybe, and a lot of people think even that is way too aggressive. That’s one argument: even if you buy all of this, it’s not like it will happen in the next five or 10 years.
The other argument is that that’s a fallacious way of thinking about the process of automation, that a graphic designer is not somehow more creative than a typesetter, it’s just different. I kind of tend toward that argument. Either way, come back in 30 years and let me know.
If we’re talking about a timeline, the timeline question I get asked most about is the autonomous car or driverless car timeline. When I first started talking about this, people would think that this sounds like science fiction and it’ll never happen. Then people started worrying, ‘gee, it’s going to happen. It’s going to happen any day now and truck drivers will lose their jobs and delivery people will lose their jobs.’ What is the reasonable timeline of when we will see autonomous vehicles, and again, we have to define what we mean by that — being in a vehicle where you can go check your email, take a quick nap, and it will be just fine operating in a major metropolitan area.
First observation: people talk a lot about truck drivers. From memory, there might be 5 million people listed as truck drivers, but actually only about half of those do long-haul trucking, which is the place where this is really applicable, because the FedEx driver is getting in and out of the truck and walking into buildings all day. Automating that is a totally different conversation. If you’re talking about automating a truck, or maybe automating a taxi, that’s very low single-digit millions of people in the USA. If you subscribe to the view that jobs are destroyed and jobs are created, that should fit inside the churn, particularly given the average age of a long-haul truck driver is somewhere in their 50s or 60s. That’s just an observation. It is actually not a great example of enormous disruption, because we’re not talking about a huge number of people in proportion to the overall working population. To answer your question, I think of autonomy as a question of where rather than a question of when, because some environments are much easier to automate than others. The freeway, even though you’re going faster, is much easier to solve than suburban streets, because you don’t have side streets, you don’t have children running onto the road, there are no trees, there are no signals, there’s no oncoming traffic, nobody is going to be stopped blocking the road. There’s much less that can happen that could be weird. There are way fewer corner cases, and it also becomes practical to have remote operation if you do hit a corner case. Freeways are at the easy end of the scale. Phoenix is relatively easy. It’s easier than San Francisco. San Francisco’s easier than Boston. Boston’s easier than Naples. Naples is maybe easier than Kathmandu or Bangalore. Pick a city. We’re not going to wait until it works in Bangalore before we deploy it in Phoenix.
There is this kind of terminology in the industry of Level 4, which can drive itself most of the time, and Level 5, which can drive itself all of the time, but ‘all of the time’ seems kind of meaningless to me. What do you mean by all? You’ll have a vehicle that can drive by itself for some portion of some journeys, and it will stamp on the brakes if somebody jumps out in front of you, but you need to be driving. Suppose Cambridge is autonomous only at weekends, and you drive to the edge of town and you have to park your car and take an autonomous golf cart, a self-driving golf cart. Is that golf cart fully autonomous? Well, it doesn’t leave Cambridge, but it drives itself around Cambridge at 15 miles an hour. I think there’s a much more multifaceted, multimodal conversation around what autonomy means. Imagine a garbage truck that can follow the crew down the road at walking pace, and when they get to the end of the road, they get in and drive it back to the depot. Is that autonomous or not? Well, it’s autonomous when the crew gets out, but then the crew gets back in and drives it. There are a lot of different aspects to what this will mean. It’s not as binary as just saying, well, there will be a self-driving vehicle.
For sure, and I was trying to get at the question a little by putting some constraints on it as far as location, but when I talk about this issue with people, there tends to be a lot of concern about privacy issues, and that we’re all going to be banned from driving cars at some point. They ask, ‘well, how will this make our lives better?’ Obviously, one answer is by having fewer accidents, with tens of thousands fewer people dying on the highways. To get to a point where that’s true, where you decrease auto deaths by 90%, how widely does this technology need to be adopted and how good does it need to be?
To give an accurate answer to that question, you have to analyze where the accidents are happening and how the places where accidents happen map to the kinds of roads that are easy to do autonomy on. I don’t know what percentage of the deaths are happening on highways, but highways will happen earlier than suburbs. Maybe all the deaths are in suburbs and all the autonomy will be on highways, for example. There’s a modeling question in there that you’d have to go and do. Second, I think there’s a question of herd immunity, for want of a better term. That is to say, even if your car just has adaptive cruise control, which means it will stamp on the brakes if somebody in front of you stamps on the brakes, that’s not autonomy at all. That’s a feature-phone kind of car more than a smartphone kind of car, but it will still start saving lives. The backup indicator that will stamp on the brakes, or the blind-spot indicators: as these functions start getting more widely deployed, and the interfaces get better so you don’t just have 300 flashing lights on your dashboard, then if your car won’t rear-end people, that prevents not just your accidents but somebody else’s accidents too. You get a kind of herd immunity effect once even quite a small proportion of these vehicles is in the fleet, and disproportionately fewer accidents will happen. Then you get a break point where you start having areas that are autonomy-only. If you have streets or particular freeways that are autonomy-only, or particular cities that are autonomy-only, or autonomy-only at night for delivery vehicles in Manhattan, then you start peeling off particular use cases, and then you ask, what are the accidents associated with those particular use cases? So as I said, it’s a modeling question, and I don’t have those numbers.
For the timing question, I would say we are within a couple of years of having a lot of semi-autonomous trucks on freeways. This is a relatively simple problem. A car that can drive itself from San Francisco to Boston without stopping, that’s more on the five-year, maybe even 10-year, horizon. One of the interesting things here is that the same set of dates means different things to different people. If you’re in the car industry and you say five years, well, that’s kind of halfway through this product cycle, or maybe through the next product cycle, so you’ve already ordered the equipment for what ships in five years. Whereas in the tech industry, when you say five years, you’re saying we think we probably know how to do it, but we can’t do it yet. When you say 10 years, you’re really saying, well, it’s not science fiction, in that we think we have the things that would make it possible, but we really have no clue how we would do it. Anything beyond 10 years and you’re getting into science fiction territory, or at least science fiction that doesn’t break the laws of physics. Whereas, again, in the car industry if you say 10 years, well, that’s the next model.
It sounds like we’re going to have cars with steering wheels and that humans can take over for some time.
I think one would think in terms of multiple decades. The obvious default way of thinking about this is a multi-decade transition (there’s also the electric transition happening separately, which is kind of independent of this): how long for the first autonomous car, how long for fully autonomous, whatever that means, how long for all vehicles in that category to be autonomous? How long before all new vehicles are autonomous? How long before enough of the installed base, enough of the fleet, is autonomous that things start changing? How long before you can start saying no manually driven cars are allowed on this road? There are a lot of huge, fuzzy assumptions in that, but it’s hard not to see it being a multi-decade process. On the other hand, you could imagine a city that says our garbage trucks are going to switch to this fully autonomous mode next year, and all the garbage trucks are fully autonomous. Or all of the delivery in Manhattan has to be electric and semi-autonomous in five years. You could imagine forcing functions like that driving particular use cases to switch much quicker.
How much of an obstacle is regulation? Do you think governments are figuring things out and creating a broad framework to facilitate this technology? Do you think policymakers are still struggling?
There’s a bunch of iteration. There’s a bunch of regulatory competition. There’s a bunch of dialogue going on around quite how one thinks about this stuff. Obviously, there’s an interim question of how you regulate people who are testing these systems and using them to collect data. What happens when a car is on the road with no steering wheel? By now, a lot of people have probably seen the New Yorker cartoon of two autonomous cars, one of them with ‘police’ written on the side, and the cop is saying to the driver, ‘does your car know why my car stopped it?’ You get to a bunch of questions around who is responsible for the safety system, where the certification sits, whose fault it is if it hasn’t been built properly, who is maintaining it. We’ll work that stuff out, in the same way that we had to work it out with automobiles over a century ago. I don’t tend to see regulation as something that slows this stuff down, partly because it’s not like it’s all sitting in the garage waiting to go right now anyway. I think it’s something that goes hand-in-hand with the evolution of different use cases. As I said, to me, the interesting bit of the regulation is not so much telling car companies what the mandated safety requirements are. It’s more questions like, as I said, what happens if Manhattan says all delivery vehicles at night and on the weekend have to be electric and/or have to be Level 3 or Level 4 autonomous.
Another technology I get asked a lot about is next generation wireless, 5G. I want to ask you whether you think it is overhyped or not hyped enough. I think about 5G and AI, and I think about China and you often hear that there is an AI race between the United States and China and there’s a race for 5G supremacy between the United States and China. I’m just wondering what you think about that metaphor of a race between these two countries and these technologies.
There is 5G, and then there is AI, and then there’s China, and those are very different conversations. As we talk about 5G, I’m kind of reminded of the immortal scene in “This is Spinal Tap” where he says, ‘well, this is one louder, isn’t it?’ 5G is one faster. For most general purposes, one should ask, what does 5G mean? The answer is exactly the same as what 4G meant. That is to say, it will add capacity, and to some extent speed, to cellular networks, and it will allow operators to continue adding capacity without the network falling down, because 4G is starting to fill up. Will any consumer really know that they’re using it? No. Will there be some fundamental thing that you can only do with 5G and couldn’t do with 4G? For a consumer, no. Almost certainly not. Now that said, as the pipe gets bigger, stuff starts happening that you probably wouldn’t have done on the old pipe. Very obviously, it would have been really tough to do Snapchat on 3G or even 4G, but when everybody has 50 or 100 megabits per second coming into their phone (never mind that most smartphone use actually happens at home on Wi-Fi), when you’ve got 50 or 100 or 200 megabits per second on your phone, then your assumptions about what kind of applications you can build change, so you get heavier on video, heavier on animation and audio and a bunch of stuff that you wouldn’t have done previously. That’s kind of the same as asking, what do I think about DOCSIS 3? The cable companies will deploy it, it will cost them a lot of money, and our internet connections will get faster. What’s the killer app for DOCSIS 3? Well, if you really had to answer, you would say Netflix, but I don’t think anyone was looking at it that way, and I don’t think that would really have been a useful way of thinking about it. That’s kind of the general answer for 5G.
However, what is new and interesting in 5G, particularly on the enterprise side, is what’s called network slicing, where you can segment specific pieces of capacity for particular services. Going back to our autonomy conversation, suppose you have a truck that’s fully autonomous on the freeway but needs human operation as it goes through the suburbs to get to a warehouse. Maybe you have somebody sitting in the truck, sleeping or reading or something. Or maybe you have remote operation, and the cellular operator says we will have guaranteed 50 megabit capacity, at a guaranteed latency, from this point on the freeway to the exit, and then from the exit along the route to the warehouse, and it doesn’t matter how many people are using Snapchat in the neighborhood; your capacity is ring-fenced and segmented, so this will always work. You couldn’t do that with 4G, because it was ultimately still sharing capacity with Snapchat or YouTube or Netflix or whatever anyone else is doing on the network. With 5G you can segment capacity, so there’s a bunch of interesting conversations around that, and around connecting devices. Say the security cameras in Times Square are connected by 5G; the connection will hold up even on New Year’s Eve because you segment the capacity. Pick an example. There’s a bunch of those kinds of interesting niche business applications (niche being billions of dollars in this context), but for a consumer or for the broader economy, it’s just one louder. It’s just more bandwidth, just like the new version of cable internet or the new version of DSL. It’s just like 4G: it’s just more bandwidth.
There is an interesting China conversation here, because when you talk to people at cellular or mobile operators, they will tell you that the best kit, both by price and by efficiency and engineering metrics, comes from Huawei, which is why they are uncomfortable with the US’s national security conversation around whether you buy Huawei equipment or not. The reason we’re having that conversation is that it’s not like we can easily do without it. It’s actually the best stuff, so people really want to buy it. So that’s the 5G conversation. Then there is AI. Just as an aside, there’s a joke in research circles that AI is anything that isn’t working yet, because as soon as it works, people say, ‘well, that’s just a database, that’s just computation, that’s just statistics.’ It’s extremely useful not to talk about AI, because that almost triggers a bunch of hand waving, and people imagine Skynet and HAL 9000. It is useful to talk about machine learning, which is a specific technology that works in a particular way, solves a particular class of problems, and doesn’t solve a bunch of other problems. Machine learning is often compared to relational databases. It’s a fundamental step change that enables a whole bunch of things, like the just-in-time supply chain. You couldn’t have a just-in-time supply chain without relational databases, but nobody looks at McDonald’s and says they’re a relational database company. Nobody looks at Walmart and says that’s a relational database company, and no one worries about whether China has more relational databases. At the time, the worry would have been Japan, and no one was sitting there saying, ‘oh my God, Japan’s got a lead in SQL.’ That was not a meaningful way of thinking about it.
That’s kind of a general way of thinking about machine learning, too. This is an enabling technology that will be in everything, and there are some bits of it where cutting-edge work is done and will give advantages to certain companies. The interesting thing about machine learning, partly because it grew out of an academic background, is that almost everything gets published and made open source immediately. As soon as Google works out how to do something, they publish it. It doesn’t stay inside the company as proprietary information in the way that making blue-light LEDs or making lasers or making better hard disks or memory chips was secret and proprietary. Everything gets published, which, of course, poses a bunch of interesting questions around things like export regulation. Are you going to tell people that they can’t export a mathematical formula? Then there’s the China conversation in here. On the one hand, again, I mentioned Japan before. Japan in the ’80s had this whole strategy around producing next-generation supercomputers. This turned out to be a complete dead end, first of all because supercomputers were kind of a dead end in the context of the broader tech industry, but also because it turned out this wasn’t something that responded very well to industrial policy. Computing became much more of a bottom-up thing, particularly around software. There is clearly a Chinese government push around machine learning. There is a certain amount of debate, and there is unquestionably a huge amount of good work being done in China, which reflects the fact that China now has a lot of good universities, which really wasn’t the case 20 or 30 years ago. There are a lot of good academics in all sorts of fields in China, as I gather; this isn’t my field, but that’s what one reads.
There is a certain amount of debate as to how much of the machine learning work being done in China is actually cutting edge and driving stuff forward, and how much of it is like Tencent, which has something like 15 different product teams, each with 35 machine learning engineers, basically building the same stuff over and over again for each other. That is to say, there is a certain amount of pushback from some people in China along the lines of ‘don’t count the patents,’ which is kind of like counting tractor production. It is not necessarily a great way of indicating what’s happening here. In fact, hilariously, the Chinese government and the relevant ministry produce a report every six months on the state of the internet. Going back to the late ’90s and until very recently, every six months they produced a table showing the total file size of all pages on the internet in China, in bytes. You get this 50-digit number in bytes. This is the total file size of all of the JPEGs on all of the servers in China. …There’s more web this year, comrades!
I think the China-America question is not really a tech question so much as a geopolitical question in which tech features. Just as fifty years ago we would have been talking about steel production or automobile production, and 100 years ago it would have been railroad production, now it’s tech production, whatever that means. I think you could take a 500-year view and say that in 1700, the big, powerful economies were the ones that had fertile land and lots of people and a peaceful, more or less competent government. That meant China and the Mughal Empire were the biggest economies on Earth. We’ve all seen these charts of percentage of global GDP over time. Then Western Europe invents a way to get a much bigger economy with the same number of people, and Western Europe and the USA do it, and most of the rest of the world doesn’t, for a bunch of reasons. In the last 20 or 30 years, China, and to some extent India, have started doing it too, which is some sort of free market and industrial economy. The result is that we are reverting to the mean, where if you’ve got a huge number of people and a stable government and more or less a rule of law, and you just let people get on with it, then you get rich. I made a chart about this a while ago: for example, there’s a point in time when Britain made all the railway locomotives, because Britain invented railway locomotives, and that didn’t last. Not because Britain started making bad railway locomotives, but because Germany, France, and America made them too, so Britain’s share of locomotive production went down because other places had people and economies. The biggest industrial economy went from being Britain to being Germany to being the USA. It didn’t then become China, even though China was bigger than the USA, because China had a civil war and foreign invasion and communism and Mao and everything else, but now they don’t.
Just as the transition went from Britain to Germany to America, it should deterministically go to China. This is not a tech conversation, this is a macro-global policy question, but all things being equal, that’s sort of what ought to happen. Very often the conversation around China reminds me of the paranoia about Japan in the 1980s, but the difference is that Japan has half the population of America. That conversation was basically: Japan is going to become a middle-class, industrial, free market economy. It will get much richer, and all things being equal it should rise to about half the level of America, but it shouldn’t end up being bigger than America, because Japan isn’t bigger than America. China is bigger than America, so all things being equal, you would deterministically expect that figure to go up. Now, there are all sorts of reasons why there are problems with that narrative, and reasons why it’s more complicated, and reasons why China might slow down, but I think that would be my super high-level deterministic model for thinking about this stuff, and particularly for thinking about why this is different from the paranoia about Japan.
I don't think I've used the phrase Big Tech thus far, but I'm going to use it now. There's a lot of interest in Washington in regulating these companies, maybe breaking them up in some fashion, and I think one of the foundations that intent is built on is the idea that the biggest tech companies, whether it's Google or Facebook or Amazon, are somehow different. Maybe it's all the data they have access to that makes these companies unassailable, forever-dominant companies. The churn we've seen in the past among supposedly forever-dominant companies has changed, no one can challenge these companies, and therefore we have to regulate them or break them up. Your high-level thoughts on that?
There are three or four separate pieces to that. One of them is: are these companies invulnerable, and can nothing change? Is there something different now compared to Microsoft or IBM or Standard Oil in the past? Another is that your answer to that question can be either yes or no, but that doesn't mean you don't regulate specific things today anyway. It's like the old Keynes line, 'in the long run, we are all dead': even if you're absolutely convinced that in 20 years Google, Apple, Facebook, and Amazon will have disappeared off the face of the Earth, that doesn't tell you whether or not you should regulate X or Y today. That should actually be an independent conversation. Then there's a subset of that, which is what do you actually do? Do you "break them up?" I'll maybe come back to that. I think it's an intellectually lazy force of habit to assume that breakup is somehow the only tool you have. There are a bunch of reasons why it's actually a very bad way of thinking about it. If you think Google, Apple, Facebook, and Amazon are huge problems and something has to be done about them, it doesn't follow that breakup is the thing that would actually have any effect or that would be a meaningful solution. You can pull those three questions apart: Are they invulnerable? Do you have to do something? And what specifically would you do? To the first point, in a funny way, this kind of reminds me of what we were talking about earlier around job creation. If you were sitting in 1980 and you said, 'oh my God, these PCs are going to get rid of all of these typesetting jobs,' you would not have been able to predict that there would be an explosion in the number of graphic designers. It's always easy to see the jobs that are going to go away. It's always difficult to see the jobs that are going to get created.
The problem with this is that, on the one hand, you can sit and say, well, there will be new jobs, but I can't tell you what they're going to be, just have faith that there will be new jobs. That's kind of an irritating, unfalsifiable assertion. But the counter to that is to assert that this process that has been going on for the last 250 years is just going to stop right now. That's also a problematic assumption. It's probably easier to take the empirical model that says there are always new jobs, but we can't predict what they are. You need a really good reason as to why that process is going to stop. You don't need a really good explanation as to what the new jobs are going to be; you need to show why this job creation is over. I think the same point applies to tech companies. Let me unpack this a little bit more. There are a couple of phases of market creation. There's a phase where there's a bunch of people fighting it out, and it might look like somebody has won, but then they fall behind and get overtaken, and then one company wins. This is like when people said that Yahoo! or Myspace would be eternal, or like looking at Lotus and saying Lotus will be eternal, and actually Microsoft Office wins, Facebook wins, Google wins, and so on. That's the combat phase. Then there's a phase where the market has been established, it's mature, and you've got one or two winners and they're big and solid. How do you overturn those guys? That's a very different conversation from Facebook overturning Myspace; that was when the market was immature. When the market is mature, it's a different conversation. What's happened historically is not that somebody overturned the winner, but that the whole market kind of became irrelevant, or it ceased to be the focus of dominance within the broader tech industry. IBM's mainframe business grew like five or 10 times in terms of installed computing base from 2000 to 2010.
They sold way more mainframes, and this is five or 10 years after everyone would have said mainframes are dead, that mainframes are those things from the '60s that have disappeared. The mainframe business is still there. In the UK, the value-added tax system runs on DEC, on Digital Equipment Corporation computers. DEC hasn't existed for 15 years and it's still running on those computers. Mainframes are still around, and IBM continued for 20 years to have a great business around them. The same thing with PCs. Microsoft won PCs; then first the web and then mobile removed the whole basis of Microsoft's dominance in the computing industry. The web means no one writes Windows applications anymore, and then smartphones, iOS, and Android mean Windows PCs are no longer the center of the creation of computing either. Microsoft has still got a great business. There are still one and a quarter billion PCs out there, maybe a billion of them running Office, and Microsoft is shifting to a services subscription. But Microsoft is the new IBM (not the new IBM in the sense that IBM is now in trouble, but a Big Tech company that has a big services business and makes lots of money) and no one's afraid of Microsoft anymore, just as nobody was afraid of IBM in the 2000s, even though they were still a big business. What happens is, you build your castle and nobody gets in, but then the river changes course. Your castle is now just kind of off in the middle of the plains somewhere, and people can see it, but no one really cares. You built your castle on the side of the Rhine and you've been charging a toll on every boat that goes past for the last 500 years, and then the Rhine changes course and your castle is still there, but you're not getting any tolls anymore. That's what happened to IBM, and it's what happened to Microsoft.
It seems inevitable, in the same way that job creation is inevitable, that there will be some new fundamental trend like the web, like mobile, like social, that comes along and moves the marketplace somewhere else and moves the questions to some other place.
How much of your time do you spend figuring out what’s going to change the course of the river?
Quite a lot. There are periods in time when you're in the vertical part of the S-curve: five or six years ago, the smartphone was the thing. Ten or 12 years ago, we were trying to work out what the thing was, and the smartphone actually wasn't really on the list; no one was waiting for the iPhone before it appeared. Then you think, 'okay, this is the thing, it's exploding.' Today, somewhere between three and a half and four billion people have got a smartphone, so that's the thing. Now we think about what's the next big thing? What will that mean? It might come from some completely different place. Microsoft was freaked out by open source and Linux, but Linux didn't affect the desktop. There was a whole thing that Linux was going to take over Windows on the desktop. No, that had zero impact on the desktop. What changed was that the web made it so that no one writes Windows applications any more. The same thing now: is it cloud? Should we think about cloud, machine learning, and cryptocurrency? The conversation just becomes completely different. We're not talking about the fundamental levers that that company had. Those levers are about something that ceases to be important.
Do you think there’s a chance that the next big thing is something that you’ve never written a word about?
On what timeline? On a 10-year timeline, I'm 100% certain it will be something I haven't written about. Nobody was really paying much attention to machine learning before about 2012. In 2012, if you were a computer science PhD and you said you wanted to work on neural networks, that was a dumb idea from the '80s that had never worked and you were ruining your career. Working on VR was another dumb idea from the '80s that had never worked, and it started working because of Moore's Law, apart from anything else. And cryptocurrency? Six or seven years ago it was more or less invisible except to a tiny number of people. I had not really thought about cryptocurrency at all before I joined a16z. Yes, it's a truism that there will be new fundamental things. To go back to the super high level here, one of the ways that one hears antitrust people talking about this is by drawing comparisons with railways and with Standard Oil. I think there are two problems with this: a specific one and a general one. The specific problem is that what Standard Oil did was own the refineries and the pipelines and the gas stations, so rival gas stations couldn't get any gas, rival refineries couldn't get their product to market, and rival pipelines couldn't either. That's bundling. Google and Facebook and Amazon don't do that. They're actually much more like Walmart, Amazon most obviously. How would you break up Walmart? What would you do? You can't break it up geographically; that would just create local dominance. You could split the groceries from the clothes, but that doesn't make any kind of sense.
Apparently you split Zappos from Amazon and Waze from Google.
Which is trivial. They're totally peripheral, unimportant businesses whose removal would have no effect at all on the market dominance. This is kind of the problem, and it's what I said in my opening response to your question: you can answer yes or no to 'is the dominance permanent?', but that doesn't change the question of whether you should regulate. Then when you say, 'well, what should you regulate? What would we do?', it doesn't follow that breakup is the only thing, and there are a bunch of reasons why breakup per se is actually a really bad way of thinking about what would have any meaningful effect on these companies. There was a coherent reason why breaking up Standard Oil made sense. It's much harder to apply that argument to Walmart or Amazon. I think the other point here is that oil was a thing for 100 years and railways were a thing for at least 50 years before cars came along and then aircraft. Microsoft basically achieved dominance in PCs by the early 1990s; Windows 95 kind of put a seal on the victory. You could date it a little earlier and say they had won by the late '80s. Netscape launches at just the same point, so the new thing that makes their dominance irrelevant is already in the market by the time they've won. Microsoft's dominance in tech, the period in which everyone in tech is afraid of Microsoft, lasts maybe 10 years, not 50 years or 100 years. Look at the speed of the cycle here, and then compare that with how long it takes to get an antitrust action through. You've seen this with the EU's attempts to intervene with Google: the EU is fining Google for products that they launch, shut down, replace, shut down the replacement of, and then re-launch again, to the point that nobody at Google can remember the product they are being fined for. That's obviously an execution question, but I think there's a deeper point here, which is that there is a presumption that Google's won and that's it. No. This is a much faster-moving industry than that.
Finally, do you think that innovation is being suppressed by these companies buying up smaller companies with new technologies, who are their future competitors? It's a theory I hear a lot these days. Do you think there's anything to it, or is it the case that it's actually good for new companies because it provides an off-ramp for their investors to get paid out? Any thoughts there before we go?
I don't find this argument particularly convincing. Yes, tech company exits are part of the engine of company creation. You need to have exits in the system; that's how it works. I don't think we see very many companies where you think, 'oh no, that would have been a great idea, but Facebook squashed it.' I really don't see a kind of stifling effect on new company creation. Quite the opposite. The irony here is that the battle to produce cloud services in the form of AWS and Google Cloud, and the race for them to create new machine learning capabilities and open source them, means that there are vastly more companies being created and it's vastly cheaper to create a company. Think about Instagram. Instagram had seven or eight people when Facebook bought it? And they had 50, 60, 70 million users, something like that? So the cost of creating a company is vastly lower and the ability to innovate, the ability to create new businesses, is vastly greater, and a lot of that is because of the kind of platform capability that you get from the App Store, from AWS, from Google Cloud: the distribution and the payments and the open source software. There's a generalized point here, that part of the reason tech activity has accelerated is because it's standing on the shoulders of giants. You don't have to build the primitives yourself. You don't have to create the building blocks yourself every time you found a company. You can just pick them up. There's a huge amount of company creation happening, in large part, as I said, because of the existence of these platforms. Because they're platforms. You can build on them.
© 2019 American Enterprise Institute