A public policy blog from AEI
In his new book, WTF?: What’s the Future and Why It’s Up to Us, Tim O’Reilly, founder and CEO of O’Reilly Media, argues that Silicon Valley and the innovation it fosters can be either a fount of amazement or a source of dismay. The direction technology takes our society is ultimately up to us: the policymakers and the public they represent. He joined the podcast to discuss how Americans should respond to the coming changes, and whether our government is up to the task.
Somewhere in the middle of the book there’s a nice passage that jumped out at me. You wrote: “When I was a kid, I read a science fiction novel a day for a year. And for so long the future was a disappointment to me. We achieved so much less than I hoped. Yet today I see progress toward many of my youthful dreams.” So that’s what a techno-optimist sounds like.
You’re in Silicon Valley; I’m here in Washington, DC, and I don’t sense much techno-optimism out here. What I sense is a lot of concern about where technology is taking us — whether it’s privacy, job loss, too much political power, fears that AI is going to kill us all, or worries that these big platforms are influencing the political system in ways we might not want. But when I talk to people in Silicon Valley, obviously the feeling is much different. So why do you think there’s so much negativity, and do you think your techno-optimism is out of style?
Well first of all, the title of the book implies two things: WTF can be an expression of amazement and delight, and it can be an expression of amazement and dismay. And we do see both with technology. That optimistic passage is in the middle of an arc of storytelling in which I am trying to say: what do we learn from great technology platforms? And we learn, first of all, that the innovation which gives rise to them typically begins with acts of generosity. We also see that in economics; think about the post–World War II boom that began with the investments we made in people — the Marshall Plan rebuilding our former enemies, sending all the returning GIs back to school — in total contrast to what happened after World War I, which led to decades of devastation.
And so that’s sort of the first thing: that generosity matters to platforms as well. And as the platforms become extractive — as Microsoft did, as maybe Facebook and Google are starting to be now — they start to lose their vitality and they get pushback. But there’s also a big, important thing that we learn from platforms, particularly today: the fact that they are designed; in some ways, they are designed economies. And we have been coming out of a period in which we kind of act as if the market is a natural phenomenon.
I love that Mariana Mazzucato wrote, “Markets are outcomes.” We set rules, and some of the outcomes are intended and some are unintended. And we’re in this wonderful teachable moment right now with technology, it seems to me; many of the things that we’re afraid of are actually opportunities to rethink the world.
So just take Facebook, for example. We can see that Facebook’s algorithms were well-meant; they basically said, “We’re going to show people what they want: the things their friends like, we’re going to give them more of; the things they re-share, we’re going to give them more of; the things they comment on, we’re going to give them more of.” And then we saw that this actually amplified hyper-partisanship. They didn’t mean to do that, and we expect them to try to fix it now.
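The feedback loop described here can be sketched in a few lines of code. This is a hypothetical engagement-weighted ranker; the weights and post fields are invented for illustration and are not Facebook’s actual system:

```python
# Hypothetical engagement-weighted feed ranker, illustrating the loop
# O'Reilly describes: posts that already draw likes, re-shares, and
# comments get shown more, which earns them still more engagement.
# The weights below are invented for illustration.

def engagement_score(post, w_like=1.0, w_share=3.0, w_comment=2.0):
    """Score a post by the interactions it has already received."""
    return (w_like * post["likes"]
            + w_share * post["shares"]
            + w_comment * post["comments"])

def rank_feed(posts):
    """Order the feed so the most-engaged posts appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm-news", "likes": 40, "shares": 1, "comments": 5},
    {"id": "hot-take", "likes": 30, "shares": 20, "comments": 25},
]
feed = rank_feed(posts)
# "hot-take" outranks "calm-news" despite fewer likes (140.0 vs. 53.0),
# because shares and comments are weighted more heavily.
```

Run on the two invented posts, the more divisive post wins the top slot even though it has fewer likes — which is exactly the amplification effect the passage describes.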
Now, we didn’t expect that our economic paradigm was going to create an opioid epidemic, or that it was going to lead to outsourcing that would hollow out our economy and get rid of jobs. So why aren’t we trying to fix that? Why are we still accepting that paradigm? The expectation of perfectibility that we bring to technology, I think, is something we need to apply to public policy.
So there are a lot of ideas floating around. You mentioned specifically these big platform companies Facebook, Google, Microsoft, Apple, Amazon — I think The New York Times calls them the “Frightful Five” — so who is going to fix them?
In a good chunk of your book, you talk about these companies. Do you feel like the people you talk to in Washington have a deep understanding of what makes these companies different from, say, GM? Are policymakers even close to being able to fix these companies, whatever those fixes mean?
Not at all, and in fact that’s one of the reasons I wrote the book. I’m trying to educate people about how these things work. I tell a story of a German economics minister who invited me to speak at a conference on the digitalization of the economy, and we had a private lunch afterwards. He basically said, “Oh, the only reason that Uber works is because they don’t have to follow the rules,” and I said, “Have you ever ridden in an Uber?” He said, “No, I have my own driver.” And in my mind, I go, “How clueless can you be?”
I remember the same thing from back in the Google Books protests, where I was arguing with some lobbyist from the publishing industry who had never used Google. People in government have got to start using this technology; they’ve got to understand it, because the government is an essential counterweight to companies. But you can’t come at it with a 19th-century or even a 20th-century understanding of how the world works and try to regulate how the 21st century works.
Uber gets mentioned quite a bit in the book. I think people just don’t understand the business model, because I often hear, “Well, cab companies just have to get an app too, like Uber. Then they can compete, and who knows, maybe Uber should have to license out their app in some way.”
But I’m pretty sure that cab companies are designed completely differently — for one, they have a fixed number of cabs; they can’t just surge supply all of a sudden when traffic is bad and it’s raining out. They are constructed completely differently.
That’s right. The very thing they hate most about Uber or Lyft is the key to the business model. I was just having breakfast with Paul Romer, and we were talking about how we have to reinvent work. These are amazing new technologies that let us marshal human resources to solve problems in new ways, and we’re using all these old, tired policy recipes. One of the things I talk about at length in the book is the way that Uber and Lyft teach us that the old model of, for example, W-2 versus 1099 employees doesn’t match work anymore, because even the W-2 jobs have been hacked.
The scheduling algorithms they use at the Gap or Walmart or McDonald’s are designed to make sure that people don’t get full-time work, so that the companies don’t have to pay benefits. So this idea that somehow the W-2 job is better than the 1099 job — why aren’t we saying, “Well, if we can track and marshal all these people with a networked algorithm, why can’t we do their benefits in the same way?” Portable benefits is the policy idea that’s implied here.
I want to go back to your point that policymakers are not quite getting it. We have people at AEI, and different people from Silicon Valley coming out, and they seem so sure, for instance, that autonomous vehicles are on their way and that we need to be thinking about policies: what safety nets look like, regulation, and all the second- and third-order effects.
Yet I remember sitting on a panel with a US senator talking about vehicles, and the senator said, “You know, I just don’t get it. I mean, people love their cars, people love their pickup trucks.” This senator made it sound like I was talking about building starships and warp engines, something just fantastical. To what extent do you get that sort of feeling from policymakers?
Oh, absolutely, there’s this huge lag among policymakers. Now, there are people who understand the future and who are trying to shape policy, but they are few and far between. And the thing that’s really interesting about this moment, I think, is that we have an opportunity to break some of the old logjams, where ideas travel in lockstep, and start to forge a new coalition of people who are thinking differently about the future — who understand, for example, that markets are powerful, but that the way we have designed markets is actually hostile to humans.
I have an extended trope in the book about how the financial markets are the first rogue AI. Effectively, we have told them what to do. Google says optimize for relevance; Facebook says optimize for engagement. Our financial markets — what do we tell them to optimize for? Optimize for corporate profits; treat people as a cost to be eliminated. So are we surprised at what’s happened in the economy? What if we gave that algorithm different rules?
You talk a bit about public policy in the book, as well as ideas about how to deal with technology — whether it’s regulation or updating and modernizing our safety net. Do you think those ideas fall neatly into the left-right boxes that everyone is dying to put them into?
Absolutely not. I think we need to reinvent politics and policy for the 21st century in the same way that we expect our technologists to reinvent things. We so often have a kind of framing blindness, where we can only see the world in terms of the familiar. We did connect the taxi cabs in 2005, only three years before Uber came along, and what we got was a screen in the back of the cab showing ads. And then somebody goes, “Wait, we can do this whole new thing: matching people up in real time using their smartphones.”
And that kind of breakthrough, which we see periodically — with the Amazon Echo, say, where suddenly a speaker that’s listening to you all the time gives you access to a new world, or with self-driving cars — shows that we could do the world differently, so let’s do policy differently too. And the biggest change, first of all, is for policymakers to realize that the world is changing, that we can do it differently, that it’s subject to our control. But that control is going to require change, deep change.
We basically go through periodic waves in which we rethink economic policy. Starting with the founding of this nation, we decided we didn’t want to be ruled by a king anymore. George Washington didn’t declare himself king of America after we won the revolution. George III was apparently quite surprised by that. But think about the New Deal with Franklin Roosevelt — there were all kinds of new ideas that were expressed. And I feel like we are at the point where the old ideas are clearly breaking, but we haven’t yet had the courage to say, “Let’s take up something bold and new, and rethink from the ground up how we will build a better society.”
Are you sure we’re going to need bold, innovative ideas? I mean, the easiest thing to say is that the future will be more or less like the past — that it’s really never different, that we always think it’s going to be different, that we’ve had disruptive technology before but things have worked out. Granted, we had to educate people better, but eventually people found jobs, we were able to find things for people to do, and eventually everybody was better off.
Economists are generally far more likely to agree with that scenario than technologists. Is that scenario wrong — that it’s all just going to work out? Or do we need big, bold, sweeping proposals like universal basic income?
No, I don’t think we need that. When I say rethinking in a big way, I don’t necessarily mean things like universal basic income; that may or may not be a good idea. I mean thinking about, for example, what we are going to invest in education, and what that actually means today.
Better iPads for the students, maybe? It doesn’t seem to go much beyond that.
Yeah, right. One of the fabulous statements that I quote in the book is this one from Hal Varian, Google’s chief economist. He said, “If you want to understand the future just look at what rich people do today.”
I remember that, and you said that when people heard it they were aghast: “That sounds terrible; that doesn’t sound very equal.”
But think about it: what are rich people doing? Let’s see. They send their kids to schools with very small class sizes, and they have concierge medicine — and I think both of those things are possible with technology. And we would put lots of people to work if we said, “We want all of our kids to have an environment with access to knowledge, where teachers are mentors. So we’re going to bring in all of these kinds of people, and we’re going to do school for ordinary people the way we do it for rich people.”
What would that look like? Take health care: there are already people working on concierge medicine. All the pieces are there. Paul Farmer is doing this in Haiti, for Christ’s sake, with community health workers. Now imagine a community health worker upskilled with AI, telemedicine, and the equivalent of Google Glass, able to make house calls — because we already know that intervening with heavy users before they show up in the emergency room actually saves the system money.
We could completely rethink the structure of our health care, and instead we’re kind of going, “Well, how are we going to pay for the same bloated, inefficient system, working the way it always has?” And that’s what I mean. Uber didn’t say, “Let’s do away with taxis”; they said, “Let’s figure out how we can do that thing more efficiently.”
How can we do education more efficiently? How do we think about work? How do we deploy people to work on things that need doing when there’s clearly a market failure? Larry Summers reportedly once refuted the efficient market hypothesis by saying, “There are idiots; look around.” I refute the jobless future the same way: there’s work to be done; look around. We’ve got crumbling infrastructure; we have so many things wrong in our society. What’s keeping us from working on them?
The jobless future now seems to be one of the chief concerns people have about where technology is going. But when you talk about rethinking education, rethinking all these things, people say, “Well, listen, you’re two steps ahead. The first thing we need to do is look at those big companies with amazing power and amazing wealth, and we either need to regulate them (whatever that means) or break them up (whatever that means).” And I wonder, if your book came out six months from now, would there be a lot more of that in it, since that seems to be everyone’s concern? It’s as if they don’t even want to hear about the next steps; they are so focused on this as a problem right now — even though these companies are perhaps the only things that have gone well in this economy for the past 10 years.
So, do you want these companies to do something differently on their own, or do you think that we need to either highly regulate them or somehow break them up into Googlettes, like Google North, Google South, and so on?
You know, I don’t actually think they should be broken up. In the book I talk a lot about the responsibilities of the platforms to their ecosystems, and I think there’s an enlightened self-interest there. I believe Goldman Sachs used to call it “long-term greedy”: how do we make things better for the long term as opposed to the short term? These platform companies really do matter in the economy, and they need to understand that it’s not good for them to take too much of the value. They should be thinking, “Am I feeding my ecosystem? Am I continuing to grow it?”
So the job of regulators, for example, might be to measure that. We find in tech that you get what you measure. What if we had new metrics for companies’ contributions to their ecosystems? There’s a concept out of energy accounting, the Sankey diagram, that measures how energy flows through the economy; my son-in-law has done some amazing Sankey diagrams for the entire US. How does value circulate through companies, and to whom? The economics profession has been focused on the idea that we need to incentivize production, and I think we actually need to understand how value circulates and gets redistributed. That, to me, is a clear focus for economic research and policy research, because our fundamental problem today is not the production of more value; it’s the distribution of value, so that it gets to all the right places in the economy, and the distribution of work.
And some of these people are talking about stopping companies from buying small companies that might eventually one day be their competitors, forcing them to license out their patents, somehow share data — is all that stuff, whatever the specifics, sort of in the realm of what you think is acceptable?
Oh, absolutely. And I don’t think we should just look at tech. One story that is entirely analogous and very relevant is InBev, the big beverage company, buying up and vertically integrating the hops market and squeezing out the craft brewers. When I think about the future of the economy, a big part of it is the creative economy: small companies making unique products. It matters less that InBev buys them if InBev says, “We’re going to build a flourishing economy in which there are lots of craft brewers, and we’re going to be a platform to support this creative economy.” That’s great.
But if they say, “Actually, we’re going to squeeze out this small economy, and we’re going to commoditize these guys,” that is user-hostile behavior, economically hostile behavior. And so understanding the distinction between getting bigger as a supportive platform for economic activity and getting bigger as an extractive platform that suppresses economic activity is critical. We need to understand what kinds of things encourage . . .
It’s in the title of the book — “It’s up to us.” So I think, for me, the question is: what can I do to make sure that we have public policy that is supportive of people and not just companies? But if you look at the current political system, where things aren’t getting done, you can easily see the constraints.
So, what are the constraints on businesses acting differently? Because obviously, they’re acting in a certain way; their incentives force them to act in a certain way. How do those incentives change? And why will they behave differently in the future if the past has turned them into a $500 billion company?
I think that one of the big turns where our economy went wrong was when we started listening too much to Michael Jensen and the whole idea of aligning CEO pay with stock market performance; it led to short-termism. I would actually try to rein that in, and there might be some interesting ways to do it. Think about the way retirement plans work, for example: they can’t be top-heavy; you have to get broad participation. So you go, “Yeah, you can give stock to your CEO, but you have to give stock to everybody, and it has to be in a meaningful proportion.”
And right now, Silicon Valley likes to pat itself on the back for its broad-based stock plans, because everybody gets them. But the dirty secret is that every level down you go in the organization, you get a full order of magnitude less. So these plans are incredibly top-heavy. And everyone pats themselves on the back about how much wealth is created — well, it’s created wealth for a much smaller number of people than it could have.
And is that something government must do as a regulatory change? Or will companies adopt these as best practices on their own?
No, I don’t think it’s going to come as best practices. In the same way that we have these Cadillac health plans, we had Cadillac retirement plans, and there was an intervention that said, “No, you actually have to democratize this; you can only have this much difference between the top and the bottom.” If President Trump really meant what he said about helping the middle class and working-class people, instead of just using it as a slogan — if we had a real progressive approach — we would be looking at things like that.
We would be saying, “Okay, how do we create broad-based prosperity? What are the techniques?” And there is so much to learn from tech about that. Google, Facebook, Amazon — they are running constant experiments, informed by data, to improve their products. Yet in policy, we try something, set it going, and maybe 10 years later we go back and revisit it.
I have a chapter in the book about an outcome-based approach to regulation, where we say, “What’s the goal? And how do we get constant data feedback to see if we’re achieving that goal?” As opposed to developing a policy, assuming it’s right, and never actually checking back until much, much later.
I do want to talk more specifically about jobs, since that is the feedback I always get when I talk and write about these topics. We had the economist Daron Acemoglu on the podcast, and we talked about innovation that replaces jobs versus innovation that creates jobs and enables people to do them differently — innovation that creates complex tasks that people can do well for higher pay.
But when I say that, I always get this question from the audience: “How many of us can do complex tasks? How many of us can do all these creative tasks?” Is there a limit? Are we going to be creating all these jobs that a large percentage of the population just can’t do, so that they are the ones who end up playing video games?
I don’t really buy that. First of all, here’s a great example: you may have heard about “the Knowledge,” short for the knowledge of the streets of London. It’s an incredibly complex exam that black-cab drivers would have to train for multiple years to master, like being a human GPS: given this point in London and this other point in London, give the turn-by-turn directions to get from one to the other. And now anyone can do that job, because we have used technology to upskill people to what was formerly a complex task. And guess what: we’re putting a lot more people to work doing it.
Now, imagine how we start doing that in health care. We say, “We’re going to use technology to upskill people so that more people can do things that a doctor can do.” And I go, wow, we can put millions of people to work delivering better health care at lower cost using automation. Or even in a smaller way — although this does not quite fit Daron’s idea of complex tasks — look at what happened when Amazon, from 2014 to 2016, put 45,000 robots in their warehouses. They packed in more products; they didn’t say they were going to do the same thing and just cut costs. They said, “We’re going to do more; we’re going to have more products that we can get out for next-day delivery.” And now they are doing same-day delivery in a lot of different zip codes. As a result, they hired 250,000 more people — not to mention the hundreds of thousands of on-demand delivery drivers driving for Amazon Flex, which nobody talks about and which is actually heading toward being as big as Lyft.
Another frequent topic I write about is the paradox that there seem to be all these unicorns — fast-growing companies and technologies — yet all the official statistics say we’re stuck in a stagnant economy growing at 2% year after year. How do you explain that paradox? Do you think the numbers are missing something? Is the future here but just not widely enough distributed, because the new superstar companies are more advanced than the rest and it’s going to take a while for that technology to diffuse? There seems to be a disconnect there.
Well, first of all, I do buy the superstar-companies idea — that frontier firms actually end up paying people more, and that this does diffuse through the economy. But I think the fundamental reason is that we’ve got a lot of hoarding in our economy. Basically, the divergence between productivity, which has continued to rise, and average household income means that our economy isn’t growing because we haven’t circulated the money enough; people don’t have enough money to spend.
We talk about the lack of aggregate demand. We keep optimizing for the wrong thing, for the old thing: when we had inflation, inflation was eating away at capital, capital had to be preserved, and we needed to reinforce the returns to capital. Now we are awash in capital, and people don’t have enough money. And so I think we need to change the fundamental optimizations; we need to find ways to get more money circulating in the hands of people.
It’s an interesting point. In the book you mention Clayton Christensen, who we’ve also had on the podcast to talk about issues like short-term thinking. At one point, he had been writing that maybe we need to tweak executive pay, maybe we need to tweak the tax code. And I asked him about that, and he said, “I know I wrote that but now I am not sure, maybe I was just thinking about things all wrong, maybe there’s a problem in the business schools where they think we are still sort of in an old economy where the capital is very scarce.” Is the problem just that our mental framework of what the world looks like in 2017 is still sort of stuck in what it looked like in 1977?
Exactly, I think you’ve nailed it. I think that’s absolutely right. And there are so many failures. I feel like we talk as if we want to change, but then we go back to the same old, tired recipes. Take even something like universal basic income. I think it would be really interesting to say, “Okay, let’s look around; let’s make sure that we look at all the work that needs doing.” Much of that work is just underpaid or not paid at all.
We know that we’re going to have more and more aging people; we are going to need a caring economy. So we might say, “Well, we’re going to have a handout.” Again, that might be a perfectly reasonable thing to do, but it might also be reasonable to have a work support program, where we create and fund work that needs doing. We see that there are lots of people in need of care, so we fund jobs in that area, like a works program. We’d have training programs; we’d teach you to be the neighborhood superintendent, so to speak, because people can’t afford to look after their homes. We could teach people the skills of plumbing, or whatever — there are so many things that we could be paying people to learn.
Yeah, I think it’s frustrating that rather than the focus being on workers and work, the focus is on jobs — on preserving that job, that task, in its current form — and then the policy ideas just flow from that. Trump gave a speech in Pittsburgh last year where he talked about coal and steel; he didn’t talk at all about the other things happening in Pittsburgh — that the economy had changed into a more service-oriented economy, with jobs in health care, Uber, Google, and so on. Then he was in Pennsylvania the other day, with another shot at talking about the new economy and the challenges of automation, and instead he spoke about truckers.
Truckers — the one occupation that always gets mentioned as being under threat from autonomous vehicles. He had a group of truckers there, and he just talked about a tax plan that would give them $4,000 or something. He said nothing about where the industry is going or where the jobs are going. If you’re a trucker, your kid will never be a trucker, because those jobs won’t be there. In all these different ways, he ignored it. It’s a backward way of looking at these issues.
I totally agree. And the truckers are such a great example, because the big elephant in the room, the question no one is asking, is: who will own the trucks? The assumption that they will put everyone out of work rests on the assumption that some big company will own them all.
Or that Uber will own all the autonomous vehicle companies.
And I actually tackle this a little bit in the book. If you really understand Uber’s business model, you realize how fatuous some of the statements are, like, “Well, they will get rid of all the driver costs, and the cars will be utilized all the time.” Those two things don’t go together. The essence of their model is that they pass off a lot of the costs to the drivers. So yes, they will reduce those costs with self-driving vehicles, but they will have to have enough cars to meet peak demand — which means those cars will be empty a lot of the time.
And the interesting thing is that this leads in a direction where they will probably end up migrating to an Airbnb kind of model, where individuals own the self-driving cars and provide them to the service, or where fleet companies provide them — but it won’t be Uber. Uber’s expertise is going to be in dispatch.
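The peak-demand point can be made concrete with a little arithmetic. Here is a toy calculation (all numbers invented) of the average utilization of a fleet sized to cover its busiest hour:

```python
# Toy fleet-utilization calculation (all numbers invented), showing why
# "no driver costs" and "cars utilized all the time" don't go together:
# a fleet sized to cover peak demand sits idle whenever demand is lower.

def fleet_utilization(hourly_demand):
    """Average fraction of a peak-sized fleet that is busy."""
    fleet_size = max(hourly_demand)        # must cover the busiest hour
    busy_car_hours = sum(hourly_demand)    # car-hours actually used
    available_car_hours = fleet_size * len(hourly_demand)
    return busy_car_hours / available_car_hours

# Invented 24-hour demand curve: quiet overnight, sharp rush-hour peaks.
demand = [20, 10, 10, 15, 60, 200, 180, 80, 60, 70, 190, 210,
          150, 60, 40, 30, 25, 20, 20, 15, 10, 10, 10, 10]
util = fleet_utilization(demand)
# With these numbers, under a third of the fleet is busy on average.
```

However the demand curve is shaped, as long as it has peaks, a fleet sized for the peak spends most car-hours idle — which is why shifting ownership of the cars to individuals or fleet operators, as suggested above, changes the economics.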
To me, the line they’ve been promulgating — “You should give us a higher valuation, because self-driving cars will make us much more efficient and profitable in the future” — seems like a kind of fake news, rather than a real understanding of the fundamental question. For example, what happens if you design for interoperability of self-driving cars, rather than what Elon Musk is suggesting: that if you have a self-driving Tesla, you can only drive it for a Tesla service?
That does not seem like a sustainable position to me.
No, it doesn’t look like a sustainable position. But it’s the kind of thing that ought to be slapped down, because we ought to be saying: if these cars are interoperable, if they can be owned by individuals, if they can be provided to a service, then they become an asset that people can use as income-producing capital. And that would be an interesting world, at least as a thought experiment: how would we move in that direction, rather than toward one that favors big companies owning all these vehicles while people are cut out of the process?
We started with a big question, now I am going to end with a big question. I mean there is no bigger question than this, take as long as you want to answer it: Where should and where will Amazon put that second headquarters?
Oh boy, I have no idea where they will.
Since I live in Northern Virginia I am biased, but I hope that they come to Northern Virginia and not only raise the price of my house, but also demand massive public transportation upgrades.
But I don’t think they should come here, quite honestly; this is already a very wealthy area. I would like to see them go somewhere it would really make a difference. The challenge, of course, is that they want a ready workforce. I don’t know if you’ve had Jim Bassett on the show, but you really should.
Oh, we have had him at AEI for a panel.
Because his idea — that you have to have a fully developed workforce in place to be truly productive — is, I think, important. So my guess is that it’s going to be in a big university town somewhere, for that level of workers.
But I would actually rather answer a different question. Everyone probably knows that Alphabet’s Sidewalk Labs unit is trying to think about the city of the future; they are now talking about building, I think, someplace outside Toronto. Where should they be building their city of the future? I have a very clear answer to that one, because if you ask who really needs a city of the future, it’s the tens of millions of refugees we have in the world.
They should find the biggest concentrations of refugees who are going to get stuck somewhere — who are not going to be able to go home, who will still be there 20 to 30 years from now, as has often been the case with past refugee migrations. Because here is this amazing opportunity to both do good and actually build the city of the future for the people who need it.
In the book you say that one of the big questions you have to ask is: what are the big problems you want to work on? That, to me, would certainly be a big problem worth working on.
© 2018 American Enterprise Institute