A public policy blog from AEI
Will artificial intelligence (AI) lead to a jobless economy? Can AI stimulate lasting economic growth? Is anti-competitive behavior by big tech slowing the rate of AI innovation? On this episode, Professor Robert Seamans discusses AI’s impact on the economy, how it will change the nature of work, and whether regulation is needed to keep the AI industry competitive.
Robert Seamans is an associate professor at New York University’s Stern School of Business and a former Senior Economist at the White House Council of Economic Advisers. Most recently, he coauthored the chapter “AI and the Economy” with Jason Furman. What follows is a lightly edited transcript of our conversation. You can download the episode by clicking the link above, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.
PETHOKOUKIS: Occasionally I will talk to groups about technology and its impact on the economy, and I’m pretty optimistic. But then when it comes time for questions, it’s nothing but concerns: worries about jobs, worries about AI creating a surveillance state, concerns that China is going to take the lead in AI and become the world’s dominant power. So it’s far more worries than, “Oh, tell me more about how wonderful artificial intelligence will make the economy and our lives.” Are you a techno-optimist? If so, could you give me the techno-optimist’s case?
SEAMANS: I am a techno-optimist. I get bummed out when I hear many of the stories about robots taking jobs, AI taking jobs, when I hear a lot of pessimism about the effect that AI will have on the economy. I’m pretty optimistic about the positive effects it will have in terms of growth, and we can get into that. I’m fairly positive about the effects it will have on jobs. I think a lot of the stories are about substitution. I think substitution, AI substituting for human labor is actually much more difficult, much harder than people realize. In fact, I think there’s actually a really good case to be made that there are going to be a lot of complementarities. Again, I’m happy to get into all of that, but the short answer is I’m a techno-optimist.
You have a paper helpfully titled “AI and the Economy,” which you’ve co-written with Jason Furman, former Obama administration economic adviser. You yourself were in the Obama administration on the Council of Economic Advisers. Jason has been on the podcast before. In the paper, “AI and the Economy,” you write “artificial intelligence has the potential to dramatically change the economy.” I think the first obvious way — and then we’ll get to jobs — is economic growth, creating a more productive economy. That’s a constant theme in this podcast: We’ve had this productivity slowdown, and if the US economy is going to grow at least as fast, or anywhere close to as fast, in the future as it has in the past, we’re going to have to be more productive, and hopefully AI will be part of that story. We haven’t seen it yet, though. Assuming that AI is going to be as important a technology as you think it is, when are we going to see that AI-driven productivity boom?
This is a great question. I wish I knew the precise answer to that. I do think that we’re going to see a productivity boost from AI. I think there are two major questions: One is how big is that boost going to be, and the other is when is that boost going to happen. In terms of how big the boost will be, one thing you could do — and one thing that we do in the paper — is look at the case of robotics, which has been around longer than AI. Some economists have been able to use data on robots to try to measure the effect of these newer technologies on the economy. What the prior work has found is that it has led to about a 10 percent increase in growth, and so using that as a rough benchmark, I think it’s reasonable to expect that at some point in the future, AI will boost growth by about 10 percent. Now, when in the future? My guess would be over the next 10 years, let’s say, I think we would start to see that. There are others that might think that would happen a little bit quicker. I think there are others that are maybe just generally pessimistic about when it would happen, but let me just say 10 years. I guess the bigger question is why it hasn’t happened yet. There’s a really good paper by Erik Brynjolfsson, an economist at MIT, and coauthors that tries to get at why we haven’t seen any boost yet from AI. The central thesis of their paper is that in order to see this boost, we need to see the complementary investment in firms, or perhaps in training of individuals and things like that, before we can see the boost from AI.
That’s a story that the technology is here — obviously it’s always improving and evolving — but we have good enough AI technology now, and it is just not being used as much as it could be; it is not diffusing through the economy, and there are these bottlenecks, maybe at the company level, where you just don’t have enough people who know how to use it?
Their story is less about that. It’s more about the sort of complementary investments that need to be made. I guess stepping back for a minute though, I don’t think that we are far enough along as we are perhaps led to believe in terms of the stories that we read in popular media.
In the popular media, it certainly seems like AI and robots are about to take all the jobs, that we’re close to sci-fi levels of AI where you have sentient robots, and what we really need to worry about is a Terminator-like scenario.
Yes, don’t worry about the Terminators. We are so far away from that, I don’t think we need to worry about that. In terms of where we are in terms of the science — so I’m not a computer scientist, but I have been looking at some of this as part of some of the research that I’ve been doing. In terms of the advances in the lab, sort of the basic science of AI, you can date that back to not quite even 10 years ago. We’ve seen just a really big increase dating back roughly eight to 10 years ago in terms of advances in lab settings, and we are only just beginning to see a lot of that be commercialized, sort of turned into commercial applications. One of the things that we do in the paper is we a) try to describe that and then b) also try to provide a little bit of evidence about the speed with which this is starting to be commercialized. In terms of commercial application of AI, you can roughly date back the beginnings of it to maybe four or five years ago when you start seeing a big increase in VC investment in this area. There’s a lot of VC investment in startups in this area, obviously also a lot of investment by large established tech firms like Google, Amazon, and others. But the actual commercial applications are really just beginning to come online. So I think while there’s a lot of narrative out there about what AI can do, fears about what it can do — maybe excitement in some cases about what it can do — the actual commercial applications are really just beginning to roll out.
You mentioned comparing it to robotics as another technology with somewhat similar features and the ability to both replace work as well as complement human labor. Do you think though that AI could be bigger? It seems like it could be more, it could go more places and do more things than robots. So when you talk about 10 percent boost in growth, do you mean that thanks to AI an economy that was growing at 2 percent could grow at 2.2 percent?
That is correct, 2.2 percent.
That may seem insignificant to people, but there’s a lot of other policy interventions that people talk a lot about that don’t give that kind of growth. So again, do you think AI could be bigger than that? Is that a conservative estimate? I don’t want to put words in your mouth.
No, so I use that 10 percent estimate because it’s the best estimate that I’ve got. I think there’s a lot of noise around that, and so where’s the noise coming from? I think it’s coming from two places. On the one hand, I think that you’re right that you might see AI applications across more industries, across more sectors of the economy compared to robots, which we primarily see in manufacturing, and even in manufacturing we primarily see it in the auto sector. I think it’s reasonable to expect that we’ll see AI across more sectors of the economy. You might expect that to lead to a larger overall boost, but on the other hand, it strikes me that a lot of the applications that AI will get used for are so much smaller in magnitude relative to robotics. When manufacturing establishments put robotics into place they’re making a really large capital investment. They tend to rearrange and rethink the production process that they’re doing and all of that leads to this effect. So it strikes me that it’s a larger effect that you’re getting in robotics, but in a smaller sector of the economy. With AI, we’re going to see it across a much wider swath of the economy, but it’s not clear to me that you’re going to be seeing really big changes.
Where would we put you on a scale of optimism about the economic impact? How significant would it be on a scale of being on the low end, maybe Robert Gordon, and on the high-end, me? I’m very hopeful. But compared to Robert Gordon, you seem more optimistic about the impact.
I’m more optimistic than him. I guess I’d like to get your take. Do you think that 10 percent is too low?
Well, I think that very well may be a good first cut, and I think most of these estimates, when you make them, you should probably make them with the technology as it currently exists. Of course, when I talk to people from Silicon Valley, they infuse me with a great deal of optimism about where the technology is leading, and I would hope that eventually it could be more than that. Though, of course, as a policymaker, I wouldn’t count on that. As I understand it — and correct me if I’ve gotten this badly wrong — when it comes to adopting AI, we will see a race between the degree to which the technology allows jobs to be replaced and the extent to which it creates new jobs and new things for people to do, and maybe it will even allow people to do those old things more efficiently. Do I have that roughly right? How would you describe it?
The way I like to think about it is sort of three categories, and we tend to focus on what I’ll call category one and category three and we sort of forget a lot about category two. Category one is where AI replaces an existing job. Category three is where we have some brand new job created that didn’t exist before. I think we’ll have both of those two categories. We’ll have some new jobs, or if you will, new occupations that didn’t exist before. We will have some of those. We will also have some existing occupations that maybe get entirely automated by AI, that get automated away, and so those occupations disappear. I think there’s going to be very little though actually happening in both of those categories. I think, rather, in many existing occupations, the nature of the job or the nature of the sort of tasks that a human does in those occupations will change. That’s really the story of all the complementarities that you’re going to see between AI and what it is that people do in these jobs. I think it’s useful to give a few examples. I’m a professor, so I show up somewhere in an occupational code as a professor and there’s some sort of description of what it is that a professor does. One of the things that I used to spend a bit of time doing was looking over the assignments that I would get and worrying a little bit about whether there was any plagiarism going on. It turns out that now there’s an AI application that helps you with this. There’s something called Turnitin and what Turnitin does is it takes all of the papers that my students hand in and it compares them to each other using some sophisticated natural language processing, and it also compares them to all the prior papers that this app Turnitin has received in the past. Then what it does for me is it flags — and I can set different thresholds for this — the areas where it looks like there might be some plagiarism. 
And I should say, of course, as a professor at NYU that almost never happens, but sometimes it does. So it’s basically a dumb machine that comes up with a simple prediction about whether there’s some plagiarism that has happened or not. It turns out that the cases that it flags are typically where students have copied and pasted the actual questions that I’ve asked and so those pieces of the text look really similar to each other, so you wouldn’t want a machine to dock the student for that. What the machine can do really well is predict the probability of something happening, but then you want to serve that up to a human who can use common sense and judgment, knowledge of stuff that has happened in the past, to make some decision about what to do. This app has made my life easier, it frees my time up to do a little bit more research, maybe to meet with students a little bit more, and so that’s the complementary boost I think you’ll see in a lot of cases. That’s that middle category, what I was calling category two. I think what we’re going to see is a whole lot of stuff like that across many sectors of the economy where AI helps us in many different ways, ways that we can’t entirely foresee right now. You mentioned the startups or the folks in Silicon Valley that you’re talking with, it’s basically those use cases that they’re thinking really hard about.
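For readers curious about the mechanics, here is a toy sketch of the flag-and-review pattern Seamans describes. This is not Turnitin’s actual algorithm — real systems use much more sophisticated natural language processing at far larger scale — but it illustrates the core idea: a machine scores document similarity against a configurable threshold and surfaces candidate matches, while the final judgment stays with a human.

```python
from itertools import combinations

def shingles(text, k=3):
    """Break a document into its set of overlapping k-word 'shingles'."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity between the shingle sets of two documents."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_pairs(papers, threshold=0.5):
    """Return (i, j, score) for submission pairs whose overlap exceeds the
    threshold. The output is a list for a human reviewer, not a verdict."""
    flagged = []
    for i, j in combinations(range(len(papers)), 2):
        score = similarity(papers[i], papers[j])
        if score >= threshold:
            flagged.append((i, j, round(score, 2)))
    return flagged
```

Note the threshold parameter plays the role Seamans mentions: the professor, not the machine, decides how sensitive the flagging should be, and shared boilerplate (like copied assignment questions) is exactly why the flagged pairs still need human review.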
It sounds like the scenario which sells a lot of books, that robots take all the jobs and there’s five people who have jobs and their job is owning the robots doing all the work — mass unemployment. It seems like you’re skeptical about that.
I’m skeptical that AI will lead to mass unemployment. That’s not to say we don’t have issues in the labor market in our country.
Well, I want to get to that, but one more little forecast. Would it surprise you if, as these technologies advance and permeate society further, they resulted in somewhat higher structural unemployment or non-work — higher unemployment or lower labor force participation than we’ve seen in the past?
I don’t think so. I think AI is going to have very little bearing on net employment. One thing I want to point out is I think it is going to vary a bit by sector, so I think there are going to be probably some sectors where you’ll see AI automating away jobs. On a sector specific basis, you might see some of this, but on net, no, I don’t think that AI will have much effect in terms of increasing unemployment.
Do you think even right now we are seeing impacts? Whether it’s on sectors or particular kinds of workers or age groups, do you think we’re already seeing any impact?
Yeah, so the sectors that I worry about are more sectors like finance, the legal sector, some of those sectors where in the past you had analysts that performed really important tasks. I’m thinking about investment banking, or the research function for sales and trading where you had analysts that performed these roles but where the types of things that they were doing, at the end of the day, it turns out are probably relatively easy to automate. I think it’s those sectors where you’re going to see more of an effect from AI. I think there’ll probably be some newer jobs that get created there. Let me step back for a minute. What I think will happen there is there’s going to be an increase in the use of computer scientists to create pretty sophisticated models about what will happen in different economies, maybe trading strategies around that, and so you’ll see some sort of newer types of jobs in that specific sector created. That will do away with a lot of the research function that maybe some of these analysts did in the past. But I think you’ll also see new jobs emerge around interpreting and explaining the results that come out of the sophisticated trading algorithms that you have. Because you need to somehow describe what that output is to the senior manager who is in charge of a trading strategy at an investment bank. The reason for that is that we’re talking about a lot of money that is perhaps involved in a specific trade, and you’d want to have a human in the loop to make sure that trade made sense.
And to comfort the final decision maker that this isn’t just some black box and to some degree they can understand the interpretation?
I think that’s right. The interpretation piece is key. I think that’s going to be a new occupation that we’ll see: There’ll be, sort of, AI interpreters.
It seems to me that news coverage has moved from the jobs issue to another AI race — not the race with robots, but now the race with China. It seems now we are very worried that China is going to win the AI race and have some sort of permanent economic and military and geopolitical advantage over the United States. Again, this is the second time I’ve used the race metaphor. Is that the best way to think about it, that you have countries racing against each other to become the leader in AI? I have problems with that. To me it seems a little either/or.
Here’s the way I think about it. I would leave it to a scientist, like a computer scientist, to worry about pushing the boundaries of AI and what AI can do. Maybe China will beat out the US in terms of the basic science. I think what I care more about are the commercial applications of AI. It is not at all clear to me that the Chinese approach would somehow lead to the best commercial applications, even if they are more advanced than the US in terms of what AI can do. It’s the commercial application piece that I think is the more important piece.
I have heard just the opposite argument made that the US may do great research, but China will actually do a better job commercializing it. Again, I’m not fundamentally sure that I understand this argument. I don’t know if it is partly a big data argument, that they have so much more data, so they’ll actually be able to take the basic science and use it better and create better products. I’m not so sure about that, do you think how our economy is structured is more likely to make us competitive?
I do. I think that this is something that this administration, as well as prior administrations, has gotten right, which is that to a certain extent you want the market to sort out which applications are best and which ones aren’t. I like that approach. It is an approach that has worked well in the past, and I don’t see a reason why it wouldn’t work well again in this case. That being said, I think the one other issue that comes up when we talk about a race with China or with any other country — and I think it is a really important issue to bring up as it pertains to AI and science in general — is the issue around immigration. I don’t know if we want to go there, and that’s maybe not quite the subject of this podcast, but I think if we really do want to make sure that we are leaders in AI, then we need to think really hard about the ways in which our immigration policy can make sure that happens.
What would be the key aspect there of the immigration policy?
We want to make sure that the best and brightest scientists and the best and brightest entrepreneurs, to the extent that they aren’t actually born in the US, that they want to move to the US and that we allow them to move to the US. We want to make sure that our immigration policy reflects that.
Another issue that I see a lot more coverage of now is concern about the Big Tech companies. Last week, as we’re recording this, Elizabeth Warren put out a plan to break some of them up to some extent. The common argument against these folks is that they are missing the point, that we’ve heard this before: There are these companies which seem super dominant, even in the technology field, and then it turns out they get usurped and disrupted by somebody else — Yahoo!, Myspace, Nokia — so we should just sort of let the market work its magic, and these supposedly forever companies will face challengers. They might still be around, but there will be new products and new services competing with them and everything will be fine. Their response is that something has changed: AI, and the data that’s used to fuel AI, is now controlled by those companies. So the Big Tech companies will have the best AI, and therefore these companies will be sort of unassailable, and therefore something fundamentally different is going on now. How much truth is there to that argument?
This is something that I think about a fair bit. It’s something that I worry about. I would worry a lot if it was the case that startups had a really hard time entering and competing against the large tech companies. I worry about that. There’s maybe a little bit of evidence that some of that has been happening. There’s not a ton of evidence, though.
I think something you hear about is that, one, you can’t compete with them because they have all the data and they have the best AI. Then the other half of that is there’s a “kill zone”: They scoop up these small companies, and therefore you’re hurting innovation because those companies never get big enough to compete, or their technology gets subsumed into these big companies. Again, anecdotally people can point to cases, but I’m not sure there’s actual evidence — systematic, empirical evidence — of that.
It certainly seems like there are a lot of startups that have entered, perhaps part of the reason why they’ve entered is they are hoping to be bought up by Google or Facebook, Microsoft, or one of the other larger tech companies. One of the things that I’ve been doing is surveying AI-enabled startups to try to get a sense of how much access to data matters for these startup firms or how much access to AI or the computer scientists that can do AI programming matters. It turns out that there are a lot of ways to get data, and so this doesn’t seem like it’s a really big barrier for these startup firms. It seems like one of the ways — and this is anecdotal at this point, but we’re trying to dig in a little bit on this — that firms have been trying to get around the lack of access to data is by using different types of algorithms. For example, they use what are called Bayesian algorithms that rely on less data, as a way to try to get around a potential lack of access to data. It strikes me that there might be some barriers to entry that startups have, but these startups are pretty clever and they are finding ways around it, so I’m not that worried about lack of entry from startups. That sort of partially answers your question. There’s perhaps another piece of your question, which is around how worried should we be about these large tech firms and to what extent are they stifling innovation?
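One way to read Seamans’s point about “Bayesian algorithms that rely on less data”: A model with an informative prior can produce a stable estimate from a handful of observations, where a purely data-driven estimate swings wildly. Here is a minimal, purely illustrative Beta-Bernoulli sketch — not any particular startup’s method — where the prior stands in for domain knowledge that substitutes for a large dataset.

```python
def beta_posterior_mean(successes, failures, prior_a=2.0, prior_b=2.0):
    """Posterior mean of a success rate under a Beta(prior_a, prior_b) prior.

    The prior encodes domain knowledge (here, a weak belief centered at 0.5),
    so even a startup with only a few observations gets a usable estimate
    instead of an extreme one.
    """
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# With just 3 observations (2 successes, 1 failure):
naive = 2 / 3                      # raw frequency, very noisy at this sample size
bayes = beta_posterior_mean(2, 1)  # pulled toward the prior mean of 0.5
```

As more data arrives, the observed counts dominate the prior and the Bayesian estimate converges to the raw frequency — which is exactly why the approach helps most when access to data is the binding constraint.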
It seems like three years ago, we weren’t worried at all. Now, it seems like at least some people are extraordinarily worried that they’ve sort of morphed from being one of the crown jewels of the American economy to one of the biggest problems in the American economy.
It is interesting how quickly it has switched. Just flipping the question on its head, Big Tech companies should be pretty worried about this. It seems to me that they are getting this from both sides of the political spectrum, the progressives that are questioning the role of Big Tech and then the folks in the current administration that are also questioning the role of Big Tech. Across the Atlantic, we also have European regulators that are increasingly looking at Big Tech. Big Tech should be worried. In terms of how worried I am, this is going to sound like a typical academic’s answer. I wish I could come in here, Jim, and just be really forceful on this issue, and I’m not going to be. I think that this is an area where I have some concerns, but I haven’t seen enough evidence to be overly concerned about it. I like the idea that the FTC has created a task force.
A big report just came out in the UK, with Jason Furman as a co-author, about what to do about the large technology companies.
That’s right, so I think it’s useful to scrutinize what has happened, what has been happening to debate about this. I haven’t yet arrived for myself at a very definitive place.
Is there any sort of policy intervention that you’re confident enough would enhance competition but that wouldn’t hurt innovation? One of the things the UK report mentioned is data portability, moving your data or your social graphs from this social media platform to help bootstrap another social media company. Are you comfortable enough with that being something that should be looked hard at, mandating that?
We are starting to do that with banks, and so that’s potentially a lesson that we’re learning from banks: that you can do stuff like that. I think it’s an area to look at. The other lesson that we have from banks is that you don’t have to have the same regulation for all sizes of companies. One of the things that you worry about in the tech sector is you don’t want to come up with a regulation because you are worried about what the Big Tech firms are doing, and then have that regulation somehow negatively impact what startups can do. I think it’s fine to say we can actually have a tiered regulatory system where, if you’re designated — I don’t want to say too big to fail, but a really big tech firm — you could have one set of regulations that’s different than the regulations for small startups. Lawyers, of course, are licking their chops when I say that, but I think it’s okay to think about regulatory solutions that aren’t the same for everybody.
We’re at the very end here, so sort of the last question. Your current automobile, do you think that will be the last car you ever buy that will not have a high level of autonomous technology in it? You must talk to people, so what is your feeling? Right before we went on, I saw a poll saying 70 percent of people now are very frightened of the idea of autonomous technology, but it does seem to be coming despite those kinds of sentiments. How close do you think we are to widely available cars that in some cities, at some times, in some weather conditions, allow you to plug in a destination and then take a nap?
We are already seeing that. Let’s see, within five years, I think most cars being sold will look like that. I think most people will not have those features turned on most of the time. In terms of the future that some people envision, where we have fully autonomous vehicles running around all through the streets and no drivers anymore, I think that’s a total fantasy. Coming back to the jobs point, truck drivers come up a lot. I think we could be at a point in, let’s say, 10 years where long-haul truck driving is automated, but there’s still a driver sitting in the cab, perhaps monitoring things as the truck is moving along at a high speed. I think the role of a driver in a city, that’s not going to change. I would guess that would not change in my lifetime. Being a little more specific, short-haul truck drivers in city environments just do a ton of work that has nothing to do with driving, in terms of loading and unloading and things like that. Even in the driving function that a lot of those folks do in a city environment, there are so many things that pop up that it’s just really hard for me to envision any type of autonomy around that.
© 2019 American Enterprise Institute