For several years now, we have heard economic commentators proclaim that the United States has reached “full employment.” And yet the economy just kept adding jobs, and the unemployment rate kept falling.
One economist who has consistently opposed that prognosis is Adam Ozimek, and it appears he’s been vindicated. So I invited him on the show to talk unemployment, wage growth, and all things economic policy. We also cover the trade war, inflation, and Dr. Ozimek’s one out-of-the-box idea to boost US economic growth.
Adam Ozimek is an economist at Moody’s Analytics, where he covers labor markets and other aspects of the US economy, and he blogs at Forbes under the name Modeled Behavior. What follows is a lightly-edited transcript of our conversation. You can download the episode by clicking the link below, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.
Over the last couple years, both on your blog and on Twitter, you’ve critiqued some analysts who have been saying for years that we are at full employment. Now it looks like your critiques have been vindicated, because it looks like we have not been at full employment for the last few years. The unemployment rate keeps falling month after month. But with unemployment now below 4 percent, have we reached full employment?
When one looks at the size, scale, and influence of America’s tech titans (companies that jealous Europe would love to have), it’s not surprising to think of them as monopolies. But as competition scholar Nicholas Petit explains in a recent conversation with me, “When you look at what those companies do it seems very different from what the old school textbook monopolist would do.”
They don’t act like fat and happy forever-companies without a competitive care in the world. One reason: they are in cutthroat competition with the other dominant tech titans. As Andreessen Horowitz tech analyst Benedict Evans recently tweeted:
Regardless of your personal preferences around smart screens/speakers for the home, it’s striking that we have 3-4 huge consumer tech companies aggressively competing here. In previous cycles it would’ve been just Microsoft or just a couple of cash-strapped start ups . . . That is, people talk a lot about tech monopolies, but we have four huge and dynamic companies that overlap a lot, and when they do they compete with each other on a level that Microsoft or indeed IBM never really had to face.
Here’s another example. Google is the dominant digital advertising platform in the US with an estimated 37 percent share of digital ad budgets in 2018. Facebook has just shy of 20 percent. Together they have been a strong duopoly. Oh, but here comes Amazon. This from CNBC:
Amazon’s ad business is booming. Some advertisers are moving more than half of the budget they normally spend with Google search to Amazon ads instead, amounting to hundreds of millions of dollars, according to execs at multiple media agencies. . . . Amazon’s growing success could pose a rare threat to Google parent company Alphabet, which generated $95.4 billion in ad revenues last year, 86 percent of its total revenue . . . Nonetheless, Amazon appears to be emerging as the most credible threat to Google’s cash cow advertising business since Facebook conquered mobile advertising beginning shortly after its 2012 IPO. . . . Executives at six media agencies confirmed Amazon is making huge inroads in advertising, supporting the recent eMarketer report that the tech giant has become the third-largest U.S. digital advertising platform behind Google and Facebook. One exec from a large agency said some brands find Google search ads “quaint” and want their budgets moved to Amazon because it directly correlates to sales. About 49 percent of product searches begin on Amazon, according to Survata.
Listen to the two economists who just won Nobel prizes: Tackling climate change is totally compatible with greater prosperity
“There’s no such thing as a free lunch,” the famous and favorite aphorism of Nobel-winning economist Milton Friedman, is sometimes paired with a possible exception. Innovation, particularly technological, appears to be a free lunch because it lets a given output be produced with less input. Or we can produce more with the same amount of capital and labor.
But as new Nobel-winning economist Paul Romer points out, although progress might seem to be free, it isn’t quite. Progress doesn’t just happen effortlessly. “We make progress because of things that people do,” Romer has written. “The models and evidence suggest that the benefits we get when people do the things that produce progress are so large, and the resources that it takes to produce the progress we’ve enjoyed are so small, that the progress seems to be free.” (A great summary of Romer-nomics from the scholar himself: “I think that the best strategy for the government is to invest in people. Attract and train very talented people and then by and large give them the freedom to decide what they want to do.”)
Like his fellow 2018 Nobel recipient William Nordhaus (I highly recommend his essay “Why the Global Warming Skeptics Are Wrong”), Romer thinks there must be a public policy response to climate change and that the response doesn’t need to be economically ruinous. The key is getting started. Put even a low tax on carbon and be seriously committed to raising it gradually but inexorably. Then allow human creativity to do its thing as innovators start thinking of clever ways to avoid the tax. As Romer said at the Nobel press conference: “There will be some tradeoffs, but once we begin to produce [fewer] carbon emissions we’ll be surprised that it wasn’t as hard as it was anticipated.”
And he gave a more expansive explanation of his “nearly free” view in a 2016 blog post:
After all the fear and hand-wringing, once we commit to this kind of tax, progress will continue but in a slightly different and much better direction. It will still seem to be free. Our intuition tells us that solving this problem cannot be so easy. But intuition has also been telling us for two centuries that the price of natural resources has to climb as the rate of resource extraction increases. So what are you going to believe? Your intuition or the logic and the evidence?
The natural world revealed to us by the facts of history could have been different. It might have been the case that discovering new extraction technologies is so difficult that firms invest in these new technologies only when they are sure that the price of the resource will be higher. This is not what the facts show. People kept coming up with innovative extraction technologies even though there was no realistic prospect of higher prices. The innovation was not all that difficult, so they did it anyway.
The lesson from resource prices (and from almost every other domain where we’ve looked carefully) is that small incentives can generate lots of innovation. This means that small changes in incentives will encourage more discoveries that are truly beneficial, such as ones that give people what they want without emitting greenhouse gases. These small changes will also discourage the socially harmful discoveries that keep the price of fossil fuels too low. So, there is no basis for complacent optimism and tolerance of bad status quo policies and lots of logical and empirical justification for conditional optimism.
And the argument against doing this is what, exactly? Why make a one-way, all-or-nothing bet with huge potential downside? We can prevent a possible ecological catastrophe while also having a future of great economic abundance. What’s more, none of this takes into account all the possible positive economic spillovers from this wave of technological change.
When folks finally realized that President Trump’s protectionism was more than just campaign rhetoric, out came the references to Smoot-Hawley and the implication that economic disaster was nigh. But not only has a Second Great Depression not appeared (a mistaken assumption based on an inaccurate analysis of the causes of the first one), the American economy seems to have accelerated. And while the stock market might be higher if there were a resolution to the US-China trade battle, investors don’t seem too concerned.
But as the Goldman Sachs econ team points out, the negative impact of tariffs can take time to play out (bold by me):
Our perspective on the short-run macro effects of higher trade barriers on the US economy has been sanguine. This was an easy call earlier this year, when the tariff announcements were tiny relative to US and global GDP. But is the further escalation of the trade war—we now expect tariffs on all US imports from China—a reason to change that assessment? Our latest analysis says no. Higher tariffs will boost inflation, which will weigh on private-sector real income and (potentially) reinforce the trend toward tighter Fed policy. But these effects look relatively small, and they are likely to be partly offset via market share gains by import-competing US producers. All told, we expect a negligible hit to US GDP of 0.1% or less. . . .
Although the financial markets have largely bought into the view that tariffs are a sideshow in the US macro story, many economists are instinctively more pessimistic. This is probably because we have all learned that tariffs weigh on our standard of living by preventing us from fully exploiting comparative advantage, i.e., by diverting scarce resources into the production of goods we should be importing. And it is tempting to equate a lower long-term living standard with weaker near-term cyclical performance. But this is a fallacy, in our view, because long-term resource allocation and short-term resource utilization are two very different things. In fact, devoting more resources to sectors in which we have no comparative advantage can be quite consistent with a cyclical expansion in the short term, even though it is likely to make us poorer in the long term.
If you believe Kai-Fu Lee — the Beijing-based AI scientist, venture capitalist, and former Googler (he ran Google China) — China has the edge in AI, a case he makes in “AI Superpowers: China, Silicon Valley, and the New World Order.” As Lee sees things, China’s internet economy generates lots of data with few restrictions on hoovering it up. China is also massively investing in the sector. In addition, he praises Chinese-style cutthroat entrepreneurship vs. the softer Silicon Valley version, at least as he sees it. (This interview with McKinsey gives a good sense of Lee’s views.)
Whether or not Lee is correct in evaluating how the two nations stand in relation to each other, it makes sense for US policymakers to assume more could be done to reduce barriers to AI advancement and encourage the technology’s development. And to that end, it’s certainly worth taking a look at “Reducing Entry Barriers in the Development and Application of AI” by Caleb Watney of the R Street Institute. The analysis finds that one goal of policy should be to increase the supply of AI talent and speed AI diffusion by importing more of that talent and allowing companies to better deduct the cost of training AI workers.
On the data side, policymakers should encourage the creation of open datasets and data sharing, both in government and in the private sector, where regulation currently inhibits sharing. It’s also important to “maintain a healthy ecosystem around distributed platforms” such as Amazon Web Services and Google Cloud. Government must “be careful to avoid data-localization laws [and] excessive privacy laws,” as well as antitrust efforts against those tech companies leading the way in AI research and innovation.
For his part, Lee also thinks immigration is key to America’s AI performance, as well as greater government funding of AI research. As to Lee’s forecast, a cautionary note from The Economist:
True, AI represents the new space race, and China and America are set to lead it. But Mr Lee’s comparative analysis of Chinese and Western capitalism suffers (ironically) from a lack of data. China has had only 30 years’ experience of capitalism since Deng Xiaoping’s reforms took hold: not enough to discover whether its no-holds-barred approach is indeed more efficient than a rules-based system of competition. It took several nasty financial disasters in the late 19th and early 20th centuries for the West to constrain its own worst business practices, the better to let its animal spirits flourish. A financial crash may yet temper Chinese hubris. Despite its futuristic theme, Mr Lee’s book fits into a familiar genre of business scare stories. In the 1960s the French were aflutter about “Le Défi Américain” by Jean-Jacques Servan-Schreiber; in the 1980s Americans were paralysed by Ezra Vogel’s “Japan as Number One”. “AI Superpowers” should be taken seriously. But it is not the final word.
Recall the Facebook “hack” by Cambridge Analytica that wasn’t actually a hack. It really wasn’t, at least not in the sense that the firm somehow penetrated Facebook servers. “Breach” really was the better term if one was somehow limited to using a single word to describe how the social media giant allowed a third party to harvest user profile data without consent.
Now “breach” is the wrong word for what’s happening at Google, although it’s certainly getting tossed around. Users of Google+ had some profile data “exposed,” meaning it was potentially accessible by third parties, although that may not have actually happened.
And it certainly won’t be the last exposure. As tech analyst Ben Thompson points out, “. . . the inevitable reality of software’s bugginess, combined with the vast amounts of data collected by Google and Facebook, are that exposures are inevitable.” And I think for the most part people get this and are willing to accept the risk for the benefits they get from Google, Facebook, and other large technology companies.
But shouldn’t Google have quickly notified users of the exposure? That is not at all obvious, at least if you care about unintended consequences. As Wired writer Lily Hay Newman writes:
The episode also brings renewed urgency to conversations about regulating companies to disclose not just data breaches, but exposed data as well. That could have the unintended consequence of discouraging companies from doing aggressive internal system testing and vulnerability analysis to catch exposures proactively, though. Given the high profile of the Google+ incident, it may become a test-case for how Google and other companies might act in similar situations in the future.
It’s also the case that bugs are features of a tech ecosystem where new companies are arising and innovating and experimenting and improving. There will always be vulnerabilities, whether the company is new or long standing. Much as the call to lock down user data through regulation could end up further entrenching incumbents, freakouts over data exposure could impede competition. Again, Thompson: “. . . if the only acceptable way to avoid public censure is the complete absence of bugs (as opposed to the demonstrated exploit of those bugs), the end result will be that only the established and impervious — not from bugs, but from competition — can survive.”
Technological progress invites pushback and resistance. Stable managers and horseshoe makers weren’t fans of the newfangled automobile. More recently, taxicab companies have fought Uber and Lyft across America and across the world. Autonomous vehicles will be no different. Already the Teamsters have lobbied Congress to be cautious when considering bills that might speed adoption of self-driving vehicles.
So good news, then, out of Washington. This from Bloomberg:
The U.S. Transportation Department has given a boost to companies working on automated long-haul trucks, saying an artificial intelligence system could constitute a “driver” under federal trucking rules in a bid to ease barriers to the technology. The Federal Motor Carrier Safety Administration will no longer assume that a commercial vehicle driver is human, according to the Transportation Department’s “Automated Vehicles 3.0” guidance released Thursday. That is an initial step to allow trucks to travel across state lines piloted by an autonomous driving system. The safety regulator also signaled a willingness to overrule states standing in the way of self-driving trucks and is also studying how to amend existing rules to better accommodate self-driving systems.
Interestingly, the DOT document — which broadly addresses autonomy regulation — also makes a point of trying to alleviate concerns that humans will eventually be banned from driving. As Tesla boss Elon Musk has put it, “You can’t have a person driving a two-tonne death machine.” Well, maybe not when there could be a spectacularly safer autonomous alternative. And Bob Lutz, a former top executive and design guru at General Motors, predicts most people will use ride sharing, and the remaining personally-owned vehicles “will no longer be driven by humans because in 15 to 20 years — at the latest — human-driven vehicles will be legislated off the highways. The tipping point will come when 20 to 30 percent of vehicles are fully autonomous. Countries will look at the accident statistics and figure out that human drivers are causing 99.9 percent of the accidents.”
Anyway, this from the DOT:
U.S. DOT embraces the freedom of the open road, which includes the freedom for Americans to drive their own vehicles. We envision an environment in which automated vehicles operate alongside conventional, manually-driven vehicles and other road users. We will protect the ability of consumers to make the mobility choices that best suit their needs. We will support automation technologies that enhance individual freedom by expanding access to safe and independent mobility to people with disabilities and older Americans.
The DOT “envisions” and it “embraces,” but it does not guarantee. Nor should it. While there may always be places where humans can drive, there will almost certainly also be places where, at some point, they are banned. But that is hardly a tomorrow thing, and as autonomous systems become more capable, people’s expectations will adjust. The companies understand it’s a touchy issue and are trying to prevent it from becoming a new front in America’s culture wars.
It wasn’t a bad thing for America when Europe and Japan rebounded economically after World War II. Sure, that meant new global competitors for our companies, but it also meant new markets for our exports, new products for our consumers, and rising living standards for millions of our fellow humans. A better world for more people. So in that sense, I am unbothered by new interactive data from the Center for American Entrepreneurship that finds that the US share of venture capital investment has dropped to 50 percent today from 95 percent in the mid-1990s with half of this decline occurring in the past five years. It’s good that more entrepreneurs with big ideas are getting financed globally. Let a hundred tech hubs bloom amid more research investment, better universities, and a greater effort to cultivate startups. Richard Florida and Ian Hathaway write in a Wall Street Journal op-ed based on that CAE research:
What we are experiencing instead is a collective, multilevel assault on our high-tech dominance from scores of places — principally large, global cities. Venture capital now flows into Shanghai, Beijing and London at a rate that rivals New York and Boston. And other cities, such as Berlin, Paris, Stockholm, Singapore, Bangalore, Delhi, Mumbai and Tel Aviv, are rising fast; they are already on par with Seattle and Austin.
The flipside here is that I want America to remain the world’s economic leader that’s always pushing the technological frontier. And it’s a bad thing if America slowly cedes its leadership — as measured, for example, by these VC numbers — through policymaking self-harm, such as making our tech hubs unaffordable and our nation inhospitable to immigrants who more than ever have good options at home. (For instance: The South China Morning Post reports that the number of Chinese students returning from abroad “has grown by leaps and bounds. In 2017, 608,000 students went abroad and 480,900 returned … a return rate of 79 percent; in 1987, the return rate was about 5 percent, and in 2007 only 30.6 percent.”)
Again, Florida and Hathaway:
For one, we must double down on talent and innovation. We need to continue to invest in our leading universities, to pump out new research and attract the brightest people from around the world. And we must ensure that global talent can stay in the U.S. after graduating. We need to be tearing down walls instead of trying to build them up, by making more visas available for students, skilled workers and foreign-born entrepreneurs.
We also need to bolster the local tech ecosystems that enable homegrown entrepreneurs and startups. And we must relax our overly restrictive zoning and building codes to ensure that innovators, entrepreneurs and creatives can afford to build and launch their new companies in our leading tech hubs.
The first step on the road to recovery is to acknowledge that you have a problem. In just two decades, our lead in tech entrepreneurship has been cut in half, and that erosion is accelerating rapidly. If this continues, the next big thing, or things, will be launched in Shanghai, Bangalore, Berlin or Tel Aviv, not somewhere in America.
The US unemployment rate fell to 3.7 percent in September, the lowest rate since December 1969. That’s even lower than the jobless rate during the 1990s internet and productivity boom. Other bits of good news in the report include decent monthly job growth of 134,000 — probably a depressed number because of Hurricane Florence. With upward revisions to the previous two reports, job gains have averaged 190,000 per month over the past three months. Such gains are consistent with “steady declines in the unemployment rate and solid increases in aggregate household income,” according to Barclays. There was also a 0.3 percent gain in average hourly earnings, a tick higher employment rate, and a 420,000 rise in the household measure of employment that easily outpaced a 150,000 rise in the labor force.
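Why does employment growth outpacing labor force growth pull the jobless rate down? The arithmetic can be sketched in a few lines of Python — the level figures below are illustrative placeholders, not the official BLS totals:

```python
def unemployment_rate(employed: float, labor_force: float) -> float:
    """Unemployment rate: unemployed as a percent of the labor force."""
    unemployed = labor_force - employed
    return 100 * unemployed / labor_force

# Hypothetical starting levels, in thousands of workers.
employed, labor_force = 155_000, 161_000
before = unemployment_rate(employed, labor_force)

# Household employment rises 420k while the labor force rises only 150k,
# so the count of unemployed shrinks and the rate falls.
after = unemployment_rate(employed + 420, labor_force + 150)

print(f"before: {before:.2f}%, after: {after:.2f}%")
```

If the labor force had instead grown faster than employment — say, because discouraged workers resumed their job searches — the same arithmetic would push the rate up even as the economy added jobs.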
But what happens next? One should hope events play out better today than after that 1969 milestone. A Federal Reserve history of the 1970s described it as a “turbulent time” for the American economy. And it sure was. The economy slipped into recession after the employment peak, one of four recessions over the next dozen years. Meanwhile, the inflation rate that had begun creeping up in the mid-1960s would become the Great Inflation and hit 15 percent by 1979. From 1966 to 1982, the stock market fell more than 70 percent in real terms. By decade’s end, America was suffering a “crisis of confidence,” according to President Carter.
Not good stuff. (On Twitter I joked the “3.7 percent unemployment rate is a great omen. Last time it was that low, 1969, what came next was a decade of prosperity and economic stability as America had never before seen.” Not everyone got the joke.)
One way to avoid a repeat is making sure an independent Fed remains intolerant of sustained high levels of inflation, and that the public stays confident the central bank is willing and able to keep doing its job. That matters especially at a time of big fiscal stimulus, when the economy already appears within walking distance of full employment — or at least of the point where constraints on labor supply start biting. (Longer term, government should focus on boosting productivity, which downshifted at the start of the 1970s.) RSM economist Joseph Brusuelas cautions that the low jobless rate “is indicative of what will become a growing problem in the current expansionary cycle: there simply are not enough willing and able workers to meet demand, which will result in bottlenecks in housing, manufacturing and agricultural industrial ecosystems going forward.”
So far at least, this current Fed is taking seriously its inflation-fighting mission. Capital Economics notes that the “recent strength of the economy appears to be pushing Fed officials in an increasingly hawkish direction” and as “the boost from fiscal stimulus fades and rising borrowing costs start to weigh more heavily on rate-sensitive activity, an economic slowdown next year will force the Fed to end the current tightening cycle sooner than officials anticipate.”
Economic growth — and the hope of better things to come — is the religion of the modern world. Yet its prospects have become bleak. In the United States, eighty percent of the population has seen no increase in purchasing power over the last thirty years and the situation is not much better elsewhere.
So argues Daniel Cohen in “The Infinite Desire for Growth,” a whirlwind tour of the history of economic growth, from the early days of civilization to modern times. Drawing on economics, anthropology, and psychology, and thinkers ranging from Rousseau to Keynes and Easterlin, Cohen examines how a future less dependent on material gain might be considered and how, in a culture of competition, individual desires might be better attuned to the greater needs of society. He joined me on the show to discuss his argument.
I’m going to start with the first paragraph of your book, which I think gives a pretty nice summary of where you’re coming from. So, let me just take 30 seconds to read a few sentences.
“Economic growth is the religion of the modern world, the elixir that eases the pain of social conflicts, the promise of indefinite progress. It offers a solution to the everyday drama of human life, to wanting what we don’t have. Sadly, at least in the West, growth is now fleeting, intermittent. It comes and goes, with bust following boom and boom following bust, while an ideal world of steady, inclusive, long-lasting growth fades away.”
I think that sets up where you’re coming from and the central tension in your book. But in the first part of the book, you spend quite a bit of time just outlining how we got from there to here — how we got from a world of no growth to a world of growth. That’s a topic we’ve discussed many times on this podcast, but I want to start out by getting your view: What do you view as the most plausible explanation for why the world was very, very poor 200 years ago and is not very, very poor today?