The most prominent piece of legislation, sponsored by Reps. Bill Huizenga, R-Mich., and Scott Garrett, R-N.J., is a result of a yearlong series of hearings orchestrated by Hensarling’s committee. It would require the Fed to set interest rates based on something like the Taylor Rule, a formula written by Stanford economist John Taylor, which specifies the appropriate level of interest rates based on the pace of inflation and the gap between actual and potential economic output. The Fed would be required to explain deviations from its rule, although it could change the rule. It would also be subject to audits by the Government Accountability Office.
Would the Taylor Rule be the proper rule for the Fed? Economist Paul Dales of Capital Economics outlines his doubts (as well as nicely explaining the Taylor Rule):
John Taylor’s original rule stated that the nominal interest rate should equal the neutral real interest rate (set at 2%) plus inflation, plus the gap between inflation and a target rate (again set to 2%), plus the output gap. Both the inflation and output gap terms are given equal weightings of +0.5, meaning that if the inflation gap and/or the output gap are positive, the actual interest rate should be above the neutral rate. In the long-run, Taylor’s rule assumes that the nominal neutral interest rate is 4%. Using that rule, the Fed should have started raising interest rates in early 2012.
There are two reasons, however, why the Fed has not followed this rule. First, the rule assumes that the neutral real rate is 2% when most evidence suggests it is now lower. The Fed believes that the neutral real rate will rebound to 1.75% in the long-run, but suspects it will remain substantially lower for some time. Our analysis suggests that, due to the fall in the economy’s potential growth rate, the higher cost of financial intermediation, increased risk aversion and higher precautionary saving, it will remain closer to 1.0%.
Second, Taylor’s rule does not take into account the possibility that there is more slack than the unemployment rate and most output gap measures suggest. We’re not convinced by this, but it is a big part of the Fed’s thinking. In light of the declining participation rate, the Fed could even stop responding to the falling unemployment rate altogether and instead focus on the employment-to-population ratio.
Since the employment-to-population ratio remains well below its historical trend, it indicates that there is still plenty of slack in the labour market. An alternative policy rule that includes the employment-to-population ratio implies that the first interest rate hike won’t come until 2017.
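Dales’s description translates directly into arithmetic. Here is a minimal Python sketch of the rule as he states it (the 2% neutral rate, 2% inflation target, and 0.5 weights are from the passage; the inputs in the examples are illustrative, not actual data):

```python
def taylor_rule(inflation, output_gap, neutral_real_rate=2.0,
                inflation_target=2.0, w_inflation=0.5, w_output=0.5):
    """Taylor's original rule: the nominal rate equals the neutral real
    rate plus inflation, plus 0.5 times the inflation gap, plus 0.5
    times the output gap (all in percentage points)."""
    return (neutral_real_rate + inflation
            + w_inflation * (inflation - inflation_target)
            + w_output * output_gap)

# With inflation at target and a closed output gap, the rule returns
# the 4% long-run nominal neutral rate mentioned in the passage.
print(taylor_rule(inflation=2.0, output_gap=0.0))  # 4.0

# Capital Economics' point: cut the neutral real rate to 1.0% and the
# prescribed rate drops a full point for the same inputs.
print(taylor_rule(inflation=2.0, output_gap=0.0, neutral_real_rate=1.0))  # 3.0
```

This also shows why the choice of rule matters so much for the legislation: swap in a lower neutral rate or a different slack measure and the same formula prescribes a very different policy path.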
What Anat Admati, a Stanford University finance professor, is saying about megabanks shouldn’t be controversial: make these institutions less likely to (again) implode and crash the US economy by making them less reliant on borrowed money for lending. Or to flip it around, megabanks should have to raise six times as much of their funding in the form of equity as they currently do. Now as Admati told New York Times reporter Binyamin Appelbaum, her 30% equity target isn’t a hard number:
She freely concedes that there is no particular science behind her 30 percent equity figure. The point, she says, is that 5 percent is the wrong ballpark. The proper baseline, in her view, is what the market imposes on other kinds of companies. “We have too much belief that we can be precise,” she said. “I don’t mean 20 percent. I don’t mean 30 percent. I mean add a digit. I mean a lot more.”
The megabanks are no fans of this idea. They argue “holding” more equity capital would increase funding costs, lower return on equity, and force them to cut back on lending. But imagine how much stronger the US economy would be today if we had avoided the Great Recession. As Charles Calomiris and Allan Meltzer noted in a Wall Street Journal op-ed earlier this year, all the big New York banks with 15% equity or more made it through the Great Depression, and the “losses suffered by major banks in the recent crisis would not have wiped out their equity if it had been equal to 15% of their assets.”
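The Calomiris-Meltzer claim is really just balance-sheet arithmetic. A toy sketch (the balance-sheet numbers here are made up for illustration, not actual bank data):

```python
def equity_survives(assets, equity_ratio, loss_rate):
    """In this toy model, a bank fails when losses on assets exceed
    its equity cushion."""
    return assets * loss_rate < assets * equity_ratio

# A 10% hit to assets wipes out a bank funded with 5% equity...
print(equity_survives(assets=1000, equity_ratio=0.05, loss_rate=0.10))  # False
# ...but is absorbed by one funded with 15% equity.
print(equity_survives(assets=1000, equity_ratio=0.15, loss_rate=0.10))  # True
```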
Anyway, Admati and Martin Hellwig counter all the common objections in their “The Bankers’ New Clothes: What’s Wrong With Banking and What to Do About It.” And a paper from which the book derives offers this chart:
And this from the Appelbaum piece:
A 2010 analysis funded by the Clearing House Association, a trade group, concluded that an increase of 10 percentage points in capital requirements would raise interest rates by 0.25 to 0.45 percentage points. This, in the view of Ms. Admati, is a small price to pay for fewer crises. She notes that debt is cheaper than equity largely because of government subsidies — not just deposit insurance but also tax deductions for interest payments on other kinds of debt — so more equity would basically transfer costs from taxpayers to banks. Even in the short term, she says, the economic impact may well be positive. A study last year by Benjamin H. Cohen, an economist at the Bank for International Settlements, found that banks with more capital tended to make more loans.
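To see why the estimated rate impact is so modest, consider a static blended-funding-cost calculation. The equity and debt costs below are hypothetical, and the sketch deliberately ignores the offset Admati emphasizes (required returns on equity fall as a bank gets safer), so if anything it overstates the cost:

```python
def blended_funding_cost(equity_share, cost_of_equity, cost_of_debt):
    """Static weighted-average funding cost; holds the cost of equity
    fixed, which overstates the impact of requiring more equity."""
    return equity_share * cost_of_equity + (1 - equity_share) * cost_of_debt

# Hypothetical costs: 10% required return on equity, 3% on debt.
low = blended_funding_cost(0.05, 0.10, 0.03)   # funded with 5% equity
high = blended_funding_cost(0.15, 0.10, 0.03)  # funded with 15% equity
print(round((high - low) * 100, 2))  # 0.7 (percentage points)
```

Even under these deliberately unfavorable assumptions, a 10-point jump in the equity share moves the blended cost by well under a point; with the Modigliani-Miller offset, the pass-through shrinks toward the Clearing House’s 0.25-to-0.45-point range.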
The alternative to weather-proofing the megabanks with equity capital is, what, relying on regulators and politicians? Yet four years after the passage of Dodd-Frank, the feds recently found the “living wills” submitted by the 11 most complicated megabanks to be totally inadequate. The documents fail to, as the FDIC’s Thomas Hoenig puts it, “convincingly demonstrate how, in failure, any one of these firms could overcome obstacles to entering bankruptcy without precipitating a financial crisis.”
Are higher equity requirements the magic bullet? I don’t know. Calomiris and Meltzer suggest additional options, such as requiring banks to maintain cash reserves at the Fed and a “contingent capital” funding requirement under which special debt would convert to equity whenever “the market value ratio of a bank’s equity is below 9% for more than 90 days.” Another intriguing idea comes from economists Atif Mian and Amir Sufi, authors of “House of Debt.” They advocate a new kind of “risk sharing” mortgage contract in which falling home prices would reduce payments and principal for borrowers, and lenders would share in the capital gains from rising prices. What all these policies have in common is creating a less debt-driven and risky financial system. That idea, combined with smarter macroeconomic stabilization policy by the Fed — such as nominal GDP targeting — might mean the most recent economy-shattering financial shock could be our last.
The future US economy won’t just reward smarts, argues economist Tyler Cowen in “Average is Over,” it will also value softer skills such as reliability, conscientiousness, self-discipline, and consistency. And this piece from the WSJ backs him up (h/t to TC himself):
As we’re reminded over and over again, the job market for computer whizzes, actuaries and mechanical engineers is blazing hot.
But while understanding how to code in the Python programming language or run complex Excel models is a great thing, the truly winning formula is to possess both hard technical skills and softer social skills. And the premium for having that killer combo is rising, according to a new paper from the Institute for Social, Behavioral and Economic Research at the University of California, Santa Barbara.
Thirty years ago, the worker who was above average on both dimensions earned about 3% more than the worker who was above average on one or the other dimension, according to Catherine Weinberger, the paper’s author and a research economist at UCSB. Since the year 2000, the differential has grown to about 10%, she said. The paper will be published in a forthcoming edition of the Review of Economics and Statistics.
“What I’m seeing is a slow, steady trend over 30 years,” said Weinberger. “It looks like most of the change happened between 1975 and 1995, and since then, things have flattened out to a new equilibrium that’s persisting.”
Back in 2011, there was a lot of concern about the eurozone debt crisis killing the weak US recovery. Here is analysis from late that year by the San Francisco Fed: “Prudence suggests that the fragile state of the US economy would not easily withstand turbulence coming across the Atlantic. A European sovereign debt default may well sink the United States back into recession.”
Then the crisis subsided. In July 2012, European Central Bank boss Mario Draghi vowed to do “whatever it takes” to preserve the euro within the bank’s mandate. Don’t fight the ECB: with aggressive monetary easing a possibility, Spanish and Italian government borrowing costs fell. But there wasn’t much of an economic recovery from the double-dip recession. Unemployment remains high, growth slight. And the headlines this week have been terrible. Italy has fallen back into recession, and Germany — the region’s most important economy — might be close. With a regional recovery that Draghi calls “weak, fragile and uneven,” a triple-dip downturn seems a real possibility.
Oh, and by the way, the debt problems have not gone away. AEI’s Desmond Lachman, who says “time is running out” for the ECB to adopt a Federal Reserve-like, bond-buying program, offers this context:
According to Eurostat, by the end of the first quarter of 2014, the public debt to GDP ratio had reached as high as 174 percent in Greece and more than 130 percent in Ireland, Italy, and Portugal. More troubling yet, those ratios showed little sign of stabilizing with those ratios having risen over the past year by 15 percentage points in Greece and by 5 percentage points in both Italy and Portugal.
So, yeah, a flare-up of the eurozone crisis is a plausible possibility. And how would that affect the US economy, which seems stuck in 2%ish growth mode? Is it really significantly less fragile today than it was back then? I wonder. After all, a bad winter led to a sharp 2.1% drop in first quarter RGDP. As Lachman adds, “ … we should start bracing ourselves for another, and possibly yet more virulent, round of the European sovereign debt crisis once the Federal Reserve starts to raise interest rates next year.” But maybe the Fed holds off raising rates in that scenario. The next two years could be awfully choppy ones for an Obama presidency, which has yet to get the economic rebound it always expected.
Without reform, too many US schools will continue to prepare kids to be 20th century factory workers
Pew has a really fascinating survey looking at how the tech community sees automation affecting the labor force. More on it later, but this bit really jumped out at me:
Howard Rheingold, a pioneering Internet sociologist and self-employed writer, consultant, and educator, noted, “The jobs that the robots will leave for humans will be those that require thought and knowledge. In other words, only the best-educated humans will compete with machines. And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.”
And there are plenty of educators who would keep schools in factory mode. AEI’s Rick Hess:
In short, progressives worked hard to import the best practices of private industry to American education. (This is why the familiar school model bears such an uncanny resemblance to the early 20th-century factory.) That model made some sense at the time, helping to manage a massive expansion of schooling in a world lacking modern data tools and communications technology.
Since that era, though, K-12’s routines and rules have been largely preserved, as if in amber. Intrusive regulations, petty bureaucracy, and balky decision making have bizarrely come to be treated as part of the schoolhouse culture.
In the private sector, meanwhile, old giants like Univac, TWA, and Xerox have given way to Google, JetBlue, and Apple. These new ventures had the freedom to build brand new cultures, staffing models, evaluation systems, and delivery models that took full advantage of evolving talent, tools, and technology.
In schooling, this passing of the baton is absent. Instead, leaders inherit long-standing schools or school systems. As successive generations of entrepreneurs and thinkers in other sectors have revisited basic assumptions and built wholly new organizations, educational leadership preparation has clung to aged norms. Indeed, those championing more flexible, creative, and quality- and cost-conscious leadership have been pilloried for pursuing “corporate-style school reform.”
Today’s education leaders too often find themselves ill-equipped to negotiate a world marked by profound changes in what we ask of schools, the labor market, and the available tools and technology. These changes have created new challenges and vast new opportunities. Given that, there’s little reason to expect that century-old assumptions about how to organize and deliver schooling are necessarily the smartest way forward.
Global birthrates are declining, but not fast enough for some environmentalists and climate-change worriers. A new piece by New York Times economics columnist Eduardo Porter suggests one way to reduce carbon emissions is by reducing population growth. Porter writes: “As the threat of climate change has evolved from a fuzzy faraway concept to one of the central existential threats to humanity, [some scholars] have noted that reducing the burning of fossil fuels might be easier if there were fewer of us consuming them.” And he quotes one expert as saying:
“There is a strong case to be made that the world faces sustainability issues whether it has nine billion people, seven billion people or four billion people,” said John Wilmoth, who directs the United Nations Population Division. “Nobody can deny that population growth is a major driving factor, but in terms of the policy response, what are you going to do?”
First, we may be closer to zero global population growth than many realize. The UN projects the current world population of roughly 7.2 billion will rise to 9.6 billion by 2050 and then to 10.9 billion in 2100. But demographer Sanjeev Sanyal of Deutsche Bank thinks the UN is way off. His calculations point to a population peak around 2055 of 8.7 billion, declining to 8.0 billion by 2100 — a level 2.9 billion below the UN’s prediction.
Second, Porter’s piece plays into the view that the way to deal with climate change is through less — less population, less energy. The reality is that we are a high-energy planet. And going forward, we are going to need more energy, not less, as we bring more of humanity out of poverty and into the middle class. We are going to need, as the Breakthrough Institute puts it, cheaper, cleaner, more abundant energy.
Third, what is the deal with the left and population growth? Phil Longman, author of The Empty Cradle, addresses the issue in a 2007 interview:
It’s fair to say that most self-described “progressives” don’t agree with me that low fertility is a problem. Many environmentalists, for example, believe that fewer people means a cleaner environment. Other progressives suppose that a decline in population would increase the amount of food and other resources available to the poor. Many feminists, gays, and “childless by choice” people in general feel threatened by suggestions that society needs more children. And when it’s pointed out that the lowest birthrates are generally found among the most “progressive” people, then the conversation gets really heated.
On all these counts, I believe progressives are in denial. Today in the United States, for example, we have far cleaner air and water than we did in the 1940s, when the population was just half its current size. That’s no paradox. Population growth is a spur to more efficient and cleaner use of resources, so our cities are no longer choked with smoke from steam engines and our cars get far better mileage and are far less polluting. Similarly, population growth is what drove us as a society to find far more productive ways to grow food. Thanks to increased crop yields, per capita food production is higher than ever, even as world population surpasses 6 billion. At the same time, there is more forested land in the United States than in the 19th century because so much less acreage is needed for farmland.
Progressives also tend to forget that many of their positions on human reproduction, such as a “woman’s right to choose,” only won widespread support when fears of overpopulation began to pervade the culture in the 1960s and ’70s. Until then, bans on abortion, birth control, and homosexuality, for example, were justified in many people’s minds by fears of underpopulation, which left questions of human reproduction too important to be settled by individual “choice.” They also forget that if progressives themselves “forget to have children” then the future belongs to people who have opposing values. Finally, progressives forget that without a growing population, such “crown jewels” of the welfare state as Social Security lose their financial sustainability.
In his 2011 State of the Union address, President Obama dreamily depicted a future, just 25 years hence, where almost all Americans would have easy access to high-speed rail. “This could allow you to go places in half the time it takes to travel by car. For some trips, it will be faster than flying — without the pat-down.”
I think we’ll see a space elevator before America gets a nation-spanning bullet train system. The New York Times finds that “despite the administration spending nearly $11 billion since 2009 to develop faster passenger trains, the projects have gone mostly nowhere and the United States still lags far behind Europe and China.” Reporter Ron Nixon cites experts who fault the Obama administration for spreading that dough around rather than focusing on key projects like improving Acela Express service in the Northeast Corridor, “the most likely place for high-speed rail.”
Acela averages just 80 mph between Washington and New York, although the trains are capable of going twice as fast. Old infrastructure and rail-sharing slow it down. According to the piece, “a plan to bring it up to the speed of Japanese bullet-trains, which can top 220 m.p.h., will take $150 billion and 26 years, if it ever happens.”
Of course, the US is a lot different than many nations with high-speed rail, nations which have “higher population densities, higher gas prices, higher rates of public-transportation use and lower rates of car ownership.” Not that bulleteers doubt a high-speed future will happen:
But Andy Kunz, executive director of the U.S. High-Speed Rail Association, thinks the United States will eventually have a high-speed rail system that connects the country. “It’s going to take some years after gas prices rise and highways fill up with traffic,” he said. “It’s going to happen because we won’t have a choice.”
Wait, we have no choice but to build high-speed rail because the highways are going to “fill up with traffic”? Let me bring your attention to this chart from Paul Kedrosky:
It depends on where you are as to whether traffic’s declining, but national statistics have shown that per capita travel in vehicles is roughly where it was in the late 1990s. And vehicle miles traveled, the number of miles that cars are moving, is roughly where it was in the early 2000s. And this is after a 90-year increase in the amount of automobile traffic, from, you know, the 1910s to the early 21st century.
So people have sort of this expectation that traffic will continue to increase because it has increased in the past for such a long period of time. And this is built into traffic forecasts. It’s built into the way people view the world. But beginning in the early 2000s, in particular after 9/11, with a number of societal changes, including things like increased gas prices, changing demographics, changing employment, the amount of travel that people were engaging in individually has leveled off and has declined on a per capita level.
Now, a lot of technologies have a lifecycle. They have an S curve associated with them. So they start off, they grow slowly, then there’s a period of very rapid growth. Then it levels off. And then something new happens and the S curve begins to decline. And so we sort of see that in a number of things that we no longer use as much as we used to. U.S. mail volume increased for decades upon decades until the 1990s. And it started to level off in the 1990s with the rise of email and the Internet, and then, in the early 2000s, has fallen off a cliff.
So is that going to happen with travel? And so this is the scenario that I’m painting. And so it’s a future scenario. I don’t want to say that I predicted that this would happen, but this is one thing that might happen that nobody is taking any account of right now.
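The lifecycle Kedrosky describes is the textbook logistic S curve. A minimal sketch (the parameters are purely illustrative, not a fitted model of travel demand):

```python
import math

def logistic(t, ceiling=1.0, midpoint=0.0, steepness=1.0):
    """Logistic S curve: slow start, rapid middle, plateau at the ceiling."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Adoption accelerates toward the midpoint, then levels off:
for t in (-4, -2, 0, 2, 4):
    print(round(logistic(t), 3))  # 0.018, 0.119, 0.5, 0.881, 0.982
```

The point of the analogy: traffic forecasts that extrapolate the steep middle of the curve will badly overshoot once travel demand reaches the plateau.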
And here is a good piece by Tim Worstall on the potential impact of driverless cars on high-speed rail. Technology will change how America gets around. But more likely it will be 21st century technology, not that of the 1970s.
The US economy has added 200,000 jobs or more for six straight months. That hasn’t happened since 1997. Job gains during the streak have averaged 244,000 monthly. Back in 1997, gains for the entire year averaged 283,000 monthly (the equivalent of 345,000 with today’s larger population). Improving jobless claims numbers could be hinting at acceleration. RDQ Economics:
Initial jobless claims were below forecasts, falling 14,000 to 289,000 in the week ending August 2nd. The four-week average of claims declined 4,000 to 293,500, the lowest level since February 2006. … Unemployment claims are signaling a potential pickup in net job creation (from an already fairly strong pace) and a further drop in the unemployment rate. The four-week average of claims has dropped for five straight weeks, to its lowest level since February 2006, and the insured unemployment rate has been at 1.9% for four consecutive weeks, which is within 0.1 percentage point of the cycle low for the last expansion (which occurred in 2006-07 when the unemployment rate was around 4½% and the short-term unemployment rate was between 3½% and 4% (currently at 6.2% and 4.2%, respectively)).
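The four-week average RDQ cites is just a trailing moving average that smooths out weekly noise in the claims series. A quick sketch (the weekly figures here are hypothetical, not the actual 2014 data):

```python
def four_week_average(weekly_claims):
    """Trailing four-week moving average of weekly initial claims,
    the smoothing used in notes like RDQ's."""
    return [sum(weekly_claims[i - 3:i + 1]) / 4
            for i in range(3, len(weekly_claims))]

# Hypothetical weekly claims, in thousands:
weekly = [303, 312, 297, 289, 302, 289]
print(four_week_average(weekly))  # [300.25, 300.0, 294.25]
```

Note how a single noisy week barely moves the smoothed series, which is why analysts watch the four-week average rather than the headline weekly print.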
Most jobs added in this recovery have been full-time jobs. But the level of part-time work remains above prerecession levels even as the labor market overall has slowly recovered. As the Atlanta Fed’s Ellyn Terry points out, “Today, there are about 12 percent more people working part-time than before the recession and about 2 percent fewer people working full-time hours.”
But why? Again, Terry:
Weak business conditions and the increase in the relative cost of full-time employees have been about equally important drivers of the increase in the use of part-time employees thus far. Thinking about the future, firms mostly cite an expected rise in the relative cost of full-time workers as the reason for shifting toward more part-time employees. So while there are some clear structural forces at work, a large amount of uncertainty around the future cost of health care and the future pace of economic growth also exists. The extent to which these factors will ultimately affect the share working part-time remains to be seen.
My fellow The Week columnist John Aziz responds to my piece advocating the death of the corporate income tax with this: “Hey, conservatives: I’ll trade you the corporate tax for a tax on pollution.”
Now Aziz more or less agrees with my point that corporate taxes are bad for economic growth. But he finds my call to eliminate them to be fanciful:
Abolishing the corporate tax is an evergreen conservative talking point, a piece of red meat to be thrown periodically to the base, like arguing that life begins at conception or demanding a repeal of ObamaCare. The thing is, no matter how much energy is expended (and sometimes not even from conservatives), ditching the corporate tax altogether has not yet been achieved. Conservative, liberal, and centrist politicians have all kept it intact, albeit while lowering it substantially from almost 50 percent after World War II to a little under 20 percent today. … cutting the corporate tax to zero right now would blow a big hole in the government’s finances — currently 10 percent of the tax base, or the not-so-paltry sum of $280 billion.
He misunderstands my plan. I would pay for any revenue loss by raising taxes on investment income. Not only does this eliminate budgetary concerns, it makes the plan more egalitarian than simply axing the tax. Note that many liberals have called for equalizing labor and capital income tax rates. So my plan actually isn’t a piece of red meat, and it is thus far more likely to happen.
And here is Aziz’s counter:
But there’s no reason why that cannot be made up by other taxes on things that we actually want to disincentivize. Like, rather importantly, pollution. Corporations should be taxed to some degree for the negative side effects they create, like pollution and environmental degradation. But it’s not like all corporations are polluting at the same rate. Most firms create far less pollution than the owner of coal-fired power stations, for example. If pollution is the problem, tax the polluters directly for their pollution and environmental degradation. Tax carbon emissions by the ton. That will also have the benefit of further incentivizing the development of clean energy, which is recognized as the best antidote to climate change. Don’t tax every corporation at the same rate — lower the tax rates for those polluting at a much lower rate.
Economists love this idea. But given the intense political standoff over climate change, I would argue it is far less likely to happen than my idea. How about this: instead of having the carbon tax replace the corporate tax, have it replace all manner of energy subsidies and regulations. Back in 2011, several AEI scholars illustrated one way a carbon tax might work:
Subsidies for ethanol and other alternative fuels would be abolished (basic research on renewable energy would be funded on the same stringent terms as other basic research). As discussed above, business and household energy tax credits would be abolished. Regulations designed to lower greenhouse gas emissions would be repealed.
Instead, a tax on greenhouse gas emissions (“carbon tax”) would be imposed. The tax would be similar to Revenue Option 35 in the Congressional Budget Office’s March 2011 Budget Options book, but would be implemented as a tax rather than as a cap-and-trade program. The tax would take effect in 2013 and be phased in at a uniform pace over five years, so that the 2017 tax equaled the level prescribed for that year in the CBO option, slightly more than $26 per metric ton of CO2 equivalent. As prescribed in the CBO option, the tax would thereafter increase at a 5.6 percent annual rate through 2050.
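The schedule described above is easy to sketch in code. The linear phase-in below is my reading of “phased in at a uniform pace,” and the $26 base rate is approximate; the exact path in the CBO option may differ:

```python
def carbon_tax_per_ton(year, base_year=2017, base_rate=26.0,
                       growth=0.056, phase_in_start=2013):
    """Sketch of the schedule described above: a uniform five-year
    phase-in from 2013 to the (roughly) $26-per-ton rate in 2017,
    then 5.6% annual growth through 2050."""
    if year < phase_in_start:
        return 0.0
    if year < base_year:
        # One-fifth of the full rate is added in each phase-in year.
        return base_rate * (year - phase_in_start + 1) / 5
    return base_rate * (1 + growth) ** (year - base_year)

print(carbon_tax_per_ton(2013))            # 5.2
print(carbon_tax_per_ton(2017))            # 26.0
print(round(carbon_tax_per_ton(2018), 3))  # 27.456
```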
Also, Northwestern University’s Monica Prasad argues that Denmark’s experience with carbon taxes is instructive in avoiding pitfalls:
Unless steps are taken to lock the tax revenue away from policymakers and invest in substitutes, a carbon tax could lead to more revenue rather than to less pollution. … If we want to reduce carbon emissions, then we should follow Denmark’s example: tax the industrial emission of carbon and return the revenue to industry through subsidies for research and investment in alternative energy sources. Indeed, approximately 40% of Danish carbon tax revenue is used for environmental subsidies, while the other 60% is returned to industry.