Friday, October 13, 2017

Rethinking Macroeconomic Policy

I had the pleasure of attending “Rethinking Macroeconomic Policy IV” at the Peterson Institute for International Economics. I highly recommend viewing the panels and materials online.

The two-day conference left me wondering what it actually means to “rethink” macro. The conference title refers to rethinking macroeconomic policy, not macroeconomic research or analysis, but of course these are related. Adam Posen’s opening remarks expressed dissatisfaction with DSGE models, VARs, and the like, and these sentiments were occasionally echoed in the other panels in the context of the potentially large role of nonlinearities in economic dynamics. Then, in the opening session, Olivier Blanchard talked about whether we need a “revolution” or “evolution” in macroeconomic thought. He leans toward the latter, while his coauthor Larry Summers leans toward the former. But what could either of these look like? How could we replace or transform the existing modes of analysis?

I looked back on the materials from Rethinking Macroeconomic Policy of 2010. Many of the policy challenges discussed at that conference are still among the biggest challenges today. For example, low inflation and low nominal interest rates limit the scope of monetary policy in recessions. In 2010, raising the inflation target and strengthening automatic fiscal stabilizers were both suggested as possible policy solutions meriting further research and discussion. Inflation and nominal rates are still very low seven years later, and higher inflation targets and stronger automatic stabilizers are still discussed, but what I don’t see is a serious proposal for change in the way we evaluate these policy proposals.

Plenty of papers use basically standard macro models and simulations to quantify the costs and benefits of raising the inflation target. Should we care? Should we discard them and rely solely on intuition? I’d say: probably yes, and probably no. Will we (academics and think tankers) ever feel confident enough in these results to make a real policy change? Maybe, but then it might not be up to us.

Ben Bernanke raised probably the most specific and novel policy idea of the conference, a monetary policy framework that would resemble a hybrid of inflation targeting and price level targeting. In normal times, the central bank would have a 2% inflation target. At the zero lower bound, the central bank would allow inflation to rise above the 2% target until inflation over the duration of the ZLB episode averaged 2%. He suggested that this framework would have some of the benefits of a higher inflation target and of price level targeting without some of the associated costs. Inflation would average 2%, so distortions from higher inflation associated with a 4% target would be avoided. The possibly adverse credibility costs of switching to a higher target would also be minimized. The policy would provide the usual benefits of history-dependence associated with price level targeting, without the problems that this poses when there are oil shocks.
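To make the makeup logic concrete, here is one way to write the condition (my own notation, a sketch rather than Bernanke's formulation): if a ZLB episode lasts from period $T+1$ through $T+n$, the central bank commits to

\[
\frac{1}{n}\sum_{t=T+1}^{T+n} \pi_t = 2\%, \qquad \text{equivalently} \qquad P_{T+n} = P_T\,(1.02)^n,
\]

so any inflation shortfall early in the episode is made up with above-2% inflation later, and the price level exits the episode on the same path it would have followed under continuous 2% inflation.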

It’s an exciting idea, and intuitively really appealing to me. But how should the Fed ever decide whether or not to implement it? Bernanke mentioned that economists at the Board are working on simulations of this policy. I would guess that these simulations involve many of the assumptions and linearizations that rethinking types love to demonize. So again: Should we care? Should we rely solely on intuition and verbal reasoning? What else is there?

Later, Jason Furman presented a paper titled "Should policymakers care whether inequality is helpful or harmful for growth?" in which he discusses examples of evaluating tradeoffs between output and distribution in toy models of tax reform. He begins with the Mankiw and Weinzierl (2006) example of a 10 percent reduction in labor taxes paid for by a lump-sum tax. In a Ramsey model with a representative agent, this policy change would raise output by 1 percent. When the representative agent is replaced with agents matching the actual 2010 distribution of U.S. incomes, only 46 percent of households would see their after-tax income increase and 41 percent would see their welfare increase. More generally, he claims that "the growth effects of tax changes are about an order of magnitude smaller than the distributional effects of tax changes—and the disparity between the welfare and distribution effects is even larger" (14). He concludes:
“a welfarist analyzing tax policies that entail tradeoffs between efficiency and equity would not be far off in just looking at static distribution tables and ignoring any dynamic effects altogether. This is true for just about any social welfare function that places a greater weight on absolute gains for households at the bottom than at the top. Under such an approach policymaking could still be done under a lexicographic process—so two tax plans with the same distribution would be evaluated on the basis of whichever had higher growth rates…but in this case growth would be the last consideration, not the first” (16).

As Posen then pointed out, Furman's paper and his discussants largely ignored the discussions of macroeconomic stabilization and business cycles that dominated the previous sessions on monetary and fiscal policy. The panelists acknowledged that recessions, and hysteresis in unemployment, can exacerbate economic disparities. But the fact that stabilization policy was so disconnected from the initial discussion of inequality and growth shows just how much rethinking still has not occurred.

In 1987, Robert Lucas calculated that the welfare costs of business cycles are minimal. In some sense, we have “rethought” this finding. We know that it is built on assumptions of a representative agent and no hysteresis, among other things. And given the emphasis in the fiscal and monetary policy sessions on avoiding or minimizing business cycle fluctuations, clearly we believe that the costs of business cycle fluctuations are in fact quite large. I doubt many economists would agree with the statement that “the welfare costs of business cycles are minimal.” Yet, the public finance literature, even as presented at a conference on rethinking macroeconomic policy, still evaluates welfare effects of policy using models that totally omit business cycle fluctuations, because, within those models, such fluctuations hardly matter for welfare. If we believe that the models are “wrong” in their implications for the welfare effects of fluctuations, why are we willing to take their implications for the welfare effects of tax policies at face value?
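For reference, the textbook back-of-the-envelope behind Lucas's number (with illustrative parameter values, not Lucas's exact calibration): a household with constant relative risk aversion $\gamma$ would give up roughly

\[
\lambda \approx \tfrac{1}{2}\,\gamma\,\sigma_c^{2}
\]

of its consumption to eliminate aggregate consumption fluctuations, where $\sigma_c$ is the standard deviation of (log) consumption around trend. With $\gamma = 2$ and $\sigma_c = 0.015$, that is about 0.02 percent of consumption, which is why, within such models, business cycles hardly matter for welfare.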

I don’t have a good alternative—but if there is a Rethinking Macroeconomic Policy V, I hope some will be suggested. The fact that the conference speakers are so distinguished is both an upside and a downside. They have the greatest understanding of our current models and policies, and in many cases were central to developing them. They can rethink, because they have already thought, and moreover, they have large influence and loud platforms. But they are also quite invested in the status quo, for all they might criticize it, in a way that may prevent really radical rethinking (if it is really needed, which I’m not yet convinced of). (A more minor personal downside is that I was asked multiple times whether I was an intern.)

If there is a Rethinking Macroeconomic Policy V, I also hope that there will be a session on teaching and training. The real rethinking is going to come from the next generations of economists. How do we help them learn and benefit from the current state of economic knowledge without being constrained by it? This session could also touch on continuing education for current economists. What kinds of skills should we be trying to develop now? What interdisciplinary overtures should we be making?

Thursday, September 28, 2017

An Inflation Expectations Experiment

Last semester, my senior thesis advisee Alex Rodrigue conducted a survey-based information experiment via Amazon Mechanical Turk. We have coauthored a working paper detailing the experiment and results, titled "Household Informedness and Long-Run Inflation Expectations: Experimental Evidence." I presented our research at my department seminar yesterday with the twin babies in tow, and my tweet about the experience is by far my most popular to date.

Consumers' inflation expectations are very dispersed; on household surveys, many people report long-run inflation expectations that are far from the Fed's 2% target. Are these people unaware of the target, or do they know it but remain unconvinced of its credibility? In another paper in the Journal of Macroeconomics, I provide some non-experimental evidence that public knowledge of the Fed and its objectives is quite limited. In this paper, we directly treat respondents with information about the target and about past inflation, in randomized order, and see how they revise their reported long-run inflation expectations. We also collect some information about their prior knowledge of the Fed and the target, their self-reported understanding of inflation, and their numeracy and demographic characteristics. About a quarter of respondents knew the Fed's target and two-thirds could identify Yellen as Fed Chair from a list of three options.

As shown in the figure above, before receiving the treatments, very few respondents forecast 2% inflation over the long run and only about a third even forecast in the 1-3% range. Over half report a multiple-of-5% forecast, which, as I argue in a recent paper in the Journal of Monetary Economics, is a likely sign of high uncertainty. When presented with a graph of the past 15 years of inflation, or with the FOMC statement announcing the 2% target, the average respondent revises their forecast around 2 percentage points closer to the target. Uncertainty also declines.

The results are consistent with imperfect information models because the information treatments are publicly available, yet respondents still revise their expectations after the treatments. Low informedness is part of the reason why expectations are far from the target. The results are also consistent with Bayesian updating, in the sense that high prior uncertainty is associated with larger revisions. But equally noteworthy is the fact that even after receiving both treatments, expectations are still quite heterogeneous and many still substantially depart from the target. So people seem to interpret the information in different ways and view it as imperfectly credible.
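The Bayesian intuition is just the textbook normal-normal updating formula (a sketch, not the estimation framework in the paper): if a respondent's prior about long-run inflation is $N(\mu_0, \sigma_0^2)$ and she treats the information treatment as a signal $x$ with noise variance $\sigma_x^2$, her posterior mean is

\[
\mu_1 = \frac{\sigma_x^{2}\,\mu_0 + \sigma_0^{2}\,x}{\sigma_0^{2} + \sigma_x^{2}},
\]

so the revision $\mu_1 - \mu_0$ is larger when prior uncertainty $\sigma_0^2$ is high, and a respondent who views the signal as less credible (higher $\sigma_x^2$) revises less.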

We look at how treatment effects vary by respondent characteristic. One interesting result is that, after receiving both treatments, the discrepancy between mean male and female inflation expectations (which has been noted in many studies) nearly disappears (see figure below).

There is more in the paper about how treatment effects vary with other characteristics, including respondents' opinion of government policy and their prior knowledge. We also look at whether expectations can be "un-anchored from below" with the graph treatment.



Thursday, September 14, 2017

Consumer Forecast Revisions: Is Information Really so Sticky?

My paper "Consumer Forecast Revisions: Is Information Really so Sticky?" was just accepted for publication in Economics Letters. This is a short paper that I believe makes an important point. 

Sticky information models are one way of modeling imperfect information. In these models, only a fraction (λ) of agents update their information sets each period. If λ is low, information is quite sticky, and that can have important implications for macroeconomic dynamics. There have been several empirical approaches to estimating λ. With micro-level survey data, a non-parametric and time-varying estimate of λ can be obtained by calculating the fraction of respondents who revise their forecasts (say, for inflation) at each survey date. Estimates from the Michigan Survey of Consumers (MSC) imply that consumers update their information about inflation approximately once every 8 months.
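In this approach, the stickiness estimate at each survey date is simply the share of panelists whose reported forecast changed since their previous interview, and the implied average time between updates is its reciprocal (my notation, a sketch of the standard calculation):

\[
\hat{\lambda}_t = \frac{\#\{\text{respondents who revise at } t\}}{\#\{\text{respondents observed at } t\}}, \qquad \text{average time between updates} \approx \frac{1}{\hat{\lambda}},
\]

so an estimated duration of 8 months corresponds to $\hat{\lambda} \approx 0.125$ per month.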

I show that two issues with estimates of information stickiness based on consumer survey microdata lead to substantial underestimation of the frequency with which consumers update their expectations. The first issue stems from data frequency: the rotating panel of MSC respondents takes the survey twice, six months apart, so a consumer may hold the same forecast in months t and t+6 but different forecasts in between. The second issue is that responses are reported to the nearest integer: a consumer may update her information, but if the update results in a sufficiently small revision, it will appear that she has not updated at all.
To quantify how much these issues matter, I use data from the New York Fed Survey of Consumer Expectations, which is available monthly and not rounded to the nearest integer. The updating frequency in this data is very high--at least 5 revisions per 8 months, as opposed to the 1 revision per 8 months found in the previous literature.

Then I transform the data so that it resembles the MSC data. Rounding the responses to the nearest integer lowers the estimated updating frequency a little. Sampling at the six-month frequency instead of monthly lowers it a lot, and I find estimates similar to the previous literature--an update about every 8 months.
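As a rough illustration of the mechanism (not the paper's actual computation), here is a small simulation: consumers update with some monthly probability, and we then infer the time between updates from exact monthly data, rounded monthly data, and rounded data observed six months apart. The updating probability and forecast distribution below are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_consumers, n_months = 10_000, 7
lam = 0.6  # hypothetical monthly probability of updating (illustrative, not an estimate)

# Each month, a consumer redraws her inflation forecast with probability lam;
# otherwise she carries last month's forecast forward.
forecasts = np.empty((n_consumers, n_months))
forecasts[:, 0] = rng.normal(3.0, 2.0, n_consumers)
for t in range(1, n_months):
    update = rng.random(n_consumers) < lam
    forecasts[:, t] = np.where(update, rng.normal(3.0, 2.0, n_consumers), forecasts[:, t - 1])

def implied_months_between_updates(f, months_per_observation):
    """Average months between updates, inferred as (observation gap) / (fraction revising)."""
    fraction_revising = np.mean(f[:, 1:] != f[:, :-1])
    return months_per_observation / fraction_revising

print(implied_months_between_updates(forecasts, 1))                       # close to the truth, ~1.7 months
print(implied_months_between_updates(np.round(forecasts), 1))             # slightly higher, from rounding
print(implied_months_between_updates(np.round(forecasts[:, [0, 6]]), 6))  # roughly 7-8 months
```

Observing only two rounded responses six months apart caps the number of detectable revisions at one per window, which mechanically pushes the implied duration toward the length of the observation window.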

So low-frequency data, and, to a lesser extent, rounded responses, result in large underestimates of revision frequency (or equivalently, overestimates of information stickiness). And if information is not really so sticky, then sticky information models may not be as good at explaining aggregate dynamics. Other classes of imperfect information models, or sticky information models combined with other classes of models, might be better.

Read the ungated version here. I will post a link to the official version when it is published.

Monday, August 21, 2017

New Argument for a Higher Inflation Target

On voxeu.org, Philippe Aghion, Antonin Bergeaud, Timo Boppart, Peter Klenow, and Huiyu Li discuss their recent work on the measurement of output and whether measurement bias can account for the measured slowdown in productivity growth. While the work is mostly relevant to discussions of the productivity slowdown and secular stagnation, I was interested in a corollary that ties it to discussions of the optimal level of the inflation target.

The authors note the high frequency of "creative destruction" in the US, which they define as when "products exit the market because they are eclipsed by a better product sold by a new producer." This presents a challenge for statistical offices trying to measure inflation:
The standard procedure in such cases is to assume that the quality-adjusted inflation rate is the same as for other items in the same category that the statistical office can follow over time, i.e. products that are not subject to creative destruction. However, creative destruction implies that the new products enter the market because they have a lower quality-adjusted price. Our study tries to quantify the bias that arises from relying on imputation to measure US productivity growth in cases of creative destruction.
They explain that this can lead to mismeasurement of TFP growth, which they quantify by examining changes in the share of incumbent products over time:
If the statistical office is correct to assume that the quality-adjusted inflation rate is the same for creatively destroyed products as for surviving incumbent products, then the market share of surviving incumbent products should stay constant over time. If instead the market share of these incumbent products shrinks systematically over time, then the surviving subset of products must have higher average inflation than creatively destroyed products. For a given elasticity of substitution between products, the more the market share shrinks for surviving products, the more the missing growth.
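Under CES demand with elasticity of substitution $\sigma$, the market share logic in the quoted passage can be turned into a rough back-of-the-envelope (a sketch of the approach, not the paper's full derivation): if the market share of surviving incumbent products falls from $S_t$ to $S_{t+1}$, then annual missing growth is approximately

\[
\text{missing growth} \approx \frac{1}{\sigma - 1}\,\ln\!\left(\frac{S_t}{S_{t+1}}\right),
\]

so a faster decline in incumbents' market share, or a lower elasticity of substitution, implies more unmeasured quality-adjusted growth.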
From 1983 to 2013, they estimate that "missing growth" averaged about 0.63% per year. This is substantial, but there is no clear time trend (i.e. there is not more missed growth in recent years), so it can't account for the measured productivity growth slowdown.

The authors suggest that the Fed should consider adjusting its inflation target upwards to "get closer to achieving quality-adjusted price stability." A few months ago, 22 economists including Joseph Stiglitz and Narayana Kocherlakota wrote a letter urging the Fed to consider raising its inflation target, in which they stated:
Policymakers must be willing to rigorously assess the costs and benefits of previously-accepted policy parameters in response to economic changes. One of these key parameters that should be rigorously reassessed is the very low inflation targets that have guided monetary policy in recent decades. We believe that the Fed should appoint a diverse and representative blue ribbon commission with expertise, integrity, and transparency to evaluate and expeditiously recommend a path forward on these questions.
The letter did not mention this measurement bias rationale for a higher target, but the blue ribbon commission they propose should take it into consideration.

Friday, August 18, 2017

The Low Misery Dilemma

The other day, Tim Duy tweeted about a New York Times article with the headline "Fed Officials Confront New Reality: Low Inflation and Low Unemployment." It took me a moment--and I'd guess I'm not alone--to even recognize how remarkable this is. Confront, not embrace, not celebrate.

The misery index is the sum of unemployment and inflation. Arthur Okun proposed it in the 1960s as a crude gauge of the economy, based on the fact that high inflation and high unemployment are both miserable (so high values of the index are bad). The misery index was pretty low in the 60s, in the 6% to 8% range, similar to where it has been since around 2014. Now it is around 6%. Great, right?
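For anyone who wants to eyeball it, here is a quick sketch of computing the index from FRED data, assuming the pandas_datareader package and using the UNRATE and CPIAUCSL series (year-over-year CPI inflation; other inflation measures would give slightly different numbers).

```python
import datetime
import pandas_datareader.data as web

start = datetime.datetime(1948, 1, 1)
unrate = web.DataReader("UNRATE", "fred", start)["UNRATE"]    # unemployment rate, percent
cpi = web.DataReader("CPIAUCSL", "fred", start)["CPIAUCSL"]   # CPI level
inflation = cpi.pct_change(12) * 100                          # year-over-year CPI inflation, percent

misery = (unrate + inflation).dropna()                        # Okun's misery index
print(misery.loc["1980"].max())   # peak misery around 1980
print(misery.tail(12))            # recent values, roughly 6% in 2017
```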

The NYT article notes that we are in an opposite situation to the stagflation of the 1970s and early 80s, when both high inflation and high unemployment were concerns. The misery index reached a high of 21% in 1980. (The unemployment data is only available since 1948).

Very high inflation and high unemployment are each individually troubling for the social welfare costs they impose (which are more obvious for unemployment). But observed together, they also troubled economists for seeming to run contrary to the Phillips curve-based models of the time. The tradeoff between inflation and unemployment wasn't what economists and policymakers had believed, and their misunderstanding probably contributed to the misery.

Though economic theory has evolved, the basic Phillips curve tradeoff idea is still an important part of central bankers' models. By models, I mean both the formal quantitative models used by their staffs and the way they think about how the world works. General idea: if the economy is above full employment, that should put upward pressure on wages, which should put upward pressure on prices.
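In reduced form, that chain of logic is something like the textbook expectations-augmented Phillips curve (a generic sketch, not any particular Fed model):

\[
\pi_t = \pi_t^{e} - \beta\,(u_t - u_t^{n}) + \varepsilon_t, \qquad \beta > 0,
\]

where $\pi_t$ is inflation, $\pi_t^{e}$ is expected inflation, $u_t$ is unemployment, and $u_t^{n}$ is the natural rate. Each of the "maybes" in the next paragraph amounts to doubting one piece of this equation: the level of $u_t^{n}$, the size of $\beta$, or the measurement of $u_t$ and $\pi_t$ themselves.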

So low unemployment combined with low inflation seems like a nice problem to have, but if they are indeed a new reality--that is, something that will last--then there is something amiss in that chain of logic. Maybe we are not at full employment, because the natural rate of unemployment is a lot lower than we thought, or we are looking at the wrong labor market indicators. Maybe full employment does not put upward pressure on wages, for some reason, or maybe we are looking at the wrong wage measures. For example, San Francisco Fed researchers argue that wage growth measures should be adjusted in light of retiring Baby Boomers. Or maybe the link between wage and price inflation has weakened.

Until policymakers feel confident that they understand why we are experiencing both low inflation and low unemployment, they can't simply embrace the low misery. It is natural that they will worry that they are missing something, and that the consequences of whatever that is could be disastrous. The question is what to do in the meanwhile.

There are two camps for Fed policy. One camp favors a wait-and-see approach: hold rates steady until we actually observe inflation rising above 2%. Maybe even let it stay above 2% for a while, to make up for the lengthy period of below-2% inflation. The other camp favors raising rates preemptively, just in case we are missing some sign that inflation is about to spiral out of control. This latter possibility strikes me as unlikely, but I'm admittedly oversimplifying the concerns, and also haven't personally experienced high inflation.


Thursday, August 10, 2017

Macro in the Econ Major and at Liberal Arts Colleges

Last week, I attended the 13th annual Conference of Macroeconomists from Liberal Arts Colleges, hosted this year by Davidson College. I also attended the conference two years ago at Union College. I can't recommend this conference strongly enough!

The conference is a response to the increasing expectation of high quality research at many liberal arts colleges. Many of us are the only macroeconomist at our college, and can't regularly attend macro seminars, so the conference is a much-needed opportunity to receive feedback on work in progress. (The paper I presented last time just came out in the Journal of Monetary Economics!)
This time, I presented "Inflation Expectations and the Price at the Pump" and discussed Erin Wolcott's paper, "Impact of Foreign Official Purchases of U.S. Treasuries on the Yield Curve."

There was a wide range of interesting work. For example, Gina Pieters presented “Bitcoin Reveals Unofficial Exchange Rates and Detects Capital Controls.” M. Saif Mehkari's work on “Repatriation Taxes” is highly relevant to today's policy discussions. Most of the presenters and attendees were junior faculty members, but three more senior scholars held a panel discussion at dinner. Next year, the conference will be held at Wake Forest.

I also attended a session on "Macro in the Econ Major" led by PJ Glandon. A link to his slides is here. One slide presented the image below, prompting an interesting discussion about whether and how we should tailor what is taught in macro courses to our perception of the students' interests and career goals.

Monday, August 7, 2017

Labor Market Conditions Index Discontinued

A few years ago, I blogged about the Fed's new Labor Market Conditions Index (LMCI). The index attempts to summarize the state of the labor market using a statistical technique that captures the primary common variation in 19 labor market indicators. I was skeptical about the usefulness of the LMCI for a few reasons, and as it turns out, it was discontinued as of August 3.
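The LMCI came from a dynamic factor model estimated by Board staff; as a rough illustration of the underlying idea (a sketch with placeholder data, not the Fed's model), one can standardize a panel of labor market indicators and extract their first principal component:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_months, n_indicators = 240, 19

# Placeholder panel: a latent "labor market conditions" factor plus idiosyncratic noise.
common = np.cumsum(rng.normal(size=n_months))
panel = pd.DataFrame(
    {f"indicator_{i}": 0.8 * common + rng.normal(scale=2.0, size=n_months)
     for i in range(n_indicators)}
)

# Standardize each indicator, then take the first principal component as a
# crude stand-in for the common factor the LMCI was designed to capture.
z = (panel - panel.mean()) / panel.std()
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(z.T))
index = z.values @ eigenvectors[:, -1]        # loadings on the largest eigenvalue

print("share of variance explained:", eigenvalues[-1] / eigenvalues.sum())
print(index[-12:])                            # last year of the constructed index
```

With simulated data like this, the first component explains most of the common variation by construction; with real data, the interesting question is how much such an index adds beyond the unemployment rate alone.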

The discontinuation is newsworthy because the LMCI was cited in policy discussions at the Fed, even by Janet Yellen. The index became high-profile enough that I was even interviewed about it on NPR's Marketplace.

One issue that I noted with the index in my blog was the following:
A minor quibble with the index is its inclusion of wages in the list of indicators. This introduces endogeneity that makes it unsuitable for use in Phillips Curve-type estimations of the relationship between labor market conditions and wages or inflation. In other words, we can't attempt to estimate how wages depend on labor market tightness if our measure of labor market tightness already depends on wages by construction.
This corresponds to one reason that is provided for the discontinuation of the index: "including average hourly earnings as an indicator did not provide a meaningful link between labor market conditions and wage growth."

The other reasons provided for discontinuation are that "model estimates turned out to be more sensitive to the detrending procedure than we had expected" and "the measurement of some indicators in recent years has changed in ways that significantly degraded their signal content."

I also noted in my blog post and on NPR that the index is almost perfectly correlated with the unemployment rate, meaning it provides very little additional information about labor market conditions. (Or, interpreted differently, the unemployment rate provides a lot of information about labor market conditions.) The development of the LMCI was part of a worthy effort to develop alternative informative measures of labor market conditions that can help policymakers gauge where we are relative to full employment and predict what is likely to happen to prices and wages. Since resources and attention are limited, I think it is wise to direct them toward developing and evaluating other measures.