Saturday, March 28, 2015

Politicians or Technocrats: Who Splits the Cake?

In most countries, non-elected central bankers conduct monetary policy, while fiscal policy is chosen by elected representatives. It is not obvious that this arrangement is appropriate. In 1997, Alan Blinder suggested that Americans leave "too many policy decisions in the realm of politics and too few in the realm of technocracy," and that tax policy might be better left to technocrats. The bigger issue these days is whether an independent, non-elected Federal Reserve can truly be "accountable" to the public, and whether Congress should have more control over monetary policy.

The standard theoretical argument for delegating monetary policy to a non-elected bureaucrat is the time inconsistency problem. As Blinder explains, "the pain of fighting inflation (higher unemployment for a while) comes well in advance of the benefits (permanently lower inflation). So shortsighted politicians with their eyes on elections would be tempted to inflate too much." But time inconsistency problems arise in fiscal policy too. Blinder adds, "Myopia is a serious practical problem for democratic governments because politics tends to produce short time horizons -- often extending only until the next election, if not just the next public opinion poll. Politicians asked to weigh short-run costs against long-run benefits may systematically shortchange the future."

So why do we assign some types of policymaking to bureaucrats and some to elected officials? And could we do better? In a two-paper series on "Bureaucrats or Politicians?," Alberto Alesina and Guido Tabellini (2007) study how tasks should be allocated between bureaucrats and politicians. In their model, neither bureaucrats nor politicians are purely "benevolent"; their objective functions differ according to how each group is held accountable:
Politicians are held accountable, by voters, at election time. Top-level bureaucrats are accountable to their professional peers or to the public at large, for how they have fulfilled the goals of their organization. These different accountability mechanisms induce different incentives. Politicians are motivated by the goal of pleasing voters, and hence winning elections. Top bureaucrats are motivated by "career concerns," that is, they want to fulfill the goals of their organization because this improves their external professional prospects in the public or private sector.
The model implies that, for the purpose of maximizing social welfare, some tasks are better suited for bureaucrats and others for politicians. When the public can only imperfectly monitor effort and talent, elected politicians are preferable for tasks where effort matters more than ability. Bureaucrats are preferable for highly technical tasks, like monetary policy, regulatory policy, and public debt management. This is in line with Blinder's intuition; he argued that extremely technical judgments ought to be left to technocrats and value judgments to legislators, while recognizing that both monetary and fiscal policy involve substantial amounts of both technical and value judgments.

Alesina and Tabellini's model also helps formalize and clarify Blinder's intuition on what he calls "general vs. particular" effects. Blinder writes:
Some public policy decisions have -- or are perceived to have -- mostly general impacts, affecting most citizens in similar ways. Monetary policy, for example...is usually thought of as affecting the whole economy rather than particular groups or industries. Other public policies are more naturally thought of as particularist, conferring benefits and imposing costs on identifiable groups...When the issues are particularist, the visible hand of interest-group politics is likely to be most pernicious -- which would seem to support delegating authority to unelected experts. But these are precisely the issues that require the heaviest doses of value judgments to decide who should win and lose. Such judgments are inherently and appropriately political. It's a genuine dilemma.
Alesina and Tabellini consider a bureaucrat and an elected official each assigned a task of "splitting a cake." Depending on the nature of the cake splitting task, a bureaucrat is usually preferable; specifically, "with risk neutrality and fair bureaucrats, the latter are always strictly preferred ex ante. Risk aversion makes the bureaucrat more or less desirable ex ante depending on how easy it is to impose fair treatment of all voters in his task description." Nonetheless, politicians prefer to cut the cake themselves, because it helps them get re-elected with less effort through an incumbency advantage:
The incumbent’s redistributive policies reveal his preferences, and voters correctly expect these policies to be continued if he is reelected. As they cannot observe what the opponent would do, voters face more uncertainty if voting for the opponent...This asymmetry creates an incumbency advantage: the voters are more willing to reappoint the incumbent even if he is incompetent... The incumbency advantage also reduces equilibrium effort.
An interesting associated implication is that "it is in the interest of politicians to pretend that they are ideologically biased in favor of specific groups or policies, even if in reality they are purely opportunistic. The ideology of politicians is like their brand name: it keeps voters attached to parties and reduces uncertainty about how politicians would act once in office."

According to this theoretical model, we might be better off leaving both monetary and fiscal policy to independent bureaucratic agencies. But fiscal policy is inherently redistributive, and politicians prefer not to delegate redistributive tasks. "This might explain why delegation to independent bureaucrats is very seldom observed in fiscal policy, even if many fiscal policy decisions are technically very demanding."

Both Blinder and Alesina and Tabellini--writing in 1997 and 2007, respectively--made the distinction that tax policy, unlike monetary policy, is redistributive or "particularist." Since then, that distinction seems much less obvious. Back in 2012, Mark Spitznagel opined in the Wall Street Journal that "The Fed is transferring immense wealth from the middle class to the most affluent, from the least privileged to the most privileged." Boston Fed President Eric Rosengren countered that "The net effect [of recent Fed policy] is substantially weighted towards people that are borrowers not lenders, towards people that are unemployed versus people that are employed." Other Fed officials and academic economists are also paying increasing attention to the redistributive implications of monetary policy.

Monetary policymakers can no longer ignore the distributional effects of monetary policy--and neither can voters and politicians. Alesina and Tabellini's model predicts that the more elected politicians recognize the "cake splitting" aspect of monetary policy, the more they will want to take that task back for themselves. Expect stronger cries for "accountability." However, according to the model, the redistributive nature of monetary policy probably strengthens the argument for leaving it to independent technocrats. The caveat is that "the result may be reversed if the bureaucrat is unfair and implements a totally arbitrary redistribution." The Fed's role in redistributing resources strengthens its case for independence if and only if it takes equity concerns seriously.

Wednesday, March 4, 2015

Federal Reserve Communication with Congress

In 2003, Ben Bernanke described a central bank's communication strategy as "regular procedures for communicating with the political authorities, the financial markets, and the general public." The fact that there are three target audiences of monetary policy communication, with three distinct sets of needs and concerns, is an important point. Alan Blinder and coauthors note that most of the research on monetary policy communication has focused on communication with financial markets. In my working paper "Fed Speak on Main Street," I focus on communication with the general public. But with the recent attention on Congressional calls to audit or reform the Fed, communication with the third audience, political authorities, also merits attention.

Bernanke added that "a central bank's communications strategy, closely linked to the idea of transparency, has many aspects and many motivations." One such motivation is accountability. Federal Reserve communication with political authorities is contentious because of the tension that can arise between accountability and freedom from political pressure. As Laurence Meyer explained in 2000:
Even a limited degree of independence, taken literally, could be viewed as inconsistent with democratic ideals and, in addition, might leave the central bank without appropriate incentives to carry out its responsibilities. Therefore, independence has to be balanced with accountability--accountability of the central bank to the public and, specifically, to their elected representatives. 
It is important to appreciate, however, that steps to encourage accountability also offer opportunities for political pressure. The history of the Federal Reserve's relationship to the rest of government is one marked by efforts by the rest of government both to foster central bank independence and to exert political pressure on monetary policy.
It is worthwhile to take a step back and ask what is meant by accountability. Colloquially and in the academic literature, the term accountability has become "an ever-expanding concept." Accountability does not mean that the Fed needs to please every member of Congress, or even some of them, all the time. If it did, there would be no point in having an independent central bank! So what does accountability mean? A useful synonym is answerability. The Fed's accountability to Congress means that the Fed must answer to Congress--which requires, of course, that Congress ask something of the Fed. David Wessel explains that this can be a problem:
Congress is having a hard time fulfilling its responsibilities to hold the Fed accountable. Too few members of Congress know enough to ask good questions at hearings where the Fed chair testifies. Too many view hearings as a way to get themselves on TV or to score political points against the other party.
Accountability, in the sense of answerability, is a two-way street requiring effort on the parts of both the Fed and Congress. Recent efforts by Congress to impose "accountability" would relieve Congress of the more onerous part of its task. The Federal Reserve Accountability and Transparency Act, introduced in 2014, would require that the Fed adopt a rules-based policy. The legislation states that "Upon determining that plans…cannot or should not be achieved, the Federal Open Market Committee shall submit an explanation for that determination and an updated version of the Directive Policy Rule."

In 1976, Senator Hubert Humphrey made a similar proposal: the president would submit recommendations for monetary policy, and the Federal Reserve Board of Governors would have to explain any proposed deviation within fifteen days. This proposal did not pass, but other legislation in the late 1970s did change the Federal Reserve's objectives and standards for accountability. Prompted by high inflation, the Federal Reserve Reform Act of 1977 made price stability an explicit policy goal. Representative Augustus Hawkins and Senator Humphrey introduced the Full Employment and Balanced Growth Act of 1978, also known as the Humphrey-Hawkins Act, which added a full employment goal and obligated the Fed Chair to make semiannual reports to Congress. It was signed into law by President Jimmy Carter on October 27, 1978.

The Humphrey-Hawkins Act, though initially resisted by FOMC members, did improve the Fed's accountability or answerability to Congress. The requirement of twice-yearly reports to Congress literally required the Fed Chair to answer Congress' questions (though likely, for a time, in "Fed Speak.") The outlining of the Fed's policy goals defined the scope of what Congress should ask about. In terms of the Fed's communication strategy with Congress, its format, broadly, is question-and-answer. Its content is the Federal Reserve's mandates. Its tone--clear or obfuscatory, helpful or hostile--has varied over time and across Fed officials and members of Congress. 

Since 1978, changes to the communication strategy, such as the announcement of a 2% long-run goal for PCE inflation in 2012, have attempted to facilitate the Fed's answerability to Congress. The proposal to require that the Fed follow a rule-based policy goes beyond the requirements of accountability. The Fed must be accountable for the outcomes of its policy, but that does not mean restricting the flexibility of its actions. Unusual or extreme economic conditions require discretion on the part of monetary policymakers, which they must be prepared to explain as clearly as possible. 

Janet Yellen remarked in 2013 that "By the eve of the recent financial crisis, it was established that the FOMC could not simply rely on its record of systematic behavior as a substitute for communication--especially under unusual circumstances, for which history had little to teach" [emphasis added]. Imposing systematic behavior in the form of rules-based policy is an even poorer substitute. As monetary policy begins to normalize, Congress' role in monetary policy is to question the Fed, not to bully it.

Wednesday, February 18, 2015

My Job Market Experience

Many of you have been following my blog for a large share of my time in graduate school. I will graduate from Berkeley this May. It is with great pleasure and gratitude that I announce what comes next for me: I have accepted a position as Assistant Professor of Economics at Haverford College.

Haverford is a liberal arts college just a few miles from Philadelphia. One unique aspect of Haverford is that all of the 1,187 undergraduate students complete a senior thesis. I will really enjoy introducing students to the research process and hope to be inspired by their creativity. Haverford is part of a consortium with Bryn Mawr and Swarthmore. It is also in close proximity to the Philadelphia Fed, the University of Pennsylvania, and Villanova. Thus, there is a fantastic network of researchers and scholars that I very much look forward to joining.
The duck pond on Haverford's arboretum campus

The job market process has been an unforgettable experience. I met so many amazing people who impressed me with their energy and dedication to teaching, research, and policy. If the people I met are anything like a representative sample of professors and policymakers, our students and our country are in good hands. I wish I could have all of you as colleagues and hope for many opportunities to see you in the future. 

Wednesday, February 4, 2015

Let's Not Give Up on Mobility

Gregory Clark, an economic historian at UC Davis, writes that "Social mobility barely exists but let’s not give up on equality." His research using rare surnames to track social mobility over several centuries finds that social mobility in England is still just as low in today's "modern noisy meritocracy" as it was in pre-industrial times. He concludes that "Lineage is destiny. At birth, most of your social outcome is predictable from your family history." He emphasizes that this is true not only in the UK, but also in Sweden, China, and the U.S.

The subtitle of Clark's article says that "Too much faith is placed in the idea of movement between the classes. Still, there are other ways to tackle the unfairness of society." He elaborates:
"Given that social mobility rates are immutable, it is better to reduce the gains people make from having high status, and the penalties from low status. The Swedish model of compressed inequality is a realistic option, the American dream of rapid mobility an illusion...While mobility seems governed by a social physics that defies easy intervention, the magnitude of social inequalities varies considerably across societies, and can be strongly influenced by social institutions. We cannot change the winners in the social lottery, but we can change the value of their prizes."
I agree that meritocracy alone does not guarantee high mobility, and therefore that making a society more meritocratic is not the silver bullet solution to inequality. But I wouldn't go so far as to say that social mobility rates are immutable. First, just because social mobility has not improved in the past doesn't mean that it's incapable of improving in the future. Second, the fact that social mobility varies across countries and even within countries implies that it should be possible to increase mobility.

Within the United States, there is substantial geographic variation in social mobility. Raj Chetty, Nathaniel Hendren, Patrick Kline, and Emmanuel Saez use administrative records on the incomes of 40 million children and their parents to study intergenerational mobility in 741 local areas. It turns out that the American dream is more viable in some places than in others. Chetty summarizes:
Looking at the probability that a child who grew up in a bottom-quintile income family reaches the top-quintile of the income distribution across areas of the U.S., we find substantial variation across regions. In some parts of the U.S. – such as the Southeast and the Rust Belt – children in the bottom quintile have less than a 5% chance of reaching the top quintile. In other areas, such as the Great Plains and the West Coast, children in the bottom quintile have more than a 15% chance of reaching the top quintile. 
There is substantial variation in upward mobility even among large cities that have comparable economies and demographics. Cities such as Salt Lake City and San Jose have rates of mobility comparable to Denmark and other countries with the highest rates of mobility in the world. Other cities – such as Charlotte and Milwaukee – offer children very limited prospects of escaping poverty. These cities have lower rates of mobility than any developed country for which data are currently available.
Not only does mobility vary across geographic regions, it varies in systematic ways. Chetty et al. find that proxies for the quality of the K-12 school system are positively correlated with mobility. So are social capital indices, which measure the strength of social networks and community involvement. For example, high upward mobility areas tend to have higher participation in local civic organizations and religious activity. Clark says that low social mobility is here to stay because of "strong transmission within families of the attributes that lead to social success." But certainly there are other methods of transmission, particularly in schools and communities, that could be developed or improved. 
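For readers curious how the headline statistic in the quoted passage is constructed, here is a minimal sketch of computing the probability that a child from a bottom-quintile family reaches the top quintile. It uses synthetic parent-child income data with an assumed persistence parameter; Chetty et al. work with administrative tax records on actual parent-child pairs.

```python
# Illustrative computation of the mobility statistic quoted above: the
# probability that a child from a bottom-quintile family reaches the top
# quintile. Synthetic data with an assumed persistence parameter; Chetty et
# al. use administrative tax records on real parent-child income pairs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000
parent_income = rng.lognormal(mean=10.5, sigma=0.8, size=n)
# Child income loosely tied to parent income (0.4 elasticity, for illustration)
child_income = np.exp(0.4 * np.log(parent_income) + rng.normal(6.3, 0.7, size=n))

df = pd.DataFrame({"parent": parent_income, "child": child_income})
df["parent_q"] = pd.qcut(df["parent"], 5, labels=False)
df["child_q"] = pd.qcut(df["child"], 5, labels=False)

bottom = df[df["parent_q"] == 0]
upward = (bottom["child_q"] == 4).mean()
print(f"P(child in top quintile | parent in bottom quintile) = {upward:.1%}")
```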

Improving the living conditions of the poor is extremely important regardless of the level of mobility in a society. So I agree with Clark that we shouldn't give up on equality. But I think we shouldn't give up on mobility either. History tells us what has happened, not what can happen. We don't know what could happen under a sustained and ambitious effort to improve upward mobility.

Monday, January 12, 2015

Targeting from Below

Inflation targeting (IT) was widely adopted by central banks in both industrialized and emerging-market countries in the 1990s and 2000s. Typically, the motivation for switching to an IT framework has been to reduce and stabilize high and volatile inflation, and studies of IT find that it has been successful in this regard.

But how does IT fare when inflation is instead too low? Michael Ehrmann of the Bank of Canada addresses this question in "Targeting Inflation from Below: How do Inflation Expectations Behave?" He notes that the Bank of Japan adopted IT in an environment of undesirably-low inflation. Likewise, when the Federal Reserve announced a 2% inflation target in 2012, core inflation had been below 2% for some time. "Although designed to lower inflation and inflation expectations," Ehrmann writes, "IT is now charged with the objective to raise them, a challenge that has not yet been studied extensively."

Since inflation targeting is supposed to work by anchoring expectations near the target, Ehrmann studies the inflation expectations of professional forecasters to compare the performance of IT when inflation is persistently low, near target, and persistently high. He uses data from Consensus Economics for Australia, Canada, the euro area, France, Germany, Italy, Japan, the Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, the United Kingdom and the United States. He uses three different indicators of the extent to which inflation expectations are anchored: (1) the extent to which expectations depend on lagged inflation; (2) forecaster disagreement; and (3) the extent to which inflation expectations get revised in response to news. On all three counts, he finds that under persistently low inflation, expectations can become disanchored. That is, inflation expectations are more dependent on lagged inflation; forecasters disagree more; and inflation expectations get revised down in response to lower-than-expected inflation. He also finds that when inflation is persistently low, expectations do not get revised upward in response to higher-than-expected inflation.
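As a concrete illustration, here is a minimal sketch of how the first two indicators might be computed from survey microdata. This is not Ehrmann's actual specification; the DataFrame and its column names are hypothetical stand-ins for a panel of forecasters.

```python
# Minimal sketch (not Ehrmann's actual specification) of two anchoring
# indicators, computed from a hypothetical panel of forecaster data with
# columns: 'date', 'forecaster', 'expected_inflation', 'lagged_inflation'.
import pandas as pd
import statsmodels.formula.api as smf

def anchoring_indicators(df: pd.DataFrame) -> dict:
    # (1) Sensitivity of expectations to lagged inflation: a slope near zero
    #     suggests well-anchored expectations; a large positive slope suggests
    #     expectations drift with recent inflation outcomes.
    sensitivity = smf.ols("expected_inflation ~ lagged_inflation", data=df).fit()

    # (2) Forecaster disagreement: cross-sectional dispersion of point
    #     forecasts within each survey round, averaged over time.
    disagreement = df.groupby("date")["expected_inflation"].std()

    return {
        "beta_on_lagged_inflation": sensitivity.params["lagged_inflation"],
        "mean_disagreement": disagreement.mean(),
    }
```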

These findings are important. We tend to worry about inflation expectations becoming disanchored when inflation goes too high above target or stays above target for too long. This is partly why the inflation target gets treated more like a ceiling than a symmetric target. But in the current situation in the U.S. and Europe, it may be that keeping inflation too low is weakening the anchoring of expectations. A temporary burst of above-target inflation seems unlikely to damage the anchor.


Thursday, December 11, 2014

Mixed Signals and Monetary Policy Discretion

Two recent Economic Letters from the Federal Reserve Bank of San Francisco highlight the difficulty of making monetary policy decisions when alternative measures of labor market slack and the output gap give mixed signals. In Monetary Policy when the Spyglass is Smudged, Early Elias, Helen Irvin, and Òscar Jordà show that conventional policy rules based on the output gap and on the deviation of the unemployment rate from its natural rate generate wide-ranging policy rate prescriptions. Similarly, in Mixed Signals: Labor Markets and Monetary Policy, Canyon Bosler, Mary Daly, and Fernanda Nechio calculate the policy rate prescribed by a Taylor rule under alternative measures of labor market slack. The figure below illustrates the large divergence in alternative prescribed policy rates since the Great Recession.

Source: Bosler, Daly, and Nechio (2014), Figure 2
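To see in miniature how mixed signals translate into divergent prescriptions, here is an illustrative Taylor-type rule evaluated under alternative slack measures. The coefficients and the example gap readings are assumptions for illustration, not the specifications used in the SF Fed Economic Letters.

```python
# Illustrative Taylor-type rule showing how alternative slack measures imply
# different prescribed policy rates. The coefficients (1.5 on the inflation
# gap, 1.0 on the slack gap) and the example gap readings are assumptions for
# illustration, not the specifications in the SF Fed Economic Letters.
def taylor_rule(inflation, slack_gap, r_star=2.0, pi_star=2.0,
                phi_pi=1.5, phi_gap=1.0):
    """Prescribed nominal policy rate, in percent."""
    return r_star + inflation + phi_pi * (inflation - pi_star) + phi_gap * slack_gap

inflation = 1.5  # percent, below the 2% objective
# Hypothetical readings of slack from alternative measures (output-gap units):
slack_measures = {
    "Unemployment-gap based": -0.5,
    "Broader labor market gap": -2.0,
    "Output gap": -1.0,
}

for name, gap in slack_measures.items():
    print(f"{name:25s} -> prescribed rate: {taylor_rule(inflation, gap):.2f}%")
```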
Uncertainty about the state of the labor market makes monetary policy more challenging and requires more discretion and judgment on the part of policymakers. What do discretion and judgment look like in practice? I think they should involve reasoning qualitatively to determine whether some decisions lead to possible outcomes that are definitively worse than others. For example, here's how I would reason through the decision about whether to raise the policy rate under high uncertainty about the labor market:

Suppose it is May and the Fed is deciding whether to increase the target rate by 25 basis points. Assume inflation is still at or slightly below 2%, and the Fed would like to tighten monetary policy if and only if the "true" state of the labor market x is sufficiently high, say above some threshold X. The Fed does not observe x but has some very noisy signals about it. It thinks there is about a fifty-fifty chance that x is above X, so it is not at all obvious whether tightening is appropriate. There are four possible scenarios:

  1. The Fed does not increase the target rate, and it turns out that x>X.
  2. The Fed does not increase the target rate, and it turns out that x<X.
  3. The Fed does increase the target rate, and it turns out that x>X.
  4. The Fed does increase the target rate, and it turns out that x<X.

Cases (2) and (3) are great. In case (2), the Fed did not tighten when tightening was not appropriate, and in case (3), the Fed tightened when tightening was appropriate. Cases (1) and (4) are "mistakes." In case (1), the Fed should have tightened but did not, and in case (4), the Fed should not have tightened but did. Which is worse?

If we think just about immediate or short-run impacts, case (1) might mean inflation goes higher than the Fed wants and x goes even higher above X; case (4) might mean unemployment goes higher than the Fed wants and x falls even further below X. Maybe you have an opinion on which of those short-run outcomes is worse, or maybe not. But the bigger difference between the outcomes comes when you think about the Fed's options at its subsequent meeting. In case (1), the Fed could choose how much they want to raise rates to restrain inflation. In case (4), the Fed could keep rates constant or reverse the previous meeting's rate increase.

In case (4), neither option is good. Keeping the target at 25 basis points is too restrictive. Labor market conditions were bad to begin with and keeping policy tight will make them worse. But reversing the rate increase is a non-starter. The markets expect that after the first rate increase, rates will continue on an upward trend, as in previous tightening episodes. Reversing the rate increase would cause financial market turmoil, damage credibility, and require policymakers to admit that they were wrong. Case (1) is much more attractive. I think any concern that inflation could take off and get out of control is unwarranted. In the space between two FOMC meetings, even if inflation were to rise above target, inflation expectations are not likely to rise too far. The Fed could easily restrain expectations at the next meeting by raising rates as aggressively as needed.

So going back to the four possible scenarios, (2) and (3) are good, and (4) is much worse than (1). If the Fed raises rates, scenarios (3) and (4) are about equally likely. If the Fed holds rates constant, (1) and (2) are about equally likely. Thus, holding rates constant under high uncertainty about the state of the labor market is a better option than potentially raising rates too soon.
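One way to make this qualitative ranking concrete is a toy expected-loss comparison. The loss numbers below are invented to encode the argument above (a premature hike is costlier than a delayed one because reversal is a non-starter); they are not estimates of actual welfare costs.

```python
# Toy expected-loss comparison of "raise" vs. "hold" under a 50/50 belief
# about whether true labor market strength x exceeds the threshold X.
# The loss numbers are invented to encode the qualitative argument above:
# cases (2) and (3) cost nothing, case (1) is a mild, easily corrected
# mistake, and case (4) is a costly mistake because reversal is a non-starter.
p_above = 0.5  # subjective probability that x > X

losses = {
    ("hold", "above"): 1.0,   # case (1): should have tightened, didn't
    ("hold", "below"): 0.0,   # case (2): correctly held
    ("raise", "above"): 0.0,  # case (3): correctly tightened
    ("raise", "below"): 3.0,  # case (4): tightened too soon, hard to undo
}

def expected_loss(action):
    return (p_above * losses[(action, "above")]
            + (1 - p_above) * losses[(action, "below")])

for action in ("hold", "raise"):
    print(f"expected loss of '{action}': {expected_loss(action):.2f}")
# With these assumed losses, holding has the lower expected loss, matching
# the conclusion in the text.
```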

Sunday, December 7, 2014

Most Households Expect Interest Rates to Increase by May

Two new posts on the New York Federal Reserve's Liberty Street Economics Blog describe methods of inferring interest rate expectations from interest rate futures and forwards and from surveys conducted by the Trading Desk of the New York Fed. In a post at the Atlanta Fed's macroblog, "Does Forward Guidance Reach Main Street?," economists Mike Bryan, Brent Meyer, and Nicholas Parker ask, "But what do we know about Main Street’s perspective on the fed funds rate? Do they even have an opinion on the subject?"

To broach this question, they use a special question on the Business Inflation Expectations (BIE) Survey. A panel of businesses in the Sixth District were asked to assign probabilities that the federal funds rate at the end of 2015 would fall into various ranges. The figure below compares the business survey responses to the FOMC's June projection. The similarity between businesspeoples' expectations and FOMC members' expectations for the fed funds rate is taken as an indication that forward guidance on the funds rate has reached Main Street.
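For reference, a probability-bin question like the BIE's can be summarized by the probability-weighted midpoint of the bins. The sketch below uses made-up bin midpoints and probabilities, not actual survey responses.

```python
# Sketch of one way to summarize a probability-bin question like the BIE's:
# the probability-weighted midpoint of the bins. The bin midpoints and
# probabilities below are made-up examples, not actual survey responses.
bin_midpoints = [0.125, 0.375, 0.625, 0.875, 1.25, 1.75]   # percent
probabilities = [0.15, 0.25, 0.30, 0.15, 0.10, 0.05]       # must sum to 1

expected_rate = sum(m * p for m, p in zip(bin_midpoints, probabilities))
print(f"Implied expected end-of-2015 fed funds rate: {expected_rate:.2f}%")
```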


What about the rest of Main Street-- the non-business-owners? We don't know too much about forward guidance and the average household. I looked at the Michigan Survey of Consumers for some indication of households' interest rate expectations. One year ago, in December 2013, 61% of respondents on the Michigan Survey said they expected interest rates to rise in the next twelve months. Only a third of consumers expected rates to stay approximately the same. According to the most recently-available edition of the survey, from May 2014, 63% of consumers expect rates to rise by May 2015.

The figure below shows the percent of consumers expecting interest rates to increase in the next twelve months in each survey since 2008. I use vertical lines to indicate several key dates. In December 2008, the federal funds rate target was reduced to 0 to 0.25%, marking the start of the zero lower bound period. Nearly half of consumers in 2009 and 2010 expected rates to rise over the next year. In August 2011, Fed officials began using calendar-based forward guidance when they announced that they would keep rates near zero until at least mid-2013. Date-based forward guidance continued until December 2012. Over this period, less than 40% of consumers expected rate increases.

In December 2012, the Fed adopted the Evans Rule, announcing that the fed funds rate would remain near zero at least as long as the unemployment rate remained above 6.5%. In December 2013, the Fed announced a modest reduction in the pace of its asset purchases, emphasizing that this "tapering" did not indicate imminent rate increases. The share of consumers expecting rate increases made a large jump from 55% in June 2013 to 68% in July 2013, and has remained in the high-50s to mid-60s since then.


But since 1978, the percent of consumers expecting an increase in interest rates has tracked reasonably closely with the realized change in the federal funds rate over the following twelve months (fed funds rate in month t+12 minus fed funds rate in month t). In the figure below, the correlation between the two series is 0.26. As a back-of-the-envelope calculation, if we regress the twelve-month-ahead change in the federal funds rate on the percent of consumers expecting a rate increase, the estimated coefficients indicate that when 63% of consumers expect a rate increase, rates are predicted to rise by about 25 basis points over the next year.
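Here is a sketch of that back-of-the-envelope regression, assuming a monthly DataFrame with hypothetical column names for the Michigan Survey share and the realized twelve-month-ahead change in the funds rate.

```python
# Sketch of the back-of-the-envelope regression described above, assuming a
# monthly DataFrame with hypothetical column names:
#   'pct_expect_increase': share of Michigan Survey respondents expecting
#                          higher interest rates over the next 12 months (%)
#   'ffr_change_12m':      fed funds rate in month t+12 minus rate in month t
import pandas as pd
import statsmodels.formula.api as smf

def predicted_rate_change(df: pd.DataFrame, pct_expecting: float) -> float:
    fit = smf.ols("ffr_change_12m ~ pct_expect_increase", data=df).fit()
    return (fit.params["Intercept"]
            + fit.params["pct_expect_increase"] * pct_expecting)

# Example: predicted_rate_change(michigan_df, 63) gives the implied
# twelve-month-ahead change in the funds rate when 63% expect an increase.
```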


This survey data does not tell us for sure that forward guidance has reached Main Street. The survey does not specifically refer to the federal funds rate, just to interest rates in general. And households could simply have noticed that rates have been low for a long time and expect them to increase, even without hearing the Fed's forward guidance. In an average month, 51% of consumers expect rates to rise over the next year, with a standard deviation of 15 percentage points. So the values we're seeing lately are about a standard deviation above the historical average, though they have been higher at times in the past. In the third and fourth quarters of 1994, after the Fed had already begun tightening interest rates, 75-80% of consumers expected further rate increases. At the start of 1994, however, only half of consumers anticipated the rate increases that would come.

In May 2004, the FOMC noted that accommodation could “be removed at a pace that is likely to be measured.” That month, 85% of consumers (a historical maximum) correctly expected rates to increase.