The bathwater

Because I stopped reading it at about the third paragraph where he starts listing all the things that macroeconomists do but claims they don’t do (e.g. financial channels, model robustness, unemployment), can someone explain to me the argument Krugman makes?

From Sumner’s critique it sounds like Krugman wants to go back to old-school Keynesian modeling. Cool. If we can test those models and they have more to say than our current models, I’m down. Anyway, I gather from Sumner’s post that Krugman says we should make this move because modern macroeconomics has nothing to say about depressions, deflations and the zero lower bound.

How does he get from the failure of models to understand once-in-three-generations events to the failure of models in between those events? I mean, couldn’t we have two theories? One for the 95% of the time when the lower bound doesn’t bind and another for when it does?

I’m probably wrong because I can’t make heads or tails out of the General Theory, but confusing this point — thinking Keynes’ policy prescriptions for getting out of the Depression applied outside moments of depression — led to the failure of post-war macroeconomics. It led to terrible monetary policy and the mistaken idea that the Fed could fine-tune the economy.

Sam, Fred and the deer in the road

A story:

Fred is driving a car down a deserted highway in the middle of the night. His friend Sam is in the passenger seat. Fred reaches down to pick up something he dropped, taking his eyes off the road ((or maybe he doesn’t know he’s driving)). When he lifts his gaze back to the road, a deer has appeared.

What should Fred do? What should Sam do?

Clearly Fred should make a measured jerk of the wheel and swerve out of the way of the deer and Sam should do nothing. If Fred doesn’t swerve out of the way, he is responsible for the resulting crash. If Sam yanks on the wheel causing the car to go out of control and slam into a guard rail, he’s responsible for making the crash much worse.

The deer in the road is ultimately the cause of the crash, but as a force of nature it can’t be blamed. It is the actions or inactions of the people in the car that determine culpability. Because their actions determine whether or not the crash occurs and its severity, their culpability may not be limited to failing to prevent the crash; it may extend to making it worse.

Friedman and Schwartz found that Fred was at fault for the crash because he didn’t swerve when he should have. Ohanian has conjectured, and has found some support in the data for the idea, that Sam is at fault for making the crash worse because he jerked on the wheel.

But in the historical example, didn’t Sam jerk the wheel again after the car hit the guard rail? Yes, but jerks on the wheel can, by luck, right out-of-control cars. Luckily for Sam, Eggertsson has found this was the case in the historical example. In that case, a jerk on the wheel in the right direction happened to be productive.

Notice this doesn’t mean wildly jerking the wheel and sending cars out of control is a good idea. Also, the lucky productivity of the second jerk on the wheel doesn’t mean the first wasn’t bad.

“The Fed” now has two meanings

In a comment someone really, really smart (and handsome) said:

Believers in fiscal policy should be thinking of ways to fix the administration of it. Maybe an independent fiscal authority with a precisely defined policy instrument (e.g. stimulus checks and a consumption tax) and mandate (e.g. keep consumption spending smooth)?

Eric Leeper has a new paper (pdf):

In this lecture, I argue that there are remarkable parallels between how monetary and fiscal policies operate on the macro economy and that these parallels are sufficient to lead us to think about transforming fiscal policy and fiscal institutions as many countries have transformed monetary policy and monetary institutions. Making fiscal transparency comparable to monetary transparency requires fiscal authorities to discuss future possible fiscal policies explicitly. Enhanced fiscal transparency can help anchor expectations of fiscal policy and make fiscal actions more predictable and effective. As advanced economies move into a prolonged period of heightened fiscal activity, anchoring fiscal expectations will become an increasingly important aspect of macroeconomic policy.

The paper is human-friendly and is best read as a history of monetary economic thought. Leeper underscores that expectations are key to the success of fiscal policy (just as they are for monetary policy). For example, if the public expects deficit spending to be followed by increased taxes in the future, GDP can contract today (i.e. negative multipliers). He suggests that, as with monetary policy, transparency and commitment would make for better fiscal policy, but he points out:

For many reasons it is not an easy task to enhance fiscal transparency by providing information that helps to anchor expectations of future fiscal choices. The two most prominent reasons offered for the difficulties are:
(1) Fiscal policy is complex;
(2) Current governments cannot commit future governments.
These reasons are true. But they also underscore why enhanced fiscal transparency is potentially so valuable.

The best line in the paper:

Further complicating the fiscal decision process is a stunning fact: a clearly defined and attainable set of objectives for fiscal policy is rarely specified. Many fiscal authorities lay out their objectives on their web pages. Sustainable fiscal policy is the most common goal. But achieving sustainable policy is equivalent to aiming to avoid government insolvency. If a company’s CEO were to announce to shareholders that the company’s overarching goal is to avoid bankruptcy, the CEO would soon be replaced. Surely people can ask for more than minimal competence from their public officials.

He has a few suggestions for increasing transparency. First, have better projections of fiscal policy and its impact. “Fiscal authorities could produce more sophisticated projections, grounded in economic reasoning, that characterize outcomes that, as a matter of economic logic, could occur.” Second, there could be a fiscal Fed that sets deficits. Third, agree on some basic fiscal policy objectives that can be easily measured. Fourth, define some fiscal policy rules that meet those objectives. Lastly, establish credibility.

And my vote for the understatement of the year: “But fiscal decisions are only a small subset of the votes that legislators place, so fiscal votes can easily get lost in the morass of electoral politics.” I wonder, though, if it was ever thought that monetary authorities would have as much credibility, transparency and independence as they do today.

He ends on the ARRA, “I shall end with an egregious example of non-transparent fiscal policy: the recent $787 billion American fiscal stimulus plan.”

Gauti B. Eggertsson watch

I like this guy. This paper suggests deficit spending is productive at the zero lower bound because it increases expectations of inflation. It’s important, however, for the central bank to coordinate its policy with the fiscal authorities to make this result happen. He very nicely shows that rules-constrained central banks, while free of the inflation bias of discretionary central banks, have a deflation bias at the zero lower bound.

Here’s his Palgrave definition of liquidity trap. He argues if the Fed follows a Taylor rule, the liquidity trap really is a trap. Also,

if a central bank is discretionary, that is, unable to commit to future policy, and minimizes a standard loss function that depends on inflation and the output gap, it will also be unable to increase inflationary expectations at the zero bound, because it will always have an incentive to renege on an inflation promise or extended ‘quantitative easing’ in order to achieve low ex post inflation. This deflation bias has the same implication as the previous two irrelevance propositions, namely, that the public will expect any increase in the monetary base to be reversed as soon as deflationary pressures subside.
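The trap mechanics are easy to see in code. Below is a minimal sketch of a Taylor rule truncated at the zero bound — my own illustration with textbook coefficients (1.5 on inflation, 0.5 on the output gap), not anything taken from Eggertsson’s model:

```python
def taylor_rule_rate(inflation, output_gap, natural_rate=0.02,
                     inflation_target=0.02, phi_pi=1.5, phi_y=0.5):
    """Standard Taylor rule, truncated at the zero lower bound.
    All parameter values are illustrative assumptions."""
    desired = (natural_rate + inflation
               + phi_pi * (inflation - inflation_target)
               + phi_y * output_gap)
    return max(0.0, desired)

# In normal times the rule prescribes a positive rate...
print(taylor_rule_rate(inflation=0.02, output_gap=0.0))   # 0.04
# ...but with deflation and a large negative gap the desired rate is
# negative, so the actual rate is pinned at zero: following the rule
# mechanically, the bank has no further stimulus to offer.
print(taylor_rule_rate(inflation=-0.02, output_gap=-0.05))
```

The point of the sketch is just that once the desired rate goes negative, the rule itself gives the bank nothing more to do — which is why expectations, not the rule, have to do the work at the bound.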

This should make Sumner happy:

There is a large literature on the different policy rules that minimize the distortions associated with deflationary shocks… Eggertsson and Woodford (2003) and Wolman (2005)… show that, if the government follows a form of price level targeting, the optimal commitment solution can be closely or even completely replicated, depending on the sophistication of the targeting regime. Under the proposed policy rule the central bank commits to keep the interest rate at zero until a particular price level is hit, which happens well after the deflationary shocks have subsided.
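The commitment rule in the quote can be sketched in a few lines — the structure and numbers below are my own illustrative assumptions, not the Eggertsson–Woodford specification:

```python
def plt_rate(price_level, target_level, normal_rate):
    """Price-level-targeting commitment: hold the policy rate at zero
    until the price level catches up to the target path, even after
    deflationary pressures have subsided."""
    if price_level < target_level:
        return 0.0
    return normal_rate

# After a deflationary shock the price level sits below target,
# so the bank keeps rates at zero...
print(plt_rate(price_level=98.0, target_level=102.0, normal_rate=0.04))   # 0.0
# ...and only lifts off once the gap is closed.
print(plt_rate(price_level=102.5, target_level=102.0, normal_rate=0.04))  # 0.04
```

The design point is that the target is a *level*, not a rate: past deflation isn’t forgiven, so the public expects a period of above-normal inflation, which is exactly the expectations channel the quote describes.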

This should make Sumner unhappy:

Perhaps the most straightforward way to make a reflation credible is for the government to issue debt, for example by deficit spending. It is well known in the literature that government debt creates an inflationary incentive (see, for example, Calvo, 1978). Suppose the government promises future inflation and in addition prints one dollar of debt. If the government later reneges on its promised inflation, the real value of this one dollar of debt will increase by the same amount. Then the government will need to raise taxes to compensate for the increase in the real debt. To the extent that taxation is costly, it will no longer be in the interest of the government to renege on its promises to inflate the price level, even after deflationary pressures have subsided in the example above.

In other words, the best way to fight deflation, to increase expectations of inflation, is deficit spending.

The ARRA was a fight over distribution. As an economist, I have no dog in that fight. As a libertarian, well…

How often do firms change prices in a liquidity trap?

I dunno, but I’m reading Christiano, Eichenbaum, and Rebelo and they seem pretty confident they know. They say, “[w]e only consider values of κ for which the zero bound is binding, so we display results for 0.013 ≤ κ ≤ 0.038.” Kappa is the degree of price stickiness and, assuming I’ve done my sums right, their analysis depends on the assumption that firms update prices on average once every 5.88 to 6.90 periods. That seems like a very precise calibration range for a variable we’re not very certain about…
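For what it’s worth, here’s one way to back out an implied price duration from κ, assuming the textbook Calvo mapping κ = (1 − θ)(1 − βθ)/θ with β = 0.99. Christiano, Eichenbaum, and Rebelo’s exact mapping may differ, so don’t expect these numbers to match theirs (or mine above) precisely:

```python
import math

def calvo_theta(kappa, beta=0.99):
    """Solve kappa = (1 - theta)(1 - beta*theta) / theta for theta,
    the per-period probability of NOT resetting the price.
    Rearranged: beta*theta^2 - (1 + beta + kappa)*theta + 1 = 0."""
    a, b, c = beta, -(1.0 + beta + kappa), 1.0
    # Take the root inside (0, 1); the other root exceeds 1.
    return (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)

def implied_duration(kappa, beta=0.99):
    """Average number of periods between price changes: 1 / (1 - theta)."""
    return 1.0 / (1.0 - calvo_theta(kappa, beta))

# Smaller kappa means stickier prices, i.e. longer spells between changes.
for kappa in (0.013, 0.038):
    print(kappa, round(implied_duration(kappa), 2))
```
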

What do Christiano, Eichenbaum, and Rebelo mean when they say, “[w]e only consider values of κ for which the zero bound is binding, so we display results for 0.013 ≤ κ ≤ 0.038.”?

History of all of modern macro (ignoring the last 30 years)

Prof. Delong has a nice summary of the pre-RBC (aka “purist”) macro literature of the 70’s:

The underlying argument went something like this: (i) There is no sense talking about anything like “involuntary unemployment”: markets clear, and at all times people work as much as they want to work and firms produce as much as they want to produce. (ii) Workers work more relative to trend when they think their real wages are high, and firms produce more relative to trend when they think real prices for their products are high. (iii) Workers work less relative to trend when they think their real wages are low, and firms produce less relative to trend when they think real prices for their products are low. (iv) Workers and firms have rational expectations, so if they expect government fiscal or monetary policies to expand (or contract) nominal demand they will expect nominal wages or prices to rise (or fall) accordingly. (v) Thus if a predicted government-driven expansion (or contraction) raises (or lowers) nominal demand and thus their nominal wages or prices, they will understand that their real wages or prices have remained unchanged–and hence will not work more or less, and will not produce more or less. (vi) Only if nominal wages or prices rise (or fall) in an unexpected fashion will workers or firms get confused, and work and produce more (or less) than the trend. (vii) But with rational expectations the only cases in which government policy produces unexpected rises or falls in wages and prices is if the government policy is random. (viii) In which case its effects are random. (ix) And so government policies–not just fiscal but monetary policies too–cannot be stabilizing but only destabilizing. (x) Hence the best of all policies sets a predictable and constant rule for monetary and fiscal policy and does not deviate from it no matter what.

Regarding (i), (ii) and (iii), there’s a growing literature in macro on unemployment. Robert Hall has done a lot of work on this (e.g. note that he uses some psychologist-pleasing behavioral assumptions in wage bargaining). This macro research has done a very healthy thing; it has inspired research questions for micro folks (e.g.). The back and forth between macro and micro people on this issue (and the state-based vs. time-based pricing stuff) suggests, at least to me, that macro is far from insular. In fact, I’d say the field is learning real things about the real world. We’re learning practical things that pragmatists might find interesting if they paid attention.

Also, these employment search models — ones where employment doesn’t hinge solely on the labor/leisure trade-off (i.e. doesn’t require unemployment to be equivalent to vacation time) — have recently found themselves studied in the DSGE framework (e.g.), where they’ll quickly find applications in policy making.

All modern macro papers have sticky prices and wages. Recently maligned Nobel prize laureate Robert Lucas has found evidence consistent with sticky prices. In other words, (iv) and (v) may have been the purist view in the 70’s but they don’t reflect the current state of macro.

As to the consequent points (vii to x), modern macro doesn’t deny discretionary policy can have short-term effects, but once the public learns policy makers are using their discretion, they will come to expect it, the policy loses credibility, and it becomes inefficient or ineffective. Policy makers will say one thing but the public will suspect they’re lying. Policy makers will promise to be good and do what’s right in the long run, but the public will suspect they’ll do what’s most expedient. There’s a good discussion of this in the introduction of Michael Woodford’s textbook.

And have you ever noticed how deficits aren’t counter-cyclical ((Changes in the deficit and GDP growth have had opposite signs only 27% of the time since Eisenhower, if I have my sums right.))? Fiscal policy looks awfully discretionary.
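The footnote’s sign-counting exercise is simple to reproduce once you have the two series. The numbers below are made up purely to show the computation, not the actual postwar data:

```python
def opposite_sign_share(deficit_changes, gdp_growth):
    """Fraction of periods in which the change in the deficit and GDP
    growth have opposite signs — what counter-cyclical fiscal policy
    predicts (deficits rise when growth falls, and vice versa)."""
    pairs = list(zip(deficit_changes, gdp_growth))
    opposite = sum(1 for d, g in pairs if d * g < 0)
    return opposite / len(pairs)

# Hypothetical series, NOT the real deficit and GDP data:
deficit_changes = [0.5, -0.2, 0.3, 0.1, -0.4, 0.2]
gdp_growth      = [2.0,  1.5, -1.0, 3.0, 2.5, -0.5]
print(opposite_sign_share(deficit_changes, gdp_growth))
```

Run this on the actual annual series since Eisenhower and you can check the 27% figure for yourself.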

You might interpret the actions of the Fed last year as discretionary, but that entirely depends on what rule (or which rule-making framework) the public thinks the Fed was using. If the public thought the Fed was targeting inflation, then the Fed’s actions may not have looked discretionary at all. If it thought the Fed was doing something because something had to be done, then, yeah, the Fed’s actions looked discretionary. It may be, too, that last Fall was so unprecedented any action on the part of the Fed would look like it was acting with discretion. On the other hand, if the Fed was seen as acting in its prescribed role as lender of last resort, and so not acting expediently, its policy credibility may still be intact. It’s hard to tell, but a spontaneous dirty fight between economists — and their political slaves ((Yes, my tongue is firmly planted in cheek)) — about the relative merits of fiscal and monetary policy during that time didn’t do much for the credibility of policy making.

In any case, Professor Delong says, “So when the financial crisis began in the summer of 2007, we Pragmatists largely ignored the Purists, for they seemed to have nothing to say.” There’s been a lot of progress in macro since the Puritan 70’s. The professor has seemed eager to attack finance people, and he debated a theorist here in Davis, but as this post evidences he has had limited engagement with modern policy-oriented macroeconomics.

And this is all an attempt to change the subject. For standard counter-cyclical policy, given a choice between effective monetary policy and effective fiscal policy, monetary policy wins hands down (and I’ll go out on a limb and say Prof. Delong’s hand would be down too). Monetary policy is faster and, when you consider policy expectations, more efficient. The current debate is over whether or not monetary policy can be effective right now. Are we in a liquidity trap? Is the Fed pushing on strings? Theory says no; monetary policy can be effective even when short-term nominal interest rates are zero. What does the evidence say? Well, there haven’t been many instances where those interest rates were zero, but monetary forces got us out of the Depression. And so far we’ve managed to stay out of a deflationary spiral, so chalk one up for monetary policy at the zero lower bound.

Macroeconomic forecasting

You have four options:

  1. Simple statistics (i.e. use lagged values to predict future values)
  2. Complex statistics (e.g. VARs)
  3. Model the economy and get forecasts from the model
  4. Use the average from lots of models (e.g. ask the experts and take an average)

Surprisingly, (1) almost always beats (2). If you wanna do (1), it is pretty straightforward to do with Excel’s Analysis ToolPak.
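For the curious, option (1) takes only a few lines outside Excel too. Here’s a self-contained AR(1) fit by ordinary least squares on lagged values, forecasting one step ahead (the series is made up):

```python
def ar1_forecast(series):
    """Fit y_t = a + b * y_{t-1} by ordinary least squares on the
    lagged values, then forecast the next value of the series."""
    x = series[:-1]          # lagged values
    y = series[1:]           # current values
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a + b * series[-1]

# Made-up quarterly GDP growth rates:
growth = [3.1, 2.8, 2.9, 3.3, 3.0, 2.7, 2.9]
print(round(ar1_forecast(growth), 2))
```

That’s the whole “simple statistics” method: lag the series, run a regression, extrapolate.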

Some models, like large DSGEs, do better than others, like old-school macroeconometric models. This Fed working paper compares the forecasting methods used by the Fed. It finds that, at least for real variables like GDP, a DSGE model does better than staff forecasts (method 4), better than an old-fashioned ad hoc model, and better than sophisticated multivariate statistical methods.

That said, the DSGE model doesn’t do much better than simple statistics (method 1). This implies, of course, that simple statistical methods forecast better than the Fed staff. In other words, (1) weakly dominates the other three “more sophisticated” forecasting methods. This line from the paper kills me: “[A] comparison to existing methods at the Federal Reserve [i.e. staff forecasts and the macroeconometric models] is more policy relevant than a comparison to AR and VAR forecasts [i.e. the simple and more sophisticated statistical methods, respectively], in part because Federal Reserve forecasts have not placed much weight on projections from these types of models.” Even though they’re no better at forecasting than simple statistical techniques, experts are relied on exclusively.

In defense of DSGE models, though, even if they don’t forecast better than simple statistical models, because they tell an economic story, they’re more policy relevant.

Was Newtonian Physics refuted?

John Quiggin is loving the attention he’s getting for his series of posts labeled “refuted economic dogmas”. These refutations have left me unconvinced. Often, when I know a little about the “dogma” being refuted, I find myself suspecting Prof. Quiggin doesn’t really appreciate the nuances. For example, in his refutation of central bank independence, he conflated independence with inflation targeting and independence with the lack of cooperation between governments and central banks.

Another example: Quiggin argues New Keynesian Macroeconomics has been refuted by recent events. He defines New Keynesian Macroeconomics as the analysis of small deviations from Adam Smith’s ideal economy. This is not New Keynesian Macro. NK macro models a number of serious deviations from the ideal: imperfect goods markets, imperfect labor markets, sticky prices and sticky wages are standard features of these models. It’s interesting that these serious deviations from the Smithian ideal create, by construction, a role for policy to improve macro outcomes, and yet, despite these deviations, the effect of policy is minor.

The problem with these models is that they can only analyze small deviations from these Keynesian economies’ steady states. Shocks to the system are assumed to be small, and “small” isn’t defined. It could be, in theory at least, that the kinds of shocks experienced in the actual economy are bigger than the small shocks analyzed in the model. This would mean the results from the model don’t tell us much about the real economy.

In particular, it could be that the shocks the economy has experienced in the current recession are too big for the NK framework to speak to. I agree, then, that this feature of these models is troubling. However, we can’t know if the shocks are too big. To use these models, we assume they’re not.

The standard rebuttal to this argument against using NK models, then, is that without an alternative framework of analysis — one that does at least as good a job of replicating key facts about the economy — we can’t do better than NK models. Yes, they’re not a perfect fit to reality, but they’re the best we’ve got. And until we have an alternative, this rebuttal seems persuasive to me.

BTW, Delong says that because NK models don’t tell us everything, they tell us nothing. This argument is just silly. Really very silly.

“If the Fed is God, …”

Many (most… all…) claims about monetary policy contain some predicate, explicit or not, that the monetary authority actually can have real effects (e.g. Hamilton, “If you think that the Federal Reserve is responsible for more than 15-20% of the variation in the CPI, …”). This seems to be a pretty important assumption. Why don’t I know if it’s been tested or not?

I guess one way it’s tested is to produce models that don’t have monetary policy but explain big chunks of the data. Is this what RBC was all about? Does a current strand of the literature continue this line of research?

The Great Depression was a test of this assumption, but I want more than narrative evidence (i.e. more than one data point).