A neat application of Robustness

Robustness is a fledgling literature in Macro. The primary concern of robust analysis is that we don’t know the exact model of the economy and this model uncertainty has policy implications. Also, the math is neat.

Uncertainty about what the correct model is causes optimal policy to give heavy weight to worst-case scenarios.

Ellison and Sargent found a pretty neat application of robustness. The Fed staff are a bunch of academics who believe they know the true model of the economy. They use the “true” model to make forecasts and those forecasts are usually really good. The FOMC is the policy-making branch of the Fed. They take the staff’s forecasts, produce their own forecasts and then set policy. It turns out the FOMC’s forecasts are worse than the staff’s. Those dumb policymakers, right?

Wrong. It turns out that because the FOMC is uncertain about the true model of the economy, they won’t take the staff’s model as the true model. The optimal response to this uncertainty would lead them to worry more about worst cases. As Ellison and Sargent say, the FOMC “can be bad forecasters and good policymakers”.
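To see the mechanism, here’s a minimal min-max sketch in Python. The linear “economy”, the model set and all the numbers are my own illustration, not anything from Ellison and Sargent’s paper: a trusting policymaker minimizes loss under the staff’s model alone, while a robust policymaker minimizes the worst-case loss over a set of models they can’t rule out.

```python
# A minimal sketch of the min-max logic, assuming a toy linear economy.
# The model set and the numbers are made up for illustration, not from the paper.

# Each candidate model maps a policy rate r into next-period inflation: pi = a + b * r.
models = {
    "staff_baseline":  (3.0, -1.0),   # the staff's preferred model
    "sticky_prices":   (3.5, -0.7),   # an alternative the FOMC can't rule out
    "flexible_prices": (2.5, -1.4),   # another alternative
}

TARGET = 2.0  # inflation target

def loss(r, model):
    a, b = model
    return (a + b * r - TARGET) ** 2

policies = [i / 100 for i in range(0, 501)]  # candidate policy rates, 0.00 to 5.00

# A trusting policymaker takes the staff model as true and minimizes loss under it alone.
trusting = min(policies, key=lambda r: loss(r, models["staff_baseline"]))

# A robust policymaker minimizes the worst-case loss across the whole model set.
robust = min(policies, key=lambda r: max(loss(r, m) for m in models.values()))

# The robust policymaker's implicit forecast comes from the worst-case model,
# so it looks "wrong" relative to the staff's even though the policy is prudent.
worst_model = max(models.values(), key=lambda m: loss(robust, m))
print(f"trusting policy rate: {trusting:.2f}")
print(f"robust policy rate:   {robust:.2f}")
print(f"robust worst-case inflation forecast: {worst_model[0] + worst_model[1] * robust:.2f}")
```

The robust policymaker’s implicit forecast is the worst-case model’s prediction, so it looks biased relative to the staff’s, even though the policy it supports holds up better across the whole model set.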

It is profound and earth shaking…

… every time I run into a statement like this: “But all interesting models involve unrealistic simplifications, which is why they must be tested against data.”

We can know things a priori, but we can’t know how important those things are a priori. A model can tell us how a particular mechanism operates, but it can’t tell us how important that mechanism is. Effect size can only be determined by looking at the data.

I’m sure there’s something wrong with my brain that can’t internalize this fact. I just read Landsburg’s Big Questions and this was his major theme. (BTW, I liked the book and here’s a review at /.)

Mike in The Nation

“Let’s look at several of the problems that happened over the past few years in the financial sector, and see how legislative efforts have attempted to address them. (Spoiler alert: not very well.)”

Mike tells this story:

Our regulator’s goal isn’t to make a system in which there are never failures but a system in which failures are cleaned up in an orderly and nondisruptive fashion. Like an elaborate game of Jenga, even removing the smallest piece can collapse the entire structure, and regulators need to be able to remove any piece without having the entire real economy collapse.

This is a great story. Can it be rationalized? What objective function of regulators would lead them to aim to prevent “disruption”? A disruption now and again might be good by standard measures of welfare. Does Mike have a public choice model in mind? Do bailouts improve the chances of re-election?

Is the system as precarious as Mike suggests? Bailouts are sold to the public using a counterfactual that is rarely, if ever, observed. Namely, if the bailed-out institutions had been allowed to fail, it would have produced an undesirable level of systemic risk. When have failed banks caused systemic risk? The Great Depression had bank runs which caused a bad situation to get worse. But bank runs were far from the start of the causal chain. You need deflationary expectations, no deposit insurance and no branch banking to get those sorts of bank runs.

Besides, is the recent banking crisis evidence of the system’s precariousness or evidence against it? A long time passed between banking crises in the US.

UPDATE: What is systemic risk?

Experimental science isn’t “hard”

Angus Deaton on the “project evaluation” craze:

Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”. These rhetorical devices are just that; a metaphor is not an argument… thirty years of project evaluation in sociology, education and criminology was largely unsuccessful because it focused on whether projects work instead of on why they work.


The wholesale abandonment in American graduate schools of price theory in favor of infinite horizon intertemporal optimization and game theory has not been a favorable development for young empiricists. Empiricists and theorists seem further apart now than at any period in the last quarter century. Yet reintegration is hardly an option because without it there is no chance of long term scientific progress.

and after listing a number of papers that he thinks have a good mix of theory and data, he says:

In all of this work, the project, when it exists at all, is the embodiment of the theory that is being tested and refined, not the object of evaluation in its own right, and the field experiments are a bridge between the laboratory and the analysis of “natural” data.

Science is about finding underlying mechanisms; it’s not about testing hypotheses. And:

[H]eterogeneity is not a technical problem, but a symptom of something deeper, which is the failure to specify causal models of the processes we are examining. This is the methodological message of this lecture, that technique is never a substitute for the business of doing economics.

I use the “project evaluation” and “experiment” rhetoric in one of my papers. I might have to rethink the organization of that paper…

Thar be no data here!

I confess. I don’t get Gordon’s critique of modern macro.

Questions for Prof. Gordon:

  • If you think other things should be modeled, like volatile investment, why don’t you add them to the model?
  • Why should we scrap a modelling technique because you don’t think the right things were modeled? We don’t throw out hammers because they haven’t built the tallest building.
  • Did you know that DSGE models were the battlefield for the debate over how forward- or backward-looking inflation expectations are?
  • Did you know Campbell and Mankiw’s dumb agents have been incorporated into DSGE models? Also, many of the other “missing” features of DSGE you mention have actually been incorporated into that framework.
  • Shouldn’t we reject models because they can’t reproduce important features of the data, NOT because they don’t tell satisfying stories?
  • What features of the data do ALL models written in the DSGE framework fail to replicate?

He notes “much of Keynesian economics and what I call here “1978‐era macroeconomics” was designed to explain the set of impulses and propagation mechanisms that created and amplified the Great Depression”. I don’t get why we should develop a science around explaining one (or two) data points.

The bathwater

Because I stopped reading it at about the third paragraph where he starts listing all the things that macroeconomists do but claims they don’t do (e.g. financial channels, model robustness, unemployment), can someone explain to me the argument Krugman makes?

From Sumner’s critique it sounds like Krugman wants to go back to old-school Keynesian modeling. Cool. If we can test those models and they have more to say than our current models, I’m down. Anyway, I gather from Sumner’s post that Krugman says we should make this move because modern macroeconomics has nothing to say about depressions and deflations and the zero lower bound.

How does he get from the failure of models to understand once-in-three-generations events to the failure of models in between those events? I mean, couldn’t we have two theories? One for the 95% of the time when the lower bound doesn’t bind and another for when it does?

I’m probably wrong because I can’t make heads or tails out of the General Theory, but confusing this point — thinking Keynes’ policy prescriptions for getting out of the Depression applied outside moments of depression — led to the failure of post-war macroeconomics. It led to terrible monetary policy and the mistaken idea that the Fed could fine-tune the economy.

Macroeconomic forecasting

You have four options:

  1. Simple statistics (i.e. use lagged values to predict future values)
  2. Complex statistics (e.g. VARs)
  3. Model the economy and get forecasts from the model
  4. Use the average from lots of models (e.g. ask the experts and take an average)

Surprisingly, (1) almost always beats (2). If you wanna do (1), it’s pretty straightforward in Excel’s Analysis ToolPak.
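If you’d rather not use Excel, here’s a minimal sketch of option (1) in Python. The quarterly growth numbers are made up for illustration; the only point is that an AR(1) fit by ordinary least squares takes a handful of lines.

```python
# A minimal sketch of option (1): forecast with lagged values via an AR(1).
# The quarterly GDP growth numbers below are made up for illustration.
import numpy as np

growth = [3.1, 2.4, 1.8, 2.9, 3.3, 2.1, 1.5, 2.7, 3.0, 2.2]

# Fit an AR(1) by ordinary least squares: y_t = c + phi * y_{t-1} + e_t
y = np.array(growth[1:])
x = np.array(growth[:-1])
X = np.column_stack([np.ones_like(x), x])
c, phi = np.linalg.lstsq(X, y, rcond=None)[0]

# The forecast is just the fitted equation fed the last observed value.
forecast = c + phi * growth[-1]
print(f"AR(1) fit: c = {c:.2f}, phi = {phi:.2f}")
print(f"next-quarter forecast: {forecast:.2f}")
```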

Some models, like large DSGEs, do better than others, like old-school macroeconometric models. This Fed working paper compares the forecasting methods used by the Fed. It finds that, at least for real variables like GDP, a DSGE model does better than staff forecasts (method 4), better than an old-fashioned ad-hoc model, and better than sophisticated multivariate statistical methods.

That said, the DSGE model doesn’t do much better than simple statistics (method 1). This implies, of course, that simple statistical methods forecast better than the Fed staff. In other words, (1) weakly dominates the other three “more sophisticated” forecasting methods. This line from the paper kills me: “[A] comparison to existing methods at the Federal Reserve [i.e. staff forecasts and the macroeconometric models] is more policy relevant than a comparison to AR and VAR forecasts [i.e. the simple and more sophisticated statistical methods, respectively], in part because Federal Reserve forecasts have not placed much weight on projections from these types of models.” Even though they’re no better at forecasting than simple statistical techniques, experts are relied on exclusively.
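For the curious, here’s a minimal sketch of how these forecasting horse races are usually scored: expanding-window pseudo-out-of-sample forecasts compared by root mean squared error. The data series and the two competing rules are invented for illustration; the Fed paper’s exercise is far more elaborate.

```python
# A minimal sketch of a forecast comparison: expanding-window pseudo-out-of-sample
# forecasts, scored by RMSE. The series and the two competing rules are made up.
import numpy as np

rng = np.random.default_rng(0)
y = 2.0 + np.cumsum(rng.standard_normal(80)) * 0.1 + rng.standard_normal(80) * 0.5

def ar1_forecast(history):
    """Fit y_t = c + phi * y_{t-1} by OLS on the history, forecast one step ahead."""
    h = np.asarray(history)
    X = np.column_stack([np.ones(len(h) - 1), h[:-1]])
    c, phi = np.linalg.lstsq(X, h[1:], rcond=None)[0]
    return c + phi * h[-1]

def mean_forecast(history):
    """A stand-in for a fancier competitor: forecast with the sample mean."""
    return float(np.mean(history))

errors = {"AR(1)": [], "sample mean": []}
for t in range(40, len(y)):          # start forecasting once we have 40 observations
    history = y[:t]
    errors["AR(1)"].append(y[t] - ar1_forecast(history))
    errors["sample mean"].append(y[t] - mean_forecast(history))

for name, e in errors.items():
    rmse = np.sqrt(np.mean(np.square(e)))
    print(f"{name}: out-of-sample RMSE = {rmse:.3f}")
```

Swap in real data and real competitors and you have the skeleton of that kind of comparison.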

In defense of DSGE models, though: even if they don’t forecast better than simple statistical models, they tell an economic story, and that makes them more policy relevant.

The Making of an Economist

So if you like catching people in gotcha moments… here’s a post of mine where I urge macroeconomists to discover non-rationalizing models. Ahh, so young and new to the world!

The difference between the 2007 version of Will Ambrosini and the new and improved version is the experience of attending dozens of macro seminars. It’s hard enough to get your head around this stuff when everyone is on the same page as far as modelling assumptions go. If you opened the floodgates, the already limited “real” communication going on in the macro research community would quickly go to zero.


There have been a couple of criticisms floating about regarding how macroeconomists do their thing. Here’s Matt Yglesias wondering about Levitt wondering about microfoundations. And here’s Ezra Klein turning, unwittingly I think, Cowen’s newest paper into an ideological flamewar about… rational expectations… of all things.

These criticisms are kinda funny. They’re a bit like going after the President for the order in which he puts his legs into his pants in the morning. As in… who cares how macro folks get their job done, as long as they get their job done. Yeah, I get it. Macro failed to predict the crisis and there was too much hubris and blah blah blah, but I don’t see how dredging through the minutiae of day-to-day macro research is going to fix those problems. If you think we should be seers, say so. Let us decide the shape and color of our crystal balls.

But I’ll defend rational expectations and microfoundations anyway. These — like Occam’s razor does in the rest of science — bring discipline to macroeconomics. They’re a common language and they make progress in research more transparent.

Rational expectations is just an assumption about the psychology of agents in the model. By insisting on rational expectations, we get a common set of psychological assumptions across models. This is good for two reasons. First, economists know about economics and we don’t know much about psychology (or social psychology or whatever the appropriate level of abstraction is). Given this, we choose to fight on the margin we know about.

Second, if every macro researcher had his own psychological assumptions, we wouldn’t know whether, when two models conflicted in their implications, it was because of those different assumptions or because of other features of the models. By limiting the set of possible assumptions, we limit the set of theories to a more manageable size. This makes the game much easier for us cognitively constrained macroeconomists to play. Just imagine what chess would be like to play (or watch) if each player was allowed to make his own rules about how the various pieces could move.

But psychological assumptions only make sense in the context of microfounded modelling. Models without microfoundations aren’t required to have agents, and so psychology doesn’t even have to play a role. Again, this discipline limits the set of allowable macro theories to one small enough that it’s even possible to have a unified vocabulary for talking about them. It takes years for a PhD student to get her head around this vocabulary and many students never master it (I’m certainly not even close). So all of you calling to expand the set of possible theories, please think of the children!

(Notice I haven’t even mentioned the Lucas Critique. It was an important impetus for the introduction of microfoundations in the ’60s and ’70s, but they’ve taken on a life of their own since then. And the demonstration of the unstable nature of the Phillips curve is an important milestone in the history of macroeconomic thought, but microfoundations live on because they continue to provide value as a disciplining mechanism.)

The second function of these disciplining devices is that they provide a way to mark progress in the field. If you can build a microfounded, rational expectations model that rationalizes seemingly irrational behavior or displays market failure, then, well, you’ve really done something. Presumably, by building such a model, you’ve supplanted an ad-hoc theory built on some sort of irrationality. Explanations based on irrationality are, inevitably, just-so stories.

“Why did we have a crisis? Well, subprime lenders/borrowers were stupid.” Doesn’t tell you much, does it.

Stories that depend on irrationality are also a sort of god-of-the-gaps. Progress in macro research, as it rationalizes more and more, makes such ad-hoc theories obsolete in the same way that progress in science makes the explanation “God willed it so” obsolete. Ad-hoc theories aren’t wrong, but because they’re less universal, they’re less satisfying.

Maybe you don’t have the same aversion to ad-hoc macro stories based on irrationality as you do to stories about the natural world that depend on the will of God. I do and, more importantly, so do most macroeconomists.