Macroeconomic forecasting

You have four options:

  1. Simple statistics (i.e. use lagged values to predict future values)
  2. Complex statistics (e.g. VARs)
  3. Model the economy and get forecasts from the model
  4. Use the average from lots of models (e.g. ask the experts and take an average)

Surprisingly, (1) almost always beats (2). If you want to do (1), it is pretty straightforward to do in Excel's Analysis ToolPak.
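If you'd rather not use Excel, here's a minimal Python sketch of what option (1) amounts to: fit an AR(1) on lagged values by ordinary least squares and iterate it forward. The series and numbers below are made up for illustration; this is not from the paper.

```python
# A rough sketch of option (1): fit an AR(1) by ordinary least squares and
# iterate it forward. The series below is illustrative; swap in your own data.
import numpy as np

def ar1_forecast(y, horizon=1):
    """Fit y_t = c + phi * y_{t-1} by OLS, then forecast `horizon` steps ahead."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])   # regressors: constant, y_{t-1}
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    path, last = [], y[-1]
    for _ in range(horizon):
        last = c + phi * last
        path.append(last)
    return path

# Quarterly GDP growth, percent (made-up numbers)
growth = [0.6, 0.8, 0.5, 0.7, 0.9, 0.4, 0.6, 0.7]
print(ar1_forecast(growth, horizon=2))
```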

Some models, like large DSGEs, do better than others, like old-school macroeconometric models. This Fed working paper compares the forecasting methods used by the Fed. It finds that, at least for real variables like GDP, a DSGE model does better than staff forecasts (method 4), better than an old-fashioned ad hoc model, and better than sophisticated multivariate statistical methods.

That said, the DSGE model doesn't do much better than simple statistics (method 1). This implies, of course, that simple statistical methods forecast better than the Fed staff. In other words, (1) weakly dominates the other three "more sophisticated" forecasting methods. This line from the paper kills me: "[A] comparison to existing methods at the Federal Reserve [i.e. staff forecasts and the macroeconometric models] is more policy relevant than a comparison to AR and VAR forecasts [i.e. the simple and more sophisticated statistical methods, respectively], in part because Federal Reserve forecasts have not placed much weight on projections from these types of models." Even though they're no better at forecasting than simple statistical techniques, the experts are relied on exclusively.
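For concreteness, this is roughly how a forecast horse race like the paper's gets scored: refit each method on the data available at each date, forecast one step ahead, and compare root-mean-square errors. Below is a minimal sketch under those assumptions, with a synthetic series and a naive random walk standing in for "simple statistics"; the actual paper uses real-time data and more elaborate models.

```python
# A hedged sketch of a rolling one-step-ahead forecast comparison scored by
# root-mean-square error (RMSE). Series, window, and methods are illustrative.
import numpy as np

def rolling_rmse(y, forecaster, window=20):
    """Refit on each expanding sample of at least `window` points,
    forecast one step ahead, and return the RMSE of those forecasts."""
    y = np.asarray(y, dtype=float)
    errors = [forecaster(y[:t]) - y[t] for t in range(window, len(y))]
    return float(np.sqrt(np.mean(np.square(errors))))

def naive(history):
    # random walk: tomorrow = today
    return history[-1]

def ar1(history):
    # AR(1) fit by OLS, one-step-ahead forecast
    X = np.column_stack([np.ones(len(history) - 1), history[:-1]])
    c, phi = np.linalg.lstsq(X, history[1:], rcond=None)[0]
    return c + phi * history[-1]

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, 80))   # synthetic trending series
print("naive RMSE:", rolling_rmse(y, naive))
print("AR(1) RMSE:", rolling_rmse(y, ar1))
```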

In defense of DSGE models, though: even if they don't forecast better than simple statistical models, they tell an economic story, which makes them more policy relevant.

9 thoughts on “Macroeconomic forecasting”

  1. Also, I guess it’s worth pointing out that for “normal times” pretty much everything is good enough, maybe subjectively speaking, and that for “unusual times”, like present times, they all suck donkey ass.

    The promise of deep, structural, microfounded modeling was the promise that we would be able to forecast what the car will do at an intersection (left vs. right vs. forward), not on a highway (forwards only).

  2. Your first statement is contradicted by the paper I link to and by Smets and Wouters (2007), for example. DSGE beats VAR and is a little bit better than equation-by-equation estimation. Your second paragraph is just not true: the promise of such models was to do good policy analysis, and that's not the same thing as forecasting.

  3. How much better are you doing by moving, for example, from VAR to DSGE? Does it matter for most applications? For most questions, assuming tomorrow = today + trend will do "good enough".

    Re: #2, I AM talking about policy analysis… “intersection” = change in policy or other structural break. That’s why you need the “deep” parameters.

  4. Also, I should mention that our models, as do all scientific models, make predictions about distributions of data. Single draws from true distributions, single data points, CAN’T refute theory, any theory in any science.

    Even in experimental sciences, one experimental result won’t refute a theory. The experiment would have to be repeated to do so. And even then, the theory isn’t refuted until a new better theory, one that better fits the data, comes along.

    To fit the data better, perhaps current models need to incorporate financial multipliers or they need to become more disaggregated… I don’t know. If the second is true, maybe we need to admit that macroeconomics doesn’t exist (or some equally dramatic statement to pacify the nay-sayers) and we need faster computers. But to say the whole enterprise needs to be thrown out because of one data point is crazy talk.

  5. “good enough”

    I don’t know what you mean. This is science. There is no “good enough”.

    DSGEs are better for forecasting than experts and VARs. Thus, we should use them for forecasting. They didn’t predict this downturn and they’re not perfect. So?

  6. Good enough for policy or whatever you want to use these tools for.

    My point? a) Normal times are different from “unusual” times; b) policy-wise, “unusual” times are far more important; c) ALL of our tools do badly in “unusual” times.

  7. My point for going on about distributions was that all points in time are unusual. Anyway, the distinction between usual and unusual doesn’t really get us anywhere. If “unusual” is defined by regime change or something unrelated to our models, then we can model unusual times and thus improve our models. If “unusual” is defined by when our models don’t work, then by definition our tools are bad during unusual times. And they always will be, no matter what they are. But this is exciting, because it’s when our models don’t work that we learn something (or there are opportunities to write papers and get tenure).

  8. It would be very hard to sell the idea that a) our current models capture the true structure of the economy; and b) recent events are caused by really really unfavorable realizations of the exogenous shocks.

    Maybe it’s just me, but I find it easier to believe that our models are missing important features of the structure of the economy, features that become quantitatively relevant in abnormal circumstances (when some constraint binds, “panics” and whatnot).

    A “panic” model might capture recent dynamics very well but it might do poorly on 1976-2005, hypothetically speaking.

    So, yes, yes, improve models, DSGE is the only credible game in town and all that.

    I’m just saying… I don’t know exactly how much trust one approach or another is entitled to, on the basis of forecasting results such as those in the paper. Maybe I’m just confused, who knows.
