Local analysis in macroeconomics

The substantive piece of John Quiggin’s criticism of New Keynesian macro is that the analysis is limited to looking at deviations of the economy around some set point. If the economy gets far away from that point, the analysis may not apply. His mistake was assuming the set point is the Smithian or neoclassical ideal; in NK models it isn’t.

Nevertheless, to analyze these models, we do local analysis. Everybody does local analysis, so this criticism actually applies not just to New Keynesian macro but to most modern macro. I’ve actually been worried about this problem since I realized this is what I was doing when I solved those models.

Ken Judd — Stanford’s computational economics god — gave a seminar at Davis yesterday and he made me feel much better about these techniques. While discussing this paper, he defended so-called perturbation methods by appealing to the authority of physics: they do it, so it’s OK for us to do it. Apparently, to solve the equations of general relativity, physicists perturb the system around the no-mass solution. In other words, the set point they do local analysis around is a universe with no mass in it! Not only that, because they’re doing local analysis (and, like in economics, “local” is undefined), their analysis suggests the universe is stable only up to an undetermined point in time. Outside that time interval, the local area of analysis, the results of general relativity may not apply ((BTW, IANAP (physicist) and neither is Judd, but he’s a smart guy and, as long as I didn’t misunderstand him, I believe what he told us. Corrections are welcome.)). Yikes!

That said, there are still problems with the way most macro folks do local analysis — we linearize when we probably should use higher-order local approximations ((Although, a professor told me there are very few cases where linearization can lead you astray. Unless you’re studying precautionary motives or other behaviors that depend on second moments, linearization should be OK.)) — and if all of us picked up a copy of Judd’s textbook (and read it!) we’d be better off.
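To make the linearization point concrete, here’s a minimal sketch of my own (not something out of Judd’s book) using the one textbook case where the exact decision rule is known in closed form: the Brock–Mirman growth model with log utility and full depreciation, where next period’s capital is k' = αβk^α. Comparing first- and second-order Taylor approximations around the steady state shows both why higher-order terms help and why “local” really does mean local.

```python
# A minimal sketch (mine, not from Judd's book): local Taylor approximation
# of a known policy function. Model: Brock-Mirman growth model with log
# utility and full depreciation, whose exact decision rule is
# k' = alpha*beta*k**alpha. Parameter values are illustrative only.
import numpy as np

alpha, beta = 0.33, 0.96

def policy(k):
    """Exact decision rule for next period's capital."""
    return alpha * beta * k**alpha

kss = (alpha * beta) ** (1.0 / (1.0 - alpha))   # steady state solves k = alpha*beta*k**alpha

# Analytic derivatives of the policy function at the steady state
g1 = alpha * alpha * beta * kss ** (alpha - 1)                 # first derivative
g2 = alpha * (alpha - 1) * alpha * beta * kss ** (alpha - 2)   # second derivative

def taylor(k, order):
    """Perturbation-style approximation of the policy around the steady state."""
    dk = k - kss
    out = policy(kss) + g1 * dk
    if order >= 2:
        out += 0.5 * g2 * dk**2
    return out

for k in [0.9 * kss, 0.5 * kss, 0.1 * kss]:     # move further from the "set point"
    exact = policy(k)
    print(f"k/kss={k / kss:.1f}  exact={exact:.4f}  "
          f"linear error={abs(taylor(k, 1) - exact):.4f}  "
          f"quadratic error={abs(taylor(k, 2) - exact):.4f}")
```

Near the steady state the linear rule is essentially exact; far from it the second-order term buys you quite a bit, and even that eventually breaks down.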

Ideology in macroeconomic models?

From a good book review of a book I’ll never read:

The early pioneers made negative arguments. Because of unquantifiable uncertainties (Knight), the fragmentation of knowledge (Hayek), the lack of timely data (Simons and Friedman), and/or the lack of suitable motivation (Buchanan), we cannot expect policymakers to engineer results that are superior to those that emerge spontaneously in a competitive market economy. With such negative arguments, it was difficult to attract adherents in large numbers… Lucas, by contrast, offered a positive argument. He brought laissez-faire into play up front as a modeling technique, rather than saving it as a possible policy recommendation. As a consequence, the macroeconomic modeler of the late 1970s and early 1980s could make full use of the mathematical techniques already in the economist’s tool box, could learn some new modeling techniques that were part and parcel with new classicism, and could possibly develop still more techniques to push the envelope of this new mode of theorizing. Devising so-called fully articulated artificial economies, calibrating the models on the basis of actual movements in real-world macroeconomic magnitudes, subjecting the model economies to hypothetical shocks, and making predictions on this basis occupied many practitioners…. Further, an ideological taint attaches to Lucas’s new classicism… because the tools that Lucas borrowed from Friedman were themselves influenced by Friedman’s ideological commitment to laissez-faire.

Get that? Lucas’ methods — a label I’d give to all methods of modern macro — are ideologically tainted because someone who once used them was an ideologue and because those methods happen to produce (under some sets of assumptions) laissez-faire policy prescriptions.

This is why economics is hard. There’s a mysterious political smell to our models that other sorts of models don’t have.

“Einstein was a Republican so relativity is a back-handed conservative conspiracy!”

Even though the structure and workings of economic models are right there for everyone to see, they’re assumed to be twisted by ideology; their outcomes invalid under some other world view. Crazy.

Wanted: equilibrium mass psychology theory

Results from psychology or experimental economics don’t (as in DO NOT) translate directly into equilibrium behavior. This is because one person’s psychological bias may cancel out another person’s bias. Or even if people have anomalies that systematically bias behavior, the presence of a few non-biased actors may reverse the bias of the rest.

Psychologists usually report the systematic sorts of biases, so there aren’t very many examples of the first kind of equilibrium outcome in that literature. That said, examples of human imprecision are apparent. As individuals, for example, we’re imprecise estimators of value. For every potential widget buyer who overestimates the value of a widget to him, there’s a potential widget buyer who underestimates its value to her. The price will be set somewhere in between and would be equal to the price set if the buyers didn’t make mistakes ((needs cite… I’m sure I’ve seen results like this somewhere)).
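Here’s a minimal simulation of that intuition; the setup is a toy of my own, not the result I can’t find the cite for. Every buyer values the widget at the same true amount but perceives it with mean-zero noise, and supply is half the number of buyers, so the clearing price is the median bid. The mistakes wash out as the market gets big.

```python
# Toy sketch of the "errors cancel" claim. Assumptions (mine): a common true
# value, mean-zero perception noise, and supply equal to half the number of
# buyers, so the market-clearing price is the median bid.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

for n_buyers in [10, 100, 10_000]:
    perceived = true_value + rng.normal(0.0, 2.0, size=n_buyers)  # noisy valuations
    clearing_price = np.median(perceived)   # supply = n/2 units, so the median bid clears
    print(f"{n_buyers:>6} buyers: clearing price = {clearing_price:.2f} "
          f"(no-mistakes price = {true_value:.2f})")
```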

On the other hand, psychologists and experimental economists love to report systematic biases, so-called anomalies. The examples of these are numerous. Just today, The Economist cited evidence of the money illusion ((BTW, guys, the relevant parts of economics, i.e. macro, know all about money illusion and don’t deny its existence. I’m not sure why this one finding “refutes” anything.)) and here they are talking about regret. It turns out people feel buyer’s remorse soon after buying, but after a while they start to think they should have consumed more.

In the case of money illusion, we know — in, like, an empirical sense — that money is neutral in the long run; doubling money will double prices. In other words, there’s some mechanism at play that corrects for this psychological bias. In the case of the finding about regret, it’s not clear at all if this would have systematic effects. Does the long-term effect swamp out the short-term effect? Does either effect have real effects on consumption decisions?

One way to get these anomalies to have macro effects is to assume all agents have perfectly coordinated states of mind (e.g. every last investor becomes Chicken Little). This isn’t exactly a bleedingly obvious assumption, so my theory of equilibrium mass psychology sucks. Can you do better?

We can’t just assume individual instances of “irrationality” aggregate up into systematic irrationality. Given our general ignorance of these individual psychological effects, their aggregate properties and the mechanisms underlying this aggregation, a theory like rational expectations that assumes anomalies wash out seems prudent to me.

Now excuse me, I need to go read Vernon Smith.

Individual psychology != mass psychology

I haven’t been following the bail-out debate between Krugman and DeLong; there are dissertations to write. Jonah Lehrer claims to summarize the disagreement: Krugman thinks toxic assets really aren’t worth that much and DeLong thinks investors are risk averse en masse. Lehrer adds:

I think one way to evaluate these dueling positions is to look at how people generate perceptions of risk. Investors have concluded that these toxic assets are simply too risky to invest in, at least without large infusions of government money. How rational are these perceptions of risk? Are investors wary of buying toxic assets because they have good evidence that the toxic assets are virtually worthless? Or are they wary of these investments because they’re irrationally scared?

He then goes on to cite some evidence from psychology that people can misperceive risks.

The problem is that “investors” aren’t a person. “Investors” don’t have a psychology; they have psychologies. There’s no telling how those psychologies aggregate. For example, all it takes is one big non-DeLongian investor to fix the supposed high risk aversion of “investors”. This big, risk-loving investor would buy up all the toxic assets because he’d know they’re a steal.
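Here’s a toy version of that aggregation argument, with made-up numbers rather than anything from the actual debate: risk-averse investors bid their certainty equivalents (CARA utility is my assumption), one deep-pocketed risk-neutral investor bids the expected payoff, and the price is the highest bid. One such investor is enough to undo everyone else’s risk aversion; the only way the price stays low is if the expected payoff itself is low.

```python
# Toy numbers only, not anything from the Krugman/DeLong exchange: an asset
# pays `payoff` with probability p and zero otherwise. Risk-averse investors
# bid their CARA certainty equivalents; one deep-pocketed risk-neutral
# investor bids the expected value. The price is the highest bid.
import numpy as np

def cara_certainty_equivalent(payoff, p, risk_aversion):
    """Certainty equivalent of a (payoff with prob p, else 0) lottery under CARA utility."""
    expected_utility = p * -np.exp(-risk_aversion * payoff) + (1 - p) * -np.exp(0.0)
    return -np.log(-expected_utility) / risk_aversion

payoff = 100.0
for p in [0.6, 0.1]:                       # optimistic case vs. "these really are crappy assets"
    risk_averse_bid = cara_certainty_equivalent(payoff, p, risk_aversion=0.05)
    risk_neutral_bid = p * payoff          # the one big non-DeLongian investor
    price = max(risk_averse_bid, risk_neutral_bid)
    print(f"P(payoff)={p:.1f}: risk-averse bid={risk_averse_bid:.1f}, "
          f"risk-neutral bid={risk_neutral_bid:.1f}, price={price:.1f}")
```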

The fact that no such investor has materialized is support for Krugman’s point of view. These are crappy assets.

DeLong’s response is most likely something about the government (i.e. the U.S. federal government) being the only investor big enough to ride out the wave of pessimism. To which I say, really? There are no big buy-and-hold institutional investors? There are no sovereign wealth funds? All the PE funds have suddenly lost their long horizons? There are no other big governments?

Is DeLong going to invest in these assets? He has tenure and he’s far away from retirement.

For mirror polishing

Eric Rauchway has an interesting post on objective historians or objective history making (or whatever the work of historians is called). He says you can’t do it, so you shouldn’t. To me, he’s mixing method and purpose. Of course you have purpose when you’re doing research, and this purpose colors your method. If you think FDR and the New Deal were the greatest thing since sliced bread, this will tend to make you favor facts that support that conclusion.

Objectivity is a discipline, a tool, for exploring reality. It’s not the end, only the means. Rauchway believes historians shouldn’t attempt to separate purpose and method, that they shouldn’t attempt objectivity, because perfect separation is impossible. This is like saying athletes shouldn’t practice because not everyone can be Michael Jordan.

The completely objective person is an instrument; he doesn’t have a soul. This was Nietzsche’s point when he called the scientists of his day self-polishing mirrors. The point, as should be obvious, isn’t that one shouldn’t polish one’s mirrors. Polishing is OK; just being a mirror isn’t.

At the recent “Stimulus Smackdown” here at Davis, Rauchway got up to ask the panel a question. Before he did so, he produced the throwaway line “I’m a historian; we don’t do models.” I know he was joking, but this is completely ridiculous. Of course they do models; they just don’t write them down explicitly. This makes the job of objectivity hard, but I guess it allows the historian to be more whimsical. As a consumer of his product, of history, I’m not sure what his whimsy buys me, though.

Adam and Eve in the garden of Friedman

You’ve heard before that Christina “I heart fiscal policy” Romer wrote a paper claiming monetary policy ended the Great Depression and that it ends recessions in general. I thought this criticism of the latter paper was interesting:

Romer and Romer completely ignore all of this literature. There is not a mumble of an apology in the direction of Tobin and Solow’s methodological concerns, much less their formal statements by Sims and others. Despite its fundamental importance for identification, there is not a hint of a reference to monetary theory, even David Romer’s thesis or the collection of papers in his book with Greg Mankiw (1991). The empirical findings of the huge VAR literature go unmentioned (with one lonely exception). The paper reads as if Romer and Romer are the first to ever examine recognition, decision, and action lags at the Federal Reserve.

The underlying economics, like the empirical methods, is straight from the 1960s: The paper does not ask whether the economy returns to a natural rate without policy intervention; the 1970s challenge that systematic policy might have no real effects is not even dismissed, to say nothing of the 1980s challenge from stochastic growth models that not even the beginnings of recessions need policy shocks.

The omission is so glaring it must be intentional. Here is my — quite sympathetic — interpretation. The last 30 years of macroeconomics are difficult, and the period hasn’t provided firm answers to the earlier questions. VARs address Tobin and Solow’s criticisms, but lots of problems remain. One has to identify shocks from the residuals, consider the potential effects of omitted variables, and worry about whether the AR representation, MA representation, or some combination is policy invariant. Identification isn’t easy. The empirical results are sensitive to specification; the standard errors are big, and one ends up with the impression that the data really don’t say much about the effects of monetary policy, which may in fact be true. Theoretical models seem equally sensitive to assumptions and do not connect easily with empirical work.

We’ve been at this over 30 years, and look how little progress we have made toward answering such simple questions! Can understanding monetary policy really be so difficult? Why don’t we just throw all the formal methodology overboard and go read the history of obvious episodes and see what happened? If, like me, you have struggled with even the smallest VAR, this approach is enormously attractive.

Perhaps this is Romer and Romer’s motivation. But if so, I think that Romer and Romer are falling into the same trap that ensnared the rest of us. Perhaps they started with a desire to just look at the facts. But then they wanted to make quantitative statements. How much would output have changed if the Fed followed a different policy? To do so, they reinvented the St. Louis Fed approach, an econometric technique. Despite the desire to “do something simple” (David Romer, during the discussion), they in fact evaluated policy from the autoregressive representation of an output-fed funds VAR. Now they face Tobin and Solow’s classic causal and identification problems, which cannot be addressed by quotes from FOMC meetings.

Adam and Eve in the garden of Friedman, they have taken one bite of the forbidden econometric fruit. But the serpent (me) is still there, whispering “go ahead, just add a few more variables;” “you can fix that, just put in a Fed reaction function;” “Why don’t you write down a few structural models and verify what your regressions are picking up?” I don’t see how they can resist taking bite after bite, until they are cast out of the garden, explicitly running VARs, and working hard for identification with the rest of us.

This sounds like the dialog in my head when I’m reading Prof. Kling. Then there’s this line: “VAR methods did not evolve as recreational mathematics. They evolved as the best response a generation of talented economists could come up with to genuine and serious concerns.” The same goes for DSGE models and modern theory. Macroeconomics is difficult and it’s frustrating that we don’t know more.
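For anyone who hasn’t struggled with even the smallest VAR, here’s roughly what “the autoregressive representation of an output-fed funds VAR” amounts to in code. This is a sketch, not anything from the papers under discussion: the data below is simulated placeholder data, and the Cholesky ordering is a conventional assumption, i.e. exactly the kind of identification judgment call the quote is complaining about.

```python
# Minimal sketch of an output-fed funds VAR: toy simulated data, a
# reduced-form VAR from statsmodels, and orthogonalized impulse responses
# identified by a Cholesky ordering (output first). Lag length and ordering
# are arbitrary choices here, made only for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
output_growth = np.zeros(T)
fed_funds = np.zeros(T)
for t in range(1, T):   # toy data-generating process, not estimates of anything
    output_growth[t] = 0.5 * output_growth[t-1] - 0.2 * fed_funds[t-1] + rng.normal(0, 1.0)
    fed_funds[t] = 0.8 * fed_funds[t-1] + 0.3 * output_growth[t-1] + rng.normal(0, 0.5)

data = pd.DataFrame({"output_growth": output_growth, "fed_funds": fed_funds})
results = VAR(data).fit(4)          # the reduced-form autoregressive representation
irf = results.irf(12)               # orthogonalized IRFs via the Cholesky ordering
print(results.summary())
print(irf.orth_irfs[:, 0, 1])       # response of output to a fed-funds shock
```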

Buiter, good macro critic?

Google Scholar and a minimal knowledge of the DSGE literature allow one to refute Buiter’s claims in less than 10 minutes. Someone should really give him a quick lesson on Google Scholar.

No complete markets? Nope (first page of “dsge incomplete markets”).

No asset bubbles? Negative (third result from this search: “dsge asset bubbles”).

Only linearization? Not really (first result of this search: “dsge nonlinear solution techniques”).

Can we have better macro critics please? Like this one or this one, I mean.

Technological regress

An old joke about Real Business Cycles is that they assume recessions are caused by technological regress. Without other frictions, the only way for an economy to get lower output given the installed capital and the number of workers is to have a sudden drop in productivity of those inputs.

Well, who in the hell ever heard of a technological regression? What, did people forget how to make stuff? Lose the blueprints? Ha ha ha… those stupid RBC theorists. What a bunch of mathematical masturbation!

Ahem. Well, here are three examples of technological regress. First, the financial mess can be seen as throwing sand in the works: it’s harder to get working capital — if your bank is skittish you have to walk down the street to get your loans from somewhere else — so production is more expensive. Second, here’s Willem Buiter entertainingly complaining about centralization causing technological regress. Third, an increase in distortions in the finance sector, where it is harder for some sectors to get financing, can *look* like technological regress.

The last point was explained by Prof. Kehoe at the loooooooooooooong session on monetary policy at the AEA meetings (see day two, part 1). He shows that a simple single-sector growth model with plain-vanilla productivity shocks (i.e. technological regress) is observationally equivalent to a more sophisticated two-sector model with sector-specific labor costs (e.g. costs of working capital). The more sophisticated model tells a story for why there is “technological regress”, but it doesn’t necessarily tell us more about the economy. For that, the model would need to generate other testable predictions.
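Here’s a toy version of that observational-equivalence point; it’s my construction, not Kehoe’s actual model. Two symmetric sectors share a fixed amount of labor, and a financing wedge raises one sector’s effective labor cost. Sector-level technology never moves, but output per unit of labor, and hence the Solow residual measured through a one-sector lens, falls: it looks like technological regress.

```python
# A toy illustration (my construction, not Kehoe's model): two symmetric
# sectors with Y_i = A * sqrt(L_i), a fixed amount of total labor, and a
# financing wedge `tau` that raises sector 1's effective labor cost.
# Sector TFP A never changes, yet measured aggregate productivity falls.
import numpy as np

A = 1.0              # sector-level technology, held fixed throughout
total_labor = 1.0

def aggregate_output(tau):
    # Sector 1 must earn a higher marginal product to cover the wedge:
    # MPL1 = (1 + tau) * MPL2, which with Y_i = A * sqrt(L_i) implies
    # L2 = (1 + tau)**2 * L1.
    L1 = total_labor / (1.0 + (1.0 + tau) ** 2)
    L2 = total_labor - L1
    return A * (np.sqrt(L1) + np.sqrt(L2))

y_undistorted = aggregate_output(0.0)
for tau in [0.0, 0.2, 0.5]:
    # What an observer using a one-sector model would call TFP,
    # normalized so the undistorted economy measures 1.0.
    measured_tfp = aggregate_output(tau) / y_undistorted
    print(f"wedge tau={tau:.1f}: measured aggregate TFP = {measured_tfp:.3f}")
```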

The RBC literature found “technological shocks” were important for explaining business cycles. Many, perhaps more conservative, economists took this result literally… variation in stuff policy makers have no control over, namely exogenous technology, causes business cycles, so policy can’t help smooth them. For this literal interpretation of “technology shocks” those economists were rightly ridiculed, but the lesson of RBC models is exactly what it should be: these models identified a fundamental cause of business cycles and they pointed the way to a deeper understanding. To understand business cycles, we need to understand “technological shocks”. RBC models aren’t wrong; they’re just not right enough.

History lesson

I eagerly await Kling’s history lesson. Meanwhile, a real-life working macroeconomist has this to say about the emergence of modern macro (which he calls New Macro… I guess I’ll have to defer):

Dynamic equilibrium theory made a quantum leap between the early 1970s and the late 1990s. In the comparatively brief space of 30 years, macroeconomists went from writing prototype models of rational expectations (think of Lucas, 1972) to handling complex constructions like the economy in Christiano, Eichenbaum, and Evans (2005). It was similar to jumping from the Wright brothers to an Airbus 380 in one generation.

A particular keystone for that development was, of course, Kydland and Prescott’s 1982 paper Time to Build and Aggregate Fluctuations. For the first time, macroeconomists had a small and coherent dynamic model of the economy, built from first principles with optimizing agents, rational expectations, and market clearing, that could generate data that resembled observed variables to a remarkable degree. Yes, there were many dimensions along which the model failed, from the volatility of hours to the persistence of output. But the amazing feature was how well the model did despite having so little of what was traditionally thought of as the necessary ingredients of business cycle theories: money, nominal rigidities, or non-market clearing.

Except for a small but dedicated group of followers at Minnesota, Rochester, and other bastions of heresy, the initial reaction to Kydland and Prescott’s assertions varied from amused incredulity to straightforward dismissal. The critics were either appalled by the whole idea that technological shocks could account for a substantial fraction of output volatility or infuriated by what they considered the superfluity of technical fireworks. After all, could we not have done the same in a model with two periods? What was so important about computing the whole equilibrium path of the economy?

It turns out that while the first objection regarding the plausibility of technological shocks is alive and haunting us (even today the most sophisticated DSGE models still require a notable role for technological shocks, which can be seen as a good or a bad thing depending on your perspective), the second complaint has aged rapidly. As Max Planck remarked somewhere, a new methodology does not triumph by convincing its opponents, but rather because critics die and a new generation grows up that is familiar with it. Few occasions demonstrate the insight of Planck’s witticism better than the spread of DSGE models. The new cohorts of graduate students quickly became acquainted with the new tools employed by Kydland and Prescott, such as recursive methods and computation, if only because of the comparative advantage that the mastery of technical material offers to young, ambitious minds. And naturally, in the process, younger researchers began to appreciate the flexibility offered by the tools. Once you know how to write down a value function in a model with complete markets and fully flexible prices, introducing rigidities or other market imperfections is only one step ahead: one more state variable here or there and you have a job market paper.

Hey, I represent that last remark!