The positive nature of normative analysis

When you first take a grad-level economics class, it occurs to you that the discipline looks like “social physics”. Where you thought it was “a tool for understanding and social criticism and an instrument for intellectual enlightenment”, you begin to suspect it’s “a tool for social engineering and an instrument for progressive politics”. The suspicion morphs into a full-blown conspiracy theory when you learn the math behind the welfare theorems.

But look at how normative analysis is actually done in economics. Consumers of models compare their intuitions about how policy should work against a model’s prescriptions and use the difference to evaluate the model. “Wow, this model says money is too loose right now. I know that’s wrong; why does the model get it wrong?”

One reason to reject real business cycle models is that they leave no room for macroeconomic management. The dissatisfaction with exogenous growth models is that they’re silent about institutions. There’s a heavy bias against models of wage inequality that rely on transient features of workers; the problems *must* be structural. Unemployment must be responsive to aggregate demand manipulations, so it can’t be the result of real frictions. And so on.

Despite these examples, after a couple of years of doing this, I’ve come to believe that this is an acceptable way to evaluate models. There is no THE MODEL of the economy, and our intuitions are basically trustworthy when it comes to social arrangements. Our intuitions are data. If a model gives counterintuitive policy implications, it bears the burden of showing us why our intuitions are incorrect.

8 Responses to “The positive nature of normative analysis”

  • Kevin Dick says:

    I predict you will go through another phase of evolution where you begin to suspect that our intuitions are not so trustworthy after all.

    Our intuitions about social arrangements evolved to be useful only up to our Dunbar number (~150 people). Now, you could argue that they might extrapolate an additional order of magnitude or even two given better communication technology and recent brain evolution. But you’re still three orders of magnitude short of the number of people in the US economy.

    If you believe in theories of emergent order in complex systems, you would actually expect the unexpected in such a large network of interacting agents.

  • pushmedia1 says:

    You’re right about intuitions in absolute terms, but intuitions relative to formal models are still pretty good.

  • Kevin Dick says:

    That’s certainly not true in most contexts. A simple linear model (even one with random weights) does better than human judgement, as tested across hundreds of different problems; a sketch of the random-weights idea follows the links. See:

    http://www.tc.umn.edu/~pemeehl/167GroveMeehlClinstix.pdf

    http://www.amazon.com/Rational-Choice-Uncertain-World-Psychology/dp/076192275X (Search Inside the book for “linear model random weights”)
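
    A minimal sketch of the “random weights” claim, in Python on simulated data. Everything here is an illustrative assumption (the data-generating process, the simulated expert’s inconsistency, the noise levels); it is not drawn from the linked papers, but it shows how a linear model with fixed random positive weights over standardized cues can keep up with an inconsistent judge:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_cases, n_cues = 500, 5

    cues = rng.normal(size=(n_cases, n_cues))      # standardized predictors
    true_w = rng.uniform(0.5, 1.5, size=n_cues)    # assumed "real" cue validities
    criterion = cues @ true_w + rng.normal(scale=2.0, size=n_cases)

    # Simulated expert: knows which cues matter but weights them
    # inconsistently from case to case (an assumption for illustration).
    expert_w = true_w + rng.normal(scale=1.0, size=(n_cases, n_cues))
    expert_judgement = (cues * expert_w).sum(axis=1)

    # "Improper" linear model: random positive weights, fixed across cases.
    random_w = rng.uniform(0.0, 1.0, size=n_cues)
    model_prediction = cues @ random_w

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print(f"expert vs. criterion:       r = {corr(expert_judgement, criterion):.2f}")
    print(f"random model vs. criterion: r = {corr(model_prediction, criterion):.2f}")
    ```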

  • pushmedia1 says:

    Hmmm… thanks for those links. Very neat data. I should have been more careful with what I said.

    Here’s my argument: there’s a very large number of formal models, only some of which actually get written down. You agree that intuitions are better than the typical model from this large set of models?

    Intuitions *about* models are data. The intuitions of experts help decide which models actually get written down and studied, so the papers you cite suffer from a sort of selection bias… the formal models they look at were already subjected to the scrutiny of experts and their intuitions.

    I’m not arguing that the intuitive mode of analysis is best or that we should forget models. Models are necessary to clarify thought. However, when we choose between models, our intuitions can be (must be?) a good guide.

  • swong says:

    Are you talking about individual intuitions or sets of intuitions (as in, say, a prediction market)?

  • Kevin Dick says:

    Here’s where I think we agree. Humans are very good at hypothesizing which models within modelspace might be useful. Or to put it another way, humans are pretty good at choosing candidate predictor variables.

    However, humans are very bad at intuitively computing the _results_ of the aforementioned models. For that, pen and paper are much better, to say nothing of a computer.

    So you have to ask yourself: does your intuition question the model specification or the model result? In the cases you mention in the OP, it seems like you were questioning the model results, which I think is wrong.

    Just like in the thread about data mining, you have to be very careful about a priori vs. a posteriori reasoning. If someone puts forth a model whose specification a priori matches your intuition, but whose results don’t, believe the results. If, before looking at the results, the specification seems fishy, believe your intuition.

  • pushmedia1 says:

    swong, I think individual intuitions because, in the end, it’s individuals that write and study models. Intuitions, though, are formed in groups.

    Kevin, I think there should be a prize for when people agree so much…

    I’ll just note that my post confused a positive model of how economists choose models… they tend to like those that give policy prescriptions they agree with… and a normative statement about that fact. I was uncomfortable writing the last paragraph (“…this is an acceptable way to evaluate models.”) and you’ve articulated why I was uncomfortable. That said, it’s plain that economists pick models because they like their prescriptions.

    When things get meta-meta, I get confused.

  • Kevin Dick says:

    Economists are people too.

    Relevant story. I took Judgment and Decision Making from the late, great Amos Tversky in grad school. While he had some very interesting experimental results, the best thing I learned from him was the answer to a question in class:

    “Professor, isn’t it simply a matter of understanding cognitive biases and compensating for them?”

    “Sadly, no. Perceptual psychologists know that the clarity of the air on a particular day affects estimates of distance. But they are no better judges of distance than anyone else. That’s why we have rulers and rangefinders.”

    The out of sample test is the rangefinder of social science models.
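
    To make that last line concrete, here’s a minimal sketch of an out-of-sample test on simulated data (the linear data-generating process and the polynomial degrees are illustrative assumptions): the flexible model typically fits the training half better but predicts the held-out half worse, and only the held-out error reveals it.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, size=80)
    y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=80)  # true process is linear

    train, test = slice(0, 40), slice(40, 80)           # half fit, half held out

    def in_and_out_of_sample_mse(degree):
        coefs = np.polyfit(x[train], y[train], degree)  # fit on training half only
        in_err = np.mean((np.polyval(coefs, x[train]) - y[train]) ** 2)
        out_err = np.mean((np.polyval(coefs, x[test]) - y[test]) ** 2)
        return in_err, out_err

    for degree in (1, 9):                               # simple vs. overfit model
        in_err, out_err = in_and_out_of_sample_mse(degree)
        print(f"degree {degree}: in-sample MSE {in_err:.2f}, "
              f"out-of-sample MSE {out_err:.2f}")
    ```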