It is profound and earth shaking…

… every time I run into a statement like this: “But all interesting models involve unrealistic simplifications, which is why they must be tested against data.”

We can know things a priori, but we can’t know how important those things are a priori. A model can tell us how a particular mechanism operates, but it can’t tell us how important that mechanism is. Effect size can only be determined by looking at the data.

I’m sure there’s something wrong with my brain, because I can’t internalize this fact. I just read Landsburg’s Big Questions and this was his major theme. (BTW, I liked the book and here’s a review at /.)

6 thoughts on “It is profound and earth shaking…”

  1. Well, even worse… all of those mechanisms must be compatible with each other, in general equilibrium.

    The Frisch elasticity is one. (I mean a single elasticity in all mechanisms, not “=1”).

  2. You mean models have to be internally consistent?

    To me, that’s not profound. That’s just what makes sense. If a model isn’t internally consistent then it doesn’t even provide a priori knowledge.

    What’s profound is that data isn’t used to “test the model”. Data is used to see how dramatic the model’s effects are in the real world. Logic tells us how X implies Y, but we can’t “test” that logic; it’s just true or it’s not. We test the importance of the implication in the real world. It may be the case that butterflies cause the price of orange juice to go up, but what we really care about is by how much.

    It seems to me, though, that there’s something else to the story. We usually like theories where X is a list of palatable assumptions. A crazy theory linking the price of twinkies to the business cycle would still be crazy, even if it were internally consistent and passed every test of its implications.

  3. A Bayesian would say that our priors are also a form of data. So we should rightfully give a twinkie theory low credibility, since it is not supported by our prior beliefs on what kinds of mechanisms are plausible, and our priors are presumably generated by all the data we’ve seen in the past (if we’re rational, updating, and all that).
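The commenter’s point can be sketched numerically. In this toy calculation (all numbers are made up for illustration, and `posterior` is a hypothetical helper, not anything from the post), the twinkie theory fits the observed data exactly as well as a conventional theory, yet ends up with low credibility because the prior, shaped by past data, weighs against it:

```python
def posterior(prior_a, likelihood_a, prior_b, likelihood_b):
    """Bayes' rule for two competing, exhaustive hypotheses.

    Returns P(A | data) given priors and likelihoods for A and B.
    """
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence

# Both theories "pass the test": they fit the data equally well.
fit_twinkie = 0.8
fit_conventional = 0.8

# But our prior, built from everything we've seen before, heavily
# favors conventional mechanisms over twinkie-driven business cycles.
prior_twinkie = 0.01
prior_conventional = 0.99

p = posterior(prior_twinkie, fit_twinkie, prior_conventional, fit_conventional)
print(round(p, 4))  # equal fit leaves the posterior at the prior: 0.01
```

With equal likelihoods the data is uninformative between the two theories, so the posterior simply reproduces the prior; the twinkie theory would need to fit the data much better than the conventional one before a rational updater took it seriously.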
