Experimental science isn’t “hard”

Angus Deaton on the “project evaluation” craze:

Randomized controlled trials cannot automatically trump other evidence, they do not occupy any special place in some hierarchy of evidence, nor does it make sense to refer to them as “hard” while other methods are “soft”. These rhetorical devices are just that; a metaphor is not an argument… thirty years of project evaluation in sociology, education and criminology was largely unsuccessful because it focused on whether projects work instead of on why they work.

and

The wholesale abandonment in American graduate schools of price theory in favor of infinite horizon intertemporal optimization and game theory has not been a favorable development for young empiricists. Empiricists and theorists seem further apart now than at any period in the last quarter century. Yet reintegration is hardly an option because without it there is no chance of long term scientific progress.

and after listing a number of papers that he thinks have a good mix of theory and data, he says:

In all of this work, the project, when it exists at all, is the embodiment of the theory that is being tested and refined, not the object of evaluation in its own right, and the field experiments are a bridge between the laboratory and the analysis of “natural” data.

Science is about finding underlying mechanisms. It's not about testing hypotheses, and

[H]eterogeneity is not a technical problem, but a symptom of something deeper, which is the failure to specify causal models of the processes we are examining. This is the methodological message of this lecture, that technique is never a substitute for the business of doing economics.

I use the “project evaluation” and “experiment” rhetoric in one of my papers. I might have to rethink the organization of that paper…

6 thoughts on “Experimental science isn’t “hard””

  1. “Science is about finding underlying mechanisms. It's not about testing hypotheses.”

    Umm, the only way to demonstrate that you have found an underlying mechanism is to predict the output of that mechanism, either explicitly or implicitly. That is a test of the hypothesis.

    Or are you trying to say that testing hypotheses is a necessary but not sufficient condition for science?

  2. Well then, carry on.

    It occurs to me that looking at this in Bayesian terms might be illuminating. Science concentrates priors. If you’ve still got diffuse priors on events of interest, you haven’t advanced the state of the art.
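The "science concentrates priors" idea can be made concrete with a standard Beta-Binomial update (a hypothetical illustration, not anything from Deaton's lecture): as evidence accumulates, a diffuse prior on an event's probability tightens around the truth.

```python
# Hypothetical illustration: a Beta-Binomial posterior concentrating as data arrive.
from math import sqrt

def beta_posterior(successes, failures, a=1.0, b=1.0):
    """Update a Beta(a, b) prior with binomial data; return (mean, sd)."""
    a2, b2 = a + successes, b + failures
    mean = a2 / (a2 + b2)
    var = (a2 * b2) / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    return mean, sqrt(var)

# A diffuse Beta(1,1) prior tightens as observations come in
# (here, data consistent with a true rate of 0.7):
for n in (0, 10, 100, 1000):
    mean, sd = beta_posterior(successes=0.7 * n, failures=0.3 * n)
    print(f"n={n:5d}  posterior mean={mean:.3f}  sd={sd:.3f}")
```

If the posterior standard deviation is still wide after your study, you haven't advanced the state of the art in this sense.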

  3. Deaton says: “thirty years of project evaluation in sociology, education and criminology was largely unsuccessful because it focused on whether projects work instead of on why they work.”

    I would argue that randomization (if done right) guards against all possible omitted-variables bias, even the variables you haven't imagined yet (or are ideologically predisposed not to consider). For example: in education, lots of people once thought that of course spending more on inputs would lead to better outcomes, that short-term programs like Head Start would lead to long-term gains, etc. Experimental evaluation didn't support these hypotheses. Is that a failure of evaluation, or is education a field particularly prone to wishful thinking and unstated assumptions? If the latter, you have to be sure that some treatment actually works before trying to figure out why it works.
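The omitted-variables point can be shown in a toy simulation (entirely hypothetical numbers): an unobserved confounder that drives both treatment take-up and outcomes biases the naive observational comparison, while a coin-flip assignment recovers the true effect without the analyst ever measuring the confounder.

```python
# Hypothetical simulation: randomization removes confounding from an
# unobserved variable, even one the analyst never thought to measure.
import random

random.seed(0)
TRUE_EFFECT = 2.0
N = 100_000

def outcome(treated, confounder):
    # The confounder (say, family motivation) raises the outcome directly.
    return TRUE_EFFECT * treated + 3.0 * confounder + random.gauss(0, 1)

# Observational data: motivated families (confounder=1) seek treatment more often.
obs = []
for _ in range(N):
    c = random.random() < 0.5
    t = random.random() < (0.8 if c else 0.2)   # selection into treatment
    obs.append((t, outcome(t, c)))

# Randomized data: treatment assigned by coin flip, independent of the confounder.
rct = []
for _ in range(N):
    c = random.random() < 0.5
    t = random.random() < 0.5
    rct.append((t, outcome(t, c)))

def naive_diff(data):
    """Simple treated-minus-control difference in mean outcomes."""
    treated = [y for t, y in data if t]
    control = [y for t, y in data if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"observational estimate: {naive_diff(obs):.2f}")  # biased well above 2.0
print(f"randomized estimate:    {naive_diff(rct):.2f}")  # close to 2.0
```

The observational estimate is inflated because treated units are disproportionately high-confounder units; randomization breaks that correlation by construction, whatever the confounder happens to be.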

  4. That’s true, but as Deaton argues such experiments only give you average treatment effects. If there’s heterogeneity, there’s more to uncover. If some people have much better outcomes because of treatment but everyone else has slightly negative outcomes, is the treatment a success? Also, in practice project evaluations aren’t generalizable. Deworming improved outcomes in village x, but will it work in village y? In all villages?
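The heterogeneity worry in the comment above is easy to sketch with made-up numbers: a minority of strong responders and a majority with slightly negative outcomes can produce an average treatment effect near zero, so the ATE alone can't answer whether the treatment "works".

```python
# Hypothetical sketch: an average treatment effect can look null while
# subgroups respond very differently.
responders = [1.5] * 200    # 20% of the sample gains +1.5 under treatment
others = [-0.4] * 800       # the other 80% are slightly harmed

effects = responders + others
ate = sum(effects) / len(effects)
print(f"ATE = {ate:+.2f}")  # near zero: the average hides the split
```

A trial powered only to detect the ATE would call this treatment a wash, even though identifying the responders (the "why it works" question) is exactly what matters.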

Comments are closed.