Of policies and goals

There’s what you want to accomplish and there’s the best way of accomplishing those goals. It seems there’s often little disagreement on what people want to have happen: help the poor, protect the environment, and encourage growth, for example. It seems obvious that we should choose the policies that best implement these goals: a negative income tax, Pigovian taxes, and subsidies for basic research, respectively.

It is ironic that most policy debates get heated and start sounding like disagreements about normative goals when they’re really just arguments about which policies best attain common goals. The passion should be in normative fights, not normative-sounding fights. Positive analysis of a policy’s effects is more or less boring, but it’s these discussions that I see become most heated (e.g. the stimulus debate).

Perhaps this is because those debating suspect their opponent is hiding some agenda. They suspect their opponent doesn’t really share their normative goals. They think the opponent’s “so-called” positive analysis of the policy in question is tainted by a desire to see the policy fail.

What’s funny about this is that if policy analysis is done transparently, one should be able to discern whether the analysis is flawed. Why question the motives of the arguer when the argument itself can be criticized directly?

This line of reasoning only works for positive analysis of policy. Normative goals come from the ether, and there’s no reason to think people will agree on them. So perhaps a different phenomenon explains the passionate disagreements over policies: people simply confuse positive analysis for normative. They associate certain policies (minimum wages and rent controls, subsidies for “green” technologies and direct R&D, respectively) with normative goals, and they treat any opposition to those policies as prima facie evidence of opposition to the underlying objective.

If this is the case, it’s really annoying. Knock it off, please.

8 Responses to “Of policies and goals”

  • Kevin Dick says:

    Another thing to consider is that while many people may agree on a set of normative goals, they will agree less on the ordering of those goals, and even less on the quantitative tradeoffs among goals.

    Therefore, the problem in transparent policy is much more nuanced than suspecting that others don’t share your normative goals. It’s about the extent to which the policies someone else advocates may secretly bias the tradeoffs among goals in ways you don’t prefer. Add to this that a large percentage of politicians have a normative goal of increasing their individual social status, and it’s not easy to have a transparent policy debate.

    Compound this by the tendency of humans to conflate terminal and instrumental goals and you’ve got a big fat hairball.

  • pushmedia1 says:

    You’re depressing me! Why can’t these other normative frameworks, and their relative weights, be included in our positive analysis? If the disagreement is about the weights, then the argument should be about the weights. Alternatively, we could give different positive results predicated on the normative framework used and sorta average them.
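
    To make that concrete, here’s a toy sketch (in Python, with invented numbers, not real estimates) of scoring policies under several normative weightings and then averaging across them:

    ```python
    # Predicted effects of each policy on three goals: (poverty relief,
    # environmental quality, growth). In practice these would come from
    # positive analysis; the figures here are made up.
    effects = {
        "negative income tax": (0.8, 0.0, 0.1),
        "green subsidies":     (0.1, 0.6, 0.2),
        "R&D subsidies":       (0.0, 0.1, 0.7),
    }

    # Different normative frameworks expressed as weight vectors over the
    # same three goals.
    frameworks = {
        "egalitarian":      (0.6, 0.2, 0.2),
        "environmentalist": (0.2, 0.6, 0.2),
        "growth-first":     (0.2, 0.2, 0.6),
    }

    def score(effect, weights):
        """Weighted sum of predicted effects under one normative weighting."""
        return sum(e * w for e, w in zip(effect, weights))

    for policy, effect in effects.items():
        per_framework = {name: score(effect, w) for name, w in frameworks.items()}
        average = sum(per_framework.values()) / len(per_framework)
        print(policy, {k: round(v, 2) for k, v in per_framework.items()},
              "-> average", round(average, 2))
    ```

    If the disagreement is really about the weights, this at least forces the weights out into the open.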

    In any case, I see the issue addressed in my post even within the context of a debate over a particular policy. Take this thread. In the post, it’s obvious what the author’s normative aims are, and they’re pretty uncontroversial: encourage innovation. He goes on to make the no-brainer point that the government “picking winners” is a terrible policy for achieving this goal, but then he invites a discussion about the relative merits of government intervention in general. I’m not sure why he did that. Who cares if and when the government succeeded or failed to encourage innovation? Such an inventory won’t tell us much about optimal policy.

  • Jesse says:

    Have you done a cost-benefit analysis to see if pursuing this full separation of positive and normative analysis will actually lead to better societal outcomes? Are there any field experiments we can look at?

    I think it’s worth remembering that political discussions are fundamentally about values and allegiances. For example, in this post you push forward the values of objectivity, dispassionate analysis, and clarity of assumptions. I am very sympathetic. But there are also things you are leaving out in your argument. In particular, you are assuming that for a given set of normative beliefs, positive analysis makes unambiguous recommendations about policy.

    Let’s say we gave politicians the job of articulating their value systems, and policy analysts then used positive analysis to come up with policy recommendations. So we’d have a values sphere and an analysis sphere, the nice clean rational split you’re pushing for. Note that the analysts require special training, and much of what they do will not be clear to the public at large. Furthermore, it seems fairly likely to me that such analysts will often disagree with each other, sometimes dramatically. Someone must make a judgment call somewhere. What will happen?

    If dramatic disagreements within the analyst community are revealed to the politicians, they will lose faith in the separate spheres (what’s the value of an obscure approach that can’t decide whether the proper size of the stimulus package is $0 or $2T?), and the analysts will lose power and status. So analysts have an incentive to resolve such disagreements amongst themselves so that they can retain their power; this is true even though positive analysis is insufficient to settle the debate. In other words, the result will be that political power shifts to the analyst sphere.

    This would be good news for analysts, but bad news for politicians, and it may also be bad news for the public at large. Or perhaps not; maybe it would be great.

    Anyway, I’m trying to clarify some of the forces arrayed against the clean positive/normative split. I don’t think they are all a result of irrationality, ignorance, or bad incentives. I think the general issue is that people are somewhat suspicious of positive analyses, they feel that something important might be left out, and they actually have more confidence that values matter. Now sometimes this idea is definitely wrong, because people’s intuitions are incorrect and because they don’t understand how the world really works. But is it always wrong? In a world where the future can be genuinely uncertain, don’t values and commitments make a difference?

  • Jesse says:

    I should clarify that I also think it’s incredibly annoying and dispiriting to see people debate positive claims as if they are normative claims. My explanation is that 99% of the time people are engaged in normative debates, and that they don’t really care about dishonoring the positive sphere, which makes for lousy and unsatisfying intellectual discussion. But I guess my suggestion is that this popular tendency might not be 100% irrational and stupid.

  • Kevin Dick says:

    In the best of all possible worlds, the weights would be explicit. But have you ever interviewed a normal _individual_ to get them to try to budget how much they think the government should spend on education versus the environment, or any other tradeoff among highly desirable normative goals, especially when outcomes are uncertain?

    Even in a business context, where it’s their money and they have a tremendous incentive to get the answer right, people often just go in circles. It’s probably the most frustrating thing a rational consequentialist can attempt, and it’s why I abandoned applied decision theory for software development.

    Try this with a group of people and you’ll want to shove a sharp needle in your eye just to stop the psychic pain of their complete irrationality. Now, some brave souls have tried a revealed preference approach based on actual expenditures. But it turns out that most people are time inconsistent over very short time periods! We can’t even pin down the statistical value of a life based on revealed preference to within a factor of 4, and safety is a fairly homogeneous question by public policy standards.

    If you haven’t read it, go out and get _Rational Choice in an Uncertain World_ by Hastie and Dawes. You could get through it in a night or two and then you would be depressed but at least you’d understand why. To make it worse, read _The Accidental Mind_ by Linden to see how crappy the computing substrate of the human mind is.

    Honestly, you should be amazed that we can coordinate a society of hundreds of millions of people in any shape or form.

  • pushmedia1 says:

    Regarding “unambiguous” policy recommendations: why does my argument require unambiguous positive analysis? If analysts are uncertain, they can give estimated intervals (or distributions) instead of point estimates. If policy makers come to expect intervals rather than point estimates, they can’t be disappointed about getting them.
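
    A minimal sketch of the difference, with invented numbers (a percentile bootstrap over hypothetical study estimates of a policy effect):

    ```python
    import random

    random.seed(0)

    # Hypothetical effect estimates from a dozen studies or model runs.
    estimates = [1.4, 0.2, 2.1, 0.9, 1.7, -0.3, 1.1, 0.8, 2.4, 0.5, 1.3, 1.0]

    def bootstrap_interval(data, reps=10_000, alpha=0.05):
        """Percentile bootstrap interval for the mean of `data`."""
        means = []
        for _ in range(reps):
            resample = [random.choice(data) for _ in data]
            means.append(sum(resample) / len(resample))
        means.sort()
        lo = means[int(reps * alpha / 2)]
        hi = means[int(reps * (1 - alpha / 2)) - 1]
        return lo, hi

    point = sum(estimates) / len(estimates)
    lo, hi = bootstrap_interval(estimates)
    print(f"point estimate: {point:.2f}")
    print(f"95% interval:   [{lo:.2f}, {hi:.2f}]")
    ```

    Reporting the interval makes the uncertainty part of the deliverable rather than something to be argued about after the fact.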

    I agree, though, that model uncertainty isn’t taken as seriously as it should be. This is partly because we lack good tools for robust analysis, but as Jesse points out, some of our institutions need to be changed to encourage more honest analysis. The incentive structure we have encourages experts to sound more certain than they are (or should be). Looking outside politics, scientific publishing encourages much less humility than I think is optimal. This may be a bug in our brains, but like all brain bugs we can work around it with better infrastructure. Kevin should be more optimistic that we can overcome brain bugs, given his observation about the complexity of society.

    Regarding disagreements among lay people versus experts: the comments in my post apply to everybody, but I would hope experts would be better than they are at separating normative from positive disagreements. See, for example, the stimulus debate.

  • swong says:

    I like your vision of an ideal world, but we seem to live in one where even doing something as simple and transparent as calculating the value of pi will get you labeled an “Irrationalist shill.”

  • Kevin Dick says:

    I actually am optimistic. At the micro level, I see challenges that appear insurmountable. But at the macro level, we’re a remarkably successful civilization. So there’s something going on in the dynamic interaction, something we don’t really understand, that works to our benefit.

    However, I also think this means that it’s fruitless to wish for significantly better micro level behavior given the current state of our technology. Improved institutional design _may_ be useful, but I think every intervention will have to be evaluated empirically. I don’t think you’ll have much success building bottom up from first principles.

    That’s why I really liked Sumner’s idea of a Fed-sponsored NGDP futures market: it feeds macro-level information back to the micro level. Interventions like that are the ones I think will work.