Oh, those silly economists

Economists dig incentives. Ask one a question about behavior and the economist will first consider the incentives that are at work.

Economics is often chided for caring only about incentives; there are other motives, the critics say, like obligations or ethics, that guide people’s behavior.

For example:

Tyler also thinks [moving to a free access system] would reduce the problems associated with getting good reviewers. Since there are few incentives for reviewing anyway, we could simply bypass reviewing and allow the attention market to decide which papers were worth something. Henry is skeptical that there is a reviewer problem to worry about. People don’t review papers because they have incentives, he thinks; they review because it is normatively appropriate to do so.

Now here’s where I make an ass out of myself. Isn’t an obligation or a norm just a type of incentive? Surely these things don’t carry an infinite cost when they’re not honored. As such, the individual’s behavior can still be analyzed as-if he’s weighing the costs and benefits of breaking the norm or obligation.

“Aha”, the social scientists\economists ((That’s the set minus operator. Ain’t I clever.)) respond, “but treating those things as just other types of incentives isn’t parsimonious! It’s not realistic!” To which I reply, “I have a nice Friedman article from the ’50s for you to read.”
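To make the as-if point concrete, here’s a minimal toy sketch (my own illustration; the breaks_norm function and the numbers are made up, not anything from Tyler, Henry, or Friedman): treat the norm as just another finite cost, and behavior falls out of weighing that cost against the gain from breaking it.

    # Toy as-if model: the norm enters as just another (finite) cost in the calculation.
    def breaks_norm(material_gain, norm_cost):
        """Return True if the gain from breaking the norm outweighs its cost."""
        return material_gain > norm_cost

    # A referee who puts a high cost on shirking the reviewing norm still reviews...
    print(breaks_norm(material_gain=1.0, norm_cost=5.0))    # False: follows the norm
    # ...but a large enough payoff tips the scale, which it never could if the cost were infinite.
    print(breaks_norm(material_gain=100.0, norm_cost=5.0))  # True: breaks the norm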

9 thoughts on “Oh, those silly economists”

  1. Conceptually, it works if you’re prepared to accept that cost/benefit analysis transcends the ebb and flow of cash money, at least in a biological brain.

    This reminds me of this post you linked to last week. Replace “price of electricity” with “social currency.”

    Also reminds me of this movie.

  2. “There are other motives, like obligations or ethics, that guide people’s behavior.”

    Wait, aren’t those just variations on, or subcategories of, the incentive theme? Oh wait. You said that. Well I’m on a roll anyway, so tough titty.

    If shirking one’s responsibilities or obligations has a negative effect on an individual, isn’t one incentivised (is this a word?) to handle those responsibilities in order to avoid the negative outcomes? Feel free to swap in the word “ethics” if you like. I don’t see the problem here.

    Is there a fallacy where someone takes too narrow a view on a word’s meaning, and then pretends that word can’t have a more broad context in order to attack the argument for being too narrow?

  3. If your answer to an “ought” question is “And what if I don’t?” then you’re simply not getting it. (Wittgenstein pointed this out.)

    People might follow rules for a number of reasons. That one “ought to” is one of those cases, distinct in nature (as distinct as we can say any two things are distinct).

    For any action you can say, ex post, that it was (revealed) preferred, but this settles nothing, and it’s an improper use of language: it transplants a notion from where its use is clear and enlightening into a completely different area where it’s simply confusing.

    Why would someone kiss the photograph of a loved one, even if that someone doesn’t believe that the person in the picture will feel it? (Wittgenstein again.) Clearly, that person prefers to kiss it. Is that enlightening in any way? Are there incentives at work?

    It is unclear to me that when people kiss photographs, pray to gods, follow moral codes of conduct and the like, they’re doing something qualitatively different from what they do when they seek their own gain.

    People do this and that and everything, and they’re different things. To reduce everything to a single framework entails, eventually, some sort of metaphysical voodoo or trite generalizations and unenlightening platitudes.

    Why should it all be “preference maximization” or “norm following” or something else? It’s all these things, depending on what you’re doing (since some things can only be rule-following and the like).

    On the other hand, if we’re talking about scientific research, then yes, good science is reductionist, but it seems to me not obvious that everything ought to be science.

  4. swong, is it the calculating itself that is corrupting? It seems to me no norm or moral is absolute; there are exceptions to every rule, and in the end the individual has to act, whether or not we deem his action ethical. If it’s not ethical calculus guiding his decisions, what is it? His intuitions?

    Is an unethical man less so because his intuitions, rather than cold calculations, guided his unethical behavior?

    But here’s really my point: does it matter that social scientists assume he’s making moral calculations, even if he’s not, if in the end our predictions about his behavior are correct? Besides, the answer “because it’s the right/normal thing to do” to every question about behavior, apart from being laden with moralizing (who gets to decide what’s “right” or “normal”?), is really unsatisfying.

  5. Gabriel, I don’t see any harm in modeling agents as-if they’re calculating the costs/benefits of breaking a norm. That may not be what any one individual does, but in equilibrium we’ll see behavior (in aggregate across individuals or across time in the same individual) that looks as such.

    Everything may not be science, but if you’re explicitly trying to understand behavior, then you’re practicing science. It’s fine for the moral philosophers to tell us what we ought to do, but if we want to understand what people will actually do, that’s a different story. In that instance, we know people care about what they ought to do, but that is just one of many incentives they’ll consider when they take actions. It seems to me, then, that the incentive framework is more general than the normative framework for understanding and predicting behavior.

  6. Ethical calculus, yes. Like dog calculus: rarely performed at a conscious level. Humans are social animals, and as such, have evolved brain structures to perform this analysis in a constant stream “at the hardware level.”

    What is an incentive, at the level of an individual? I’d argue it’s an option that promises pleasure (and pleasure is objectively observable). What is a disincentive? The converse; an option that promises displeasure. What sets the weighting of these conditions? Probably culture, with some genetics mixed in. Are norms just a standardized incentive structure?

    The example of kissing a photograph is brilliant. When you kiss a loved one, how do you know that they feel it anyway? Telepathy? Does part of their brain state cause some kind of resonance by exchanging, um, ionized quantum tachyons with your own grey matter? Or does sense data stream back in through your own nervous system telling you that, yes, they’re there, yes, they’re responding. Is it possible that performing the motions on your end can trigger part of the pleasurable response? How about kissing a loved one through a window? Or a video phone? Or a video of a loved one kissing the camera?

  7. Fair enough. My problem is not with modeling, where you can use whatever assumptions and identification strategy you’d like.

    My problem is with conceptual reductionism, which requires, at some point, some silliness about how we use language and how metaphysical we allow ourselves to get.

    There is a moral phenomenon in the world out there, people doing some things… as a matter of positive science/description. These phenomena are real, and it might not make sense for us to try to square the circle and fit them into a framework where they have no place, for the sake of aesthetic considerations.

    Now, I think that these moral phenomena are particularly misguided, but that’s my problem.
