Foreclosure deadweight?

For those interested in the foreclosure issue, Mike suggested this paper.

That paper says it estimates the deadweight costs of foreclosures.

No, it doesn’t ((This is one of those cases where something is so blindingly obvious that I think to myself that I must be missing something at a fundamental level. If this is the case, please school me.)). It measures (or attempts to measure) the price discount on foreclosed homes. It turns out that foreclosed houses sell for 20-25% less than observably equivalent houses.

This means one of two things:

  1. There’s some unobserved thing about foreclosed homes that makes them worth less
  2. Banks (or owners of the foreclosed property) sell them at a discount

Neither of these things is an inefficiency. In the first case, the price just reflects fundamentals. In the second case, the bank’s loss is exactly balanced by the buyer’s gain: the discount is a transfer, not a deadweight loss.
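To make the second case concrete (my numbers, purely illustrative): suppose a house with fundamental value $V = \$200{,}000$ sells out of foreclosure at a 22.5% discount, so $P = 0.775\,V = \$155{,}000$. Then

$$\underbrace{V - P}_{\text{bank's loss}} = \$45{,}000 = \underbrace{V - P}_{\text{buyer's gain}} \quad\Longrightarrow\quad \Delta(\text{total surplus}) = 0.$$

The $45,000 just moves from the bank’s balance sheet to the buyer’s pocket; nothing is destroyed.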

Now, what is inefficient about letting underwater houses go into foreclosure?

Hints about the stochastic discount factor

It’s a great mystery (if you assume asset return volatility is all that matters) why investors don’t hold more stocks. Sharar Pitaru, at the neat new investment tool Plantly, says:

Yes, in general our rationale might lead to a greater investment in bonds than the ‘80/20 rule’ calls for.

I’d suggest that the 80/20 rule (and other ‘rules of thumb’ like it) carries hidden dangers for investors, despite how neatly such rules may organize things in our minds. The danger stems from the inability of over-simplified rules to account for the human factor:

The rationale for an 80/20 portfolio is that it is destined to fluctuate widely with the market, yet it is also predicted to provide good returns in the long run. Therefore, investing 80% in stocks makes sense for the long term. Notice that this rationale works only if we really keep our money intact for the duration of the investment. But how many of us really do?

Unfortunately, in reality, investors have many external and internal reasons to break an investment when the market goes sour. For example, many investors ‘broke’ their 80% stock investment earlier than planned because they were scared of losing it all. Other investors just had an emergency (like losing their job) and needed the money sooner than expected – smack in the middle of the crisis. In both cases, these investors missed the general tendency of the markets to bounce back and provide positive returns over the long run. If either of these investors had understood the risks involved in breaking the investment early, they might not have chosen the 80/20 portfolio to begin with.

My second gripe with the 80/20 rule is that it reflects the tendency to treat all bonds as safe and all stocks as carrying the same level of risk. In reality, this is not the case. I’d suggest that other types of mixes are also viable – even for long-term investments – depending on the specific bonds and stocks being used.

Back to Plantly – our diversification rationale is to create an investment plan that aims toward your chosen target return while reducing its risk as much as possible. And yes, sometimes this calls for a greater investment in bonds, as you’ve noticed – but this only works with the right kind of bonds, mixed with the right kinds of stocks.

(emphasis added)
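Pitaru’s first point is easy to see in a toy simulation. This is my sketch, not Plantly’s methodology, and the return parameters and the 25% panic-selling trigger are made-up assumptions: the investor who bails out of a volatile 80/20 mix mid-drawdown locks in losses that the buy-and-hold investor usually rides out.

```python
# Toy simulation: forced/panicked early liquidation of an 80/20 portfolio.
# All parameters are illustrative assumptions, not estimates.
import numpy as np

rng = np.random.default_rng(0)
years, n_paths = 20, 10_000
mu, sigma = 0.07, 0.16  # assumed annual mean and volatility of an 80/20 mix

returns = rng.normal(mu, sigma, size=(n_paths, years))
wealth = np.cumprod(1 + returns, axis=1)  # wealth per $1 invested

# A "breaker" liquidates the first year wealth falls 25% below its peak.
peak = np.maximum.accumulate(wealth, axis=1)
drawdown = wealth / peak - 1
broke = (drawdown <= -0.25).any(axis=1)
exit_year = np.where(broke, (drawdown <= -0.25).argmax(axis=1), years - 1)
realized = wealth[np.arange(n_paths), exit_year]

print(f"paths where the investor breaks early: {broke.mean():.1%}")
print(f"median final wealth, buy and hold:     {np.median(wealth[:, -1]):.2f}")
print(f"median realized wealth, breakers:      {np.median(realized[broke]):.2f}")
```

The breakers’ median realized wealth comes in far below the buy-and-hold median, which is exactly the risk the 80/20 rule of thumb hides.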

Investors want investments that pay off when their marginal utility is high. This is a bitch to model and measure (poor, poor economists who *have* to spend their careers obsessing over this problem), but it’s a pretty easy thing to believe.
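For the record, the standard consumption-based way of writing this down (textbook formulas, nothing specific to Plantly) prices any return $R_{t+1}$ with a stochastic discount factor $m_{t+1}$ built from marginal utility:

$$1 = E_t\left[m_{t+1} R_{t+1}\right], \qquad m_{t+1} = \beta\,\frac{u'(c_{t+1})}{u'(c_t)}.$$

With a risk-free rate $R^f = 1/E_t[m_{t+1}]$, this rearranges to

$$E_t[R_{t+1}] - R^f = -R^f\,\mathrm{Cov}_t\left(m_{t+1}, R_{t+1}\right).$$

An asset whose return covaries positively with $m_{t+1}$ – one that pays off in bad times, when marginal utility is high – earns less than the risk-free rate in expectation. Investors accept the low return because the asset is insurance; stocks, which pay off in good times, have to offer a premium.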

A model

Blanchard and Gali (2008) incorporate unemployment into the standard New Keynesian model. They have some interesting findings, but this one stuck out:
[Figure: Unemployment under inflation targeting]
This is the response of the unemployment rate over time to a 1% decrease in the economy’s productivity (a “real” shock) *when* the Fed has an inflation-only target (i.e. it doesn’t care about unemployment). Blue is the response of the model when it’s calibrated to look like the American economy and red is the response when it’s calibrated to look like Europe. Unemployment keeps increasing for a couple of periods after the shock and then gradually declines. Look familiar?

Here’s the same graph but for when the Fed has the *optimal* policy of targeting a weighted average of inflation and unemployment:
[Figure: Unemployment under optimal policy]
Unemployment still jumps, but it doesn’t have the hump shape. More importantly, notice how little unemployment increases when the Fed follows the optimal policy… there’s almost an order of magnitude difference in the response of unemployment.
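A toy way to see the two shapes (this is not the Blanchard-Gali model, just an illustrative sketch with made-up coefficients): second-order dynamics produce the hump of the first graph, while first-order dynamics give the jump-then-monotone-decay of the second.

```python
# Illustrative impulse responses only; coefficients are invented for shape.
import numpy as np

T = 24
hump = np.zeros(T)   # AR(2): u_t = 1.5*u_{t-1} - 0.6*u_{t-2}
decay = np.zeros(T)  # AR(1): u_t = 0.6*u_{t-1}
hump[0] = decay[0] = 1.0  # normalized impact of the productivity shock

for t in range(1, T):
    hump[t] = 1.5 * hump[t - 1] - (0.6 * hump[t - 2] if t >= 2 else 0.0)
    decay[t] = 0.6 * decay[t - 1]

print("hump-shaped (inflation-only target):", np.round(hump[:8], 2))
print("monotone decay (optimal policy):    ", np.round(decay[:8], 2))
```

The AR(2) response rises for a couple of periods before turning down, just like the first graph; the AR(1) response peaks on impact, like the second.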

What does this mean? This paper gives us two ways to interpret what has happened since last Fall. One: there was a small-to-medium-sized real shock, but because the Fed cares too much about inflation, unemployment skyrocketed. Reality looked (and looks) like the first graph. Or two: the Fed is following optimal policy but there was a huge real shock. Our reality is more like the second graph, but amplified.

Ironically, those making a bunch of noise about banking regulation, centering the blame for the recession on the financial sector, are arguing for the second graph. My impression is that those same people are also more likely to be agitating for more aggregate-demand policy. But the second graph has the Fed acting optimally, i.e. there’s no need for further stimulus because the Fed is already doing everything necessary. If you buy the logic of this paper, you can’t have it both ways.

PS – This model also has a positive response of inflation to the real shock. Given that we saw a negative response last Fall, there’s still room for a monetary shock in the story.

EMH and “The Market”

In comments, Mike linked to one of his posts from a couple of months ago where he quotes William Easterly:

The most important part of the much-maligned Efficient Markets Hypothesis (EMH) is that nobody can systematically beat the stock market. Which implies nobody can predict a market crash, because if you could, then you would obviously beat the market. This applies also to other asset markets like housing prices.

Mike goes on to describe a paper called The Limits of Arbitrage. Under the realistic assumptions that only big players arbitrage and that there is an agency problem, where those big traders have information about fundamental prices that the people who hire them do not ((Realistic assumptions, yes, but how sensitive are the results to these assumptions? And how would they interact with other behavioral assumptions (e.g. why did the principals hire the agents in the first place)? What happens as the arbitrageur uses more and more of his own money?)), even these big traders can’t stay solvent long enough to trade prices back to fundamentals. Thus, Easterly’s contention is false.

I don’t see the connection. Nothing in that paper suggests market prices don’t incorporate all available public information. Yes, the arbitrageurs know more than the public. If their principals allowed it, they’d trade on that information, but they don’t, so they can’t. There are no profit opportunities.

The intellectual history of the EMH is basically:

  1. Fama names the EMH
  2. People test EMH making assumptions about asset pricing (e.g. CAPM)
  3. Early tests don’t reject EMH, but later tests do
  4. Fama observes these rejections are of the joint test of the asset pricing model and EMH… the problem could be with the asset pricing model
  5. People try different pricing models and discover there aren’t any good ones (i.e. the joint test keeps getting rejected)
  6. The Limits of Arbitrage suggests the problem is with liquidity constraints
  7. People again say this is a problem with EMH
  8. Cue Fama…

It’s funny. I’m beginning to think that the EMH isn’t a testable hypothesis. It’s a subsidiary hypothesis, like the assumption in cosmology that the physical constants are in fact constant, that makes it possible to test other hypotheses. Without the EMH, we could use any ad hoc explanation to explain any particular behavior. Under this view, anomalies teach us about asset pricing models, not about the rationality of markets.
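To make the joint-hypothesis point concrete, the classic test pairs the EMH with the CAPM in a time-series regression (standard textbook material, not tied to any one of the papers above):

$$R_{i,t} - R^f_t = \alpha_i + \beta_i\,(R_{m,t} - R^f_t) + \varepsilon_{i,t}.$$

The pair jointly predicts $\alpha_i = 0$ for every asset. A significant $\alpha_i$ rejects the joint hypothesis, but it can’t tell you whether prices fail to incorporate information or whether $\beta_i$ times the market premium is the wrong model of required returns.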

True, Fama’s work depended on arbitrage having no limits, but it should be easy (as in, not easy at all) to redefine efficiency to include the liquidity constraints of arbitrageurs. Then we could say markets are constrained efficient. I have no idea what this would mean for the allocation of capital in the real economy (the paper points out that the extent of the agency problem varies by asset class, but I don’t imagine asset classes are much correlated with “capital classes” or sectors of the real economy). And the policy implications?