Wage-age profiles now and then

The red line is the wage-age profile from 1990 to 2005 and the blue line is the wage-age profile from 1968 to 1980. I picked those years arbitrarily. These profiles were calculated so that what you’re seeing is the average “within person” wage profile over workers’ lives ((I estimated this equation with 18-year-olds as the comparison group. The y-axis is the estimated two-way panel coefficient on the age dummy plus the average log wage of 18-year-olds in the appropriate time period. Everything’s waaaay significant. These are heads of households with positive wages in the PSID. The R code and the data set are available, but the data set is too big to post to my hosted account, so email me if you want it.)). This means there’s no funny business with changes in demographics or whatever:

In the good ol’ days, workers ramped up their wages early in their careers and then wages flattened out for the rest of their careers. In these evil dark ages of widening inequality, it takes longer for workers’ wages to peak, but the peak is higher than before. Also, the peak comes so late that there’s never a period of stagnant wage growth.
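Here’s a minimal sketch of that estimation in Python (the post’s actual work was in R on the PSID); the synthetic data and variable names are mine, and for brevity it uses only person fixed effects rather than the full two-way setup:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic balanced panel: 200 workers observed at every age 18-40.
# (Illustrative stand-in for the PSID heads-of-household sample.)
n, ages = 200, np.arange(18, 41)
person_effect = rng.normal(0.0, 0.3, n)
true_profile = 0.04 * (ages - 18) - 0.0008 * (ages - 18) ** 2  # concave age profile

df = pd.DataFrame({"person": np.repeat(np.arange(n), len(ages)),
                   "age": np.tile(ages, n)})
df["log_wage"] = (2.5 + person_effect[df["person"].to_numpy()]
                  + true_profile[df["age"].to_numpy() - 18]
                  + rng.normal(0.0, 0.05, len(df)))

# Within (person fixed-effects) transformation, then regress on age dummies
# with 18-year-olds as the omitted comparison group.
df["w_dm"] = df["log_wage"] - df.groupby("person")["log_wage"].transform("mean")
X = pd.get_dummies(df["age"], drop_first=True).astype(float)  # drops age 18
X_dm = X - X.mean()  # balanced panel: person-demeaning = subtracting column means
beta, *_ = np.linalg.lstsq(X_dm.to_numpy(), df["w_dm"].to_numpy(), rcond=None)

# The profile plotted above: age coefficient + mean log wage of 18-year-olds
base = df.loc[df["age"] == 18, "log_wage"].mean()
profile = base + np.concatenate([[0.0], beta])
```

With a balanced panel the within transformation of the dummies reduces to subtracting column means, which keeps the sketch short; real PSID data is unbalanced, so you’d demean by person.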

Whose line is it anyway?

Here’s an interesting interview with Barro and Krugman in 2003 regarding Bush’s fiscal stimulus plan. To my uncharitable eyes, Barro 2003 is more consistent with the priorities of Barro 2009 than the Krugman versions are with each other, but on the pro/anti-stimulus front the combatants have flip-flopped.

BTW, the most important fact about the economy bar none (and don’t listen to any idiots who claim otherwise): THE one and only interest rate was getting close to zero in January 2003.

Labor wedge watch

People are having a hard time getting into nursing school.

“So many people want to be a nurse now that there’s just not enough room in the classroom and the hospitals to accommodate all those people,” said Bradford.

Her theory is more than personal opinion. Recent data indicates that nursing classrooms are filling up faster, and the fight to get in is tougher than ever.

In the Sacramento region, at least three colleges — American River, Chico State and Yuba — report significant jumps in nursing school applications since 2004.

“We’re hearing of people waiting two years before they get to the top of a (nursing school) waiting list,” said Spetz.

Why not open more schools, hire more teachers, etc.? The article says it’s because teachers don’t get paid enough and schools don’t have enough money. Something fishy is going on when demand for a good rises but prices and/or quantities don’t respond.

Unemployment and housing bubbles

As I mentioned in comments below, Casey Mulligan posted a working paper this weekend that argues this recession is consistent with shifts in labor supply. He gives evidence that this shift is due to increases in the labor wedge.

He gives two causes of increased labor market distortions. The first is talk by the IRS and politicians of being more lax in enforcement against economically distressed taxpayers. Of course, this gives an incentive to be viewed by the IRS as “economically distressed”. The second distortion, unique to this recession, is caused by the way distressed mortgages are handled. Rational banks and soft-hearted policymakers will decrease mortgage payments for “economically distressed” individuals. At some levels of income and some sizes of mortgage payments, then, there’s effectively a 100% income tax.

Anyway, there are a million models that could generate labor market wedges. The name of the game is to find one that is also consistent with other salient facts. Mulligan’s mortgage story seems like a stretch, but there is one testable implication I can think of: the housing bubble affected different parts of the country differently. Homeowners in the bubbliest areas will be more likely to face incentives to reduce their labor supply because of offers (or expectations of offers) to decrease mortgage payments conditioned on income or employment status. Here’s a scatter plot of the change in the Case-Shiller index (a measure of the size of the housing bubble) against the change in the unemployment rate. These are year-over-year changes from November 2007 to November 2008.
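The calculation behind that scatter plot can be sketched like so; the metro-level numbers below are made up for illustration, not the real Case-Shiller or BLS figures:

```python
import numpy as np

# Illustrative metro-level numbers, NOT the actual Case-Shiller/BLS data:
# year-over-year change, Nov 2007 -> Nov 2008, in the home price index (%)
# and in the unemployment rate (percentage points).
price_change = np.array([-30.0, -25.0, -18.0, -10.0, -5.0, -2.0])
unemp_change = np.array([3.5, 3.0, 2.2, 1.5, 1.0, 0.8])

# Fit the line behind the scatter: under the mortgage story, bigger price
# busts should come with bigger unemployment jumps (a negative slope).
slope, intercept = np.polyfit(price_change, unemp_change, 1)
corr = np.corrcoef(price_change, unemp_change)[0, 1]
```

A negative slope in the real data would be consistent with the story, though of course hardly proof of it.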

Somebody much better than I at econometrics could see if this simple relation holds for reals.

(h/t Everyday Economist)

History lesson

I eagerly await Kling’s history lesson. Meanwhile, a real life working macroeconomist has this to say about the emergence of modern macro (which he calls New Macro… I guess I’ll have to defer):

Dynamic equilibrium theory made a quantum leap between the early 1970s and the late 1990s. In the comparatively brief space of 30 years, macroeconomists went from writing prototype models of rational expectations (think of Lucas, 1972) to handling complex constructions like the economy in Christiano, Eichenbaum, and Evans (2005). It was similar to jumping from the Wright brothers to an Airbus 380 in one generation.

A particular keystone for that development was, of course, Kydland and Prescott’s 1982 paper “Time to Build and Aggregate Fluctuations”. For the first time, macroeconomists had a small and coherent dynamic model of the economy, built from first principles with optimizing agents, rational expectations, and market clearing, that could generate data that resembled observed variables to a remarkable degree. Yes, there were many dimensions along which the model failed, from the volatility of hours to the persistence of output. But the amazing feature was how well the model did despite having so little of what was traditionally thought of as the necessary ingredients of business cycle theories: money, nominal rigidities, or non-market clearing.

Except for a small but dedicated group of followers at Minnesota, Rochester, and other bastions of heresy, the initial reaction to Kydland and Prescott’s assertions varied from amused incredulity to straightforward dismissal. The critics were either appalled by the whole idea that technological shocks could account for a substantial fraction of output volatility or infuriated by what they considered the superfluity of technical fireworks. After all, could we not have done the same in a model with two periods? What was so important about computing the whole equilibrium path of the economy?

It turns out that while the first objection regarding the plausibility of technological shocks is alive and haunting us (even today the most sophisticated DSGE models still require a notable role for technological shocks, which can be seen as a good or a bad thing depending on your perspective), the second complaint has aged rapidly. As Max Planck remarked somewhere, a new methodology does not triumph by convincing its opponents, but rather because critics die and a new generation grows up that is familiar with it. Few occasions demonstrate the insight of Planck’s witticism better than the spread of DSGE models. The new cohorts of graduate students quickly became acquainted with the new tools employed by Kydland and Prescott, such as recursive methods and computation, if only because of the comparative advantage that the mastery of technical material offers to young, ambitious minds. And naturally, in the process, younger researchers began to appreciate the flexibility offered by the tools. Once you know how to write down a value function in a model with complete markets and fully flexible prices, introducing rigidities or other market imperfections is only one step ahead: one more state variable here or there and you have a job market paper.

Hey, I represent that last remark!

I wonder why he didn’t look at the ’70s?

Krugman estimates a Phillips curve thus solidifying my belief that he, along with Kling, doesn’t know modern macro.

There is no structural relationship between output gaps and inflation. The correlation he estimates means nothing. Nothing. Certainly, you can’t, as he does, read future inflation rates off the chart using estimates of output gaps.

Expectations of inflation matter. We don’t know what expectations of inflation are because, being expectations, they’re in people’s heads. We can get some hints at what expectations are by looking at markets where people bet on future inflation rates ((Annoyingly, the Fed has stopped publishing statistics that translate the prices in these markets into a measure of inflation expectations. Not sure why they’ve suspended this analysis when it would be really, really nice to have it. Krugman, strangely given this recent post, looked at similar data a few weeks ago. He, of course, declared inflation expectations to be tanking, threatening deflation. The problem with just comparing the TIPS rate and the Treasury rate, like Krugman does, is that there’s a liquidity premium. People will bid up the price of a bond that has a thinner market. Those two markets may have different premiums, or the premiums might swamp the differences in these rates due to inflation expectations. Greg Mankiw says using TIPS data to understand deflationary expectations doesn’t work because the payoffs on those bonds are asymmetric. Anyway, while I suspect TIPS data is a squirrelly measure of expectations, I wish someone could explain this stuff to me.)). Another source of information about expectations is surveys of inflation forecasters (industry economists and the like). A recent paper by Mankiw, Reis and Wolfers looks at these data over the last few generations. Also, here are some measures the Fed uses to get a sense of what people expect to happen to the price level. Notice “real” interest rates are positive, which isn’t consistent with Krugman’s definition of a liquidity trap (or what I call a weak liquidity trap).

Anyway, there’s no reason to think looking at historical inflation rates will tell us anything about what people expect future inflation to be. I think this is doubly so given the breakdown of traditional monetary policy. People won’t necessarily believe the Fed has as much control over inflation as it usually does. This goes for professional forecasters, too. They may not be so good at their jobs these days.

The extent of our knowledge

One of the reasons I think modern macro is successful is that it cleanly separates what we know from what we don’t know. Making assumptions about how people make decisions and about how those decisions interact, modern macro models generate predicted behavior. In our models, we capture all that we don’t know in what are called “exogenous shocks”. They’re “exogenous” because they’re determined outside the model, and they’re “shocks” because their exact values are unpredictable even if the economic agents inhabiting the model’s world know their distribution.

Given the behavior of the exogenous shocks and the predicted responses of the model’s agents to those shocks, we can take the output of the model and compare it to real data. If a model replicates real data, then it’s assumed this is due to the assumptions about shocks and behavior. Because the shocks represent things we don’t know, we’d like to have the simplest shocks possible and therefore have the structure of the model explain as much of the data as possible.

Similarly, we can compare models by seeing which explains the most patterns in the data (or at least the patterns we care about) using the same exogenous shocks. If one model explains more data, we say that one tells us more about whatever is being studied. The shocks, however, continue to reflect our ignorance.
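A toy sketch of that workflow, with an AR(1) shock process and two made-up “models” (the response coefficients and the target moment are illustrative, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_shock(rho: float, sigma: float, periods: int) -> np.ndarray:
    """Exogenous AR(1) shock process: agents can know rho and sigma (the
    distribution) while each period's innovation remains unpredictable."""
    z = np.zeros(periods)
    eps = rng.normal(0.0, sigma, periods)
    for t in range(1, periods):
        z[t] = rho * z[t - 1] + eps[t]
    return z

# Feed identical shocks into two toy "models" and ask which model's
# output volatility is closer to an observed "data" moment.
shocks = ar1_shock(rho=0.95, sigma=0.007, periods=10_000)
output_a = 1.0 * shocks   # model A: output moves one-for-one with the shock
output_b = 1.5 * shocks   # model B: amplifies the shock
target_vol = 0.02         # stand-in for a moment measured from real data
fit_a = abs(np.std(output_a) - target_vol)
fit_b = abs(np.std(output_b) - target_vol)
better_model = "A" if fit_a < fit_b else "B"
```

Holding the shock process fixed while comparing model-implied moments to data moments is the comparison described above; the shocks themselves stay a measure of our ignorance.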

I like this paper by Eggertsson (h/t EotAW) because it shows how important the assumptions on exogenous shocks can be. Cole and Ohanian have several papers (findings summarized here) that show New Deal policies (by which they and Eggertsson mean industrial and union policies increasing monopoly power) were contractionary. This is actually the standard view that follows from microeconomic theory.

Eggertsson shows that after you include some standard modern macro model features (e.g. sticky prices, monopolistic competition) and change the assumption on the exogenous shocks, these New Deal policies are, to my surprise, expansionary. The reason is that at the zero lower bound on interest rates, without these policies, expected inflation fails to materialize and the economy falls into a deflationary spiral. When the government gives unions and big companies monopoly power, on the other hand, people expect those unions and big companies to use that power to raise prices. In this way, New Deal policies take over traditional monetary policy’s role in setting inflation expectations.

But I really, really like the paper because it makes explicit that the reason its results differ from Cole and Ohanian’s standard results is the assumptions about the shocks. In both sets of research, the shock is to the preference for precautionary savings (“animal spirits”). Cole and Ohanian assume the 1929 crash and the bungles of Hoover were the exogenous shock and that when Roosevelt took office that shock began to dissipate. Eggertsson, though, assumes the shock persisted throughout the Great Depression. Which is the right assumption, nobody knows.

The paper has very nicely separated what we know from what we don’t know. This makes the task for future research very clear: what was the nature of those shocks to animal spirits?

To me, this strand of the modern macro literature is encouraging. Everyone’s assumptions are laid out on the table, theories are making heavy contact with data, and progress can easily be identified. It’s almost like this is science.

Multiple multipliers

Gali’s paper is really good. It’s the best case for fiscal policy I’ve seen using the modern macro framework. I think the introduction and conclusion are accessible to a general audience, and the empirical and modeling sections are good reviews of New Keynesian models and methods for you macro geeks.

Anyway, the model’s innovation is this chart (particularly the increase in consumption due to government expenditure):

From research

The Greek symbol lambda represents the percentage of the population that lives hand-to-mouth. These are folks that consume everything they earn; they don’t save. The model is agnostic as to why these folks make decisions like this, e.g. limited access to credit markets, myopia, etc. On the vertical axis, you see the multipliers implied by each level of hand-to-mouthers. To get multipliers above one — to make government spending worth it — less than 75% of the population needs to be savers. Christina Romer’s multiplier of 1.5 corresponds to having about 40% of the population living paycheck to paycheck.

I’ll leave it to the reader to decide what lambda is reasonable.

What’s the mechanism, you ask?

Rule-of-thumb consumers [wa: i.e. non-savers] partly insulate aggregate demand from the negative wealth effects generated by the higher levels of (current and future) taxes needed to finance the fiscal expansion, while making it more sensitive to current disposable income. Sticky prices make it possible for real wages to increase (or, at least, to decline by a smaller amount) even in the face of a drop in the marginal product of labor, as the price markup may adjust sufficiently downward to absorb the resulting gap. The combined effect of a higher real wage and higher employment raises current labor income and hence stimulates the consumption of rule-of-thumb households. The possible presence of countercyclical wage markups (as in the version of the model with non-competitive labor markets developed above [wa: and represented by the graph above]) provides additional room for a simultaneous increase in consumption and hours and, hence, in the marginal rate of substitution, without requiring a proportional increase in the real wage.

Non-savers don’t save even though they know their taxes will be higher in the future. So when government expenditures increase employment and price stickiness keeps real wages high, the non-savers consume more.
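A toy numerical version of that mechanism (my own made-up parameter values, not Gali’s calibration):

```python
def consumption_response(lam: float, income_gain: float,
                         saver_wealth_effect: float) -> float:
    """Aggregate consumption change in a toy two-type economy: a share
    lam of households is hand-to-mouth and spends its labor-income gain;
    savers adjust by the (negative) wealth effect of future taxes."""
    return lam * income_gain + (1.0 - lam) * saver_wealth_effect

# 40% hand-to-mouth, a one-unit labor-income gain, savers cutting back
# by 0.3 units in anticipation of future taxes: aggregate consumption
# still rises.
delta_c = consumption_response(lam=0.4, income_gain=1.0, saver_wealth_effect=-0.3)
```

With lambda at zero every household is a Ricardian saver and aggregate consumption falls, which is the standard crowding-out result; a big enough hand-to-mouth share flips the sign.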

Good stuff.