Truth

esr speaks it:

All data, including primary un-“corrected” datasets, must be available for auditing by third parties. All modeling code must be published. The assumptions made in data reduction and smoothing must be an explicitly documented part of the work product.

These requirements would kill off AGW alarmism as surely as a bullet through the head.

Transparency would kill off AGW denialism, too.

What is a catastrophe?

In the previous post, I mentioned two types of catastrophe: cliff-diving and gradual. Suppose the first is a sudden major decrease in output. How much would returns to capital have to decline to get 1% average yearly returns over a century (given 6% “usual” returns)? This amounts to finding the one-year shock x that solves (1.06)^99 × (1 + x) = (1.01)^100.

If I got my sums right, this implies about a 99% reduction in returns to capital in the year of the catastrophe.

Now, suppose the catastrophe plays out over a decade. How negative would returns to capital have to be during that decade to get 1% average returns over the century? Doing a similar calculation (solving (1.06)^90 × (1 + g)^10 = (1.01)^100), returns to capital would have to be about -35% every year for a decade to get average returns that low over the century.
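Here’s the arithmetic as a quick Python sketch, in case anyone wants to audit my sums (transparency and all that). The 6% and 1% figures are the ones above; the rest is just compound growth:

```python
# Quick check of my sums: what shock drags a century's average
# return from 6% down to 1%?
usual, target = 1.06, 1.01

# Cliff-diving: 99 usual years and one catastrophic year x,
#   (1.06)^99 * (1 + x) = (1.01)^100
x = target**100 / usual**99 - 1
print(f"return in the catastrophe year: {x:.0%}")   # about -99%

# Slow motion: 90 usual years and a decade of annual return g,
#   (1.06)^90 * (1 + g)^10 = (1.01)^100
g = (target**100 / usual**90) ** (1 / 10) - 1
print(f"annual return during the bad decade: {g:.0%}")  # about -35%
```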

These examples make the catastrophes of the 20th century pale in comparison. In the US in the 1930s, GDP growth was, on average, positive. Austria was the biggest loser of WWII in GDP terms, and even its GDP only decreased by about 10% a year from 1938 to 1945.

In Weitzman’s calibration, there’s a 1 in 200 chance of having 1% yearly returns over the next century. These examples make this calibration look rather unrealistic.

But innovation is endogenous, too

Risk aversion usually makes investments look less attractive. In the standard story, then, as risk aversion goes up, we would spend less money today to avert future catastrophes. Martin Weitzman argues, however, that this relationship reverses when we’re uncertain about future productivity levels. For if productivity turns out to be low in the future (e.g. because of a climate catastrophe), then consumption will be low. Low consumption means high marginal utility and so low discount rates.

My purpose here is to focus sharply on clarifying the long-run discounting issue by using a super-simple super-crisp formulation… There is no good substitute for seeing clearly before one’s eyes the basic structure of a model laid bare.

Suppose [there’s uncertainty about discount rates in the future and] that discount rate r_i > 0 will occur with “probability-like weight” w_i > 0, where Σ w_i = 1… The [expected] discount rates … decline over time starting from their average value and going down over time to approach their lowest possible value. Over time, the impact of the higher discount rates … diminishes because the higher rates effectively discount themselves exponentially out of existence, leaving the … field to the lower discount rates (and, eventually, to the lowest).

[A]ny given value of [future productivity level] subsequently determines the endogenous future growth rate … and endogenous consumption level … as the solution to a Ramsey optimal growth problem (given that value of [productivity]). The paper shows that when future productivity … is uncertain, then higher values of [risk aversion] are associated with lower future discount rates, thereby reversing the conventional wisdom.

[An example using standard parameter values] indicate[s] that the risk-aversion effects of uncertain future productivity on lowering distant-future discount rates might be quite powerful. The driving force is a “fear factor” associated with the possibility of low-probability but catastrophically-high permanent damages to future productivity.

I personally would be inclined toward a much lower climate-change discount rate than 6% per annum, but the ultimate goal of this paper … will be to show that under uncertainty, even with expected discount rates as high as 6%, the “effective” discount rate, which “ought” to be used, can be much lower than 6%.
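To see the mechanism, here’s a toy version in Python. To be clear, this is a minimal sketch, not Weitzman’s model or his calibration: two productivity states generate Ramsey-style discount rates r_i = delta + eta × g_i, the certainty-equivalent rate is R(t) = -ln(Σ w_i e^(-r_i t))/t, and every parameter value below is invented for illustration:

```python
import numpy as np

# Toy version of Weitzman's mechanism (not his calibration).
# Two productivity states imply Ramsey-style discount rates
#   r_i = delta + eta * g_i
# where delta is pure time preference, eta is risk aversion and
# g_i is consumption growth in state i. All numbers are made up.
delta = 0.01
w = np.array([0.99, 0.01])   # "probability-like weights"
g = np.array([0.02, -0.01])  # business as usual vs. permanent catastrophe

def effective_rate(t, eta):
    """Certainty-equivalent rate: R(t) = -ln(sum_i w_i e^{-r_i t}) / t."""
    r = delta + eta * g
    return -np.log(np.sum(w * np.exp(-r * t))) / t

for eta in (1.0, 2.0, 3.0):
    row = ", ".join(f"t={t}: {effective_rate(t, eta):6.2%}" for t in (10, 100, 500))
    print(f"eta={eta}:  {row}")
```

At short horizons, more risk aversion (a higher eta) raises the effective rate, which is the conventional wisdom. At distant horizons the catastrophe state dominates, the high rates have discounted themselves out of existence, and a higher eta lowers the rate. That’s the reversal.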

The results in the example he gives depend very much on the distribution of future productivity, especially on its variance. Here’s his assumed distribution:
[Figure: Weitzman’s assumed probability distribution over future productivity]

In his model, this distribution should be interpreted as the distribution of average productivity over a very long time (i.e. the next century or so). To me, his assumed uncertainty about productivity is too high (i.e. the variance of the distribution is too wide). Is it even remotely possible that the capital-output ratio will average 50 or 100 for the next century? Even if there’s a sudden, cliff-diving catastrophe, I can’t imagine people won’t invent their way around it, increasing the productivity of capital in the long run. And if catastrophe happens in slow motion, certainly people will adjust fast enough to keep returns on capital high on average.

The most striking stylized fact about growth is its constancy. For whatever reason, people route around the particular circumstances of their time and space and invent their way to 2% growth. Why would climate change pose fundamentally different obstacles to innovation than what has been seen over the last 200 years? Do the open systems studied in Diamond’s Collapse tell us anything about the closed global system?

Cool down, cool down

Even if The Emails discredit all the “science” on tree rings, there are still ice cores and borehole data. And even if all paleoclimatology is BS, thermometer readings show a sharp spike in temperatures over the last couple of decades.

One piece of evidence for AGW has, perhaps!, been discredited. Not all evidence has been discredited. We still know CO2 is a greenhouse gas, humans have been pumping tons of it into the air, and there’s the direct evidence of warming. There is no reason at all to think the AGW theory is a “scam” or a “fraud”.

Other people’s emails

I agree with McArdle, who agrees with Cowen and Hanson. There appears to be consensus in the literature…

Lesson one: scientists are people, people are jerks and that Mann guy seems to be an especially big person.

Lesson two: it’s way too fun reading other people’s email.

Lesson three (the real lesson): data and methods should be freely available. Reading about all the nonsense that goes on when people try to replicate policy-important studies (like the Mann et al. 1999 paper) is infuriating. Luckily, in my experience, we don’t have this problem in economics.

Contrary to McArdle and Cowen, my prior on the joint claim “the last century was the hottest of the millennium” and “tree ring data are a good proxy for temperature” has decreased after reading some of the replication studies. While thermometer readings show a pronounced increase in temperatures in recent decades, the tree ring data probably don’t. There’s even a good chance the tree ring data, if they are a good temperature proxy, show several periods in the past 1000 years that were warmer than the last century.

Here’s a paper in Science that uses a different data set than the Mann paper and shows that the so-called “Medieval Warm Period” probably had temperatures at least as high as the 20th century’s.

I haven’t seen a good explanation for why thermometer readings show different trends than tree rings (or other temperature proxies). Of course, I trust thermometers over proxies, but still.

How much is a “big change” in the climate?

Here’s the density of temperature changes over centuries. I used these data and calculated the average change in temperature per century.
[Figure: density of per-century temperature changes from the ice core data]
Last century’s temperature increase of 0.8 degrees C was an outlier (but not an extreme outlier): about 95% of per-century temperature changes were smaller than it. If the climate models are correct and the world sees a 2.5 degree increase, that would be an extreme outlier. Only about 1 in three or four hundred centuries sees a temperature change that dramatic.
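The calculation looks something like this Python sketch. The file name is a stand-in for the ice core data linked above, and fitting a normal to extrapolate the 2.5 degree tail is a strong assumption on my part:

```python
import numpy as np
from scipy.stats import norm

# Absolute per-century temperature changes from the ice core record.
# "ice_core_temp_changes.csv" is a hypothetical placeholder for the
# linked data, one change per line.
changes = np.abs(np.loadtxt("ice_core_temp_changes.csv"))

# Share of centuries with smaller swings than last century's 0.8 C:
print(f"share below 0.8 C: {np.mean(changes < 0.8):.0%}")  # roughly 95%

# A 2.5 C change is off the empirical support, so extrapolate the tail
# with a fitted normal (a strong assumption about the tail shape):
mu, sigma = changes.mean(), changes.std()
p = norm.sf(2.5, loc=mu, scale=sigma)
print(f"2.5 C or more: about 1 in {1 / p:.0f} centuries")
```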

This, of course, doesn’t guarantee catastrophe, but it suggests we should at least insure ourselves against the possibility of catastrophe.

Mass outbreak of illiteracy?

I just read The Chapter Five (in this book) and I’ll go so far as to say the critics don’t even have a point.

The chapter motivates geo-engineering without saying anything outlandish about the science (as far as this semi-informed layperson can tell). There isn’t so much as an implication in the whole chapter that global warming isn’t a big deal. In fact, such a level-headed discussion of solutions to the problem actually bumped up my subjective probability that global warming is an issue to worry about. There’s no use fretting over something we can’t, practically, do anything about. It appears more likely to me, now, that we can do something about global warming.

The authors haven’t given up on marginal analysis, they seem to get that people respond to incentives, and they appear to be aware of Pigovian-type taxes. They explicitly address the problems with carbon pricing and move the discussion forward with their description of one alternative policy (i.e. subsidized research in geo-engineering).

The reaction to this chapter was just plain strange.

Does this argument really work?

Tim Lambert, responding to this opinion piece (why in the deuce is science being argued in the opinion pages!), says:

If the hot spot really is missing it does not prove that CO2 is not causing warming, but it would indicate something wrong with the models. (Which might mean that things are worse than what the models predict.)

That parenthetical kills me. Somebody should try it in a macro seminar: “No, my model doesn’t show the characteristic hump-shaped inflation response, but this means my model is wrong. The correct model might make my conclusion even stronger!”

But what really pisses me off about the piece is its title. Yeah, yeah, I know it’s a blog post, but it’s a blog post at “Scienceblogs”, which I would expect to be more scientific-y or something. Anyway, how is a facts-driven, plausible-sounding critical review a “war on science”?

In the backwoods where I’m from, facts-driven, plausible-sounding stories are science.