A new paper titled “Normalizing economic loss from natural disasters: A global analysis” by Neumayer and Barthel (NM2010) in the top journal Global Environmental Change introduces a new method to normalize disaster-related economic losses over time. A few papers have already dealt with this issue: we see rising disaster losses, but also increasing wealth, so the question is whether more disasters or more fancy beach houses are ultimately behind the upward damage figures. NM2010 state in their abstract (emphasis added):
In this article, we argue that the conventional methodology for normalizing economic loss is problematic since it normalizes for changes in wealth over time, but fails to normalize for differences in wealth across space at any given point of time. We introduce an alternative methodology that overcomes this problem in theory, but faces many more problems in its empirical application. Applying, therefore, both methods to the most comprehensive existing global dataset of natural disaster loss, in general we find no significant upward trends in normalized disaster damage over the period 1980–2009 globally, regionally, for specific disasters or for specific disasters in specific regions.
I have myself thought about this issue in German at Die Klimakrise here and here, and I can’t help noticing that this paper, just like several others before it, fails on one crucial account: it does not look into mitigation, i.e. defensive measures against disasters (I don’t really know why this isn’t called adaptation; maybe it’s meant as mitigating e.g. storm damages?). In a word, these papers treat present-day disaster preparedness as if it were the same as a few decades or even a century ago. And that’s simply not the case.
Now don’t get me wrong: Scientifically, this is perfectly fine. It is stated in the papers that mitigation measures are not evaluated, and that the results should therefore be taken with caution. However, since we have seen a lot of criticism against some climate scientists allegedly not communicating clearly enough about, say, uncertainties and inconclusive research results, one has to wonder how these papers are being discussed in the blogosphere. Rabett Run had some interesting questions about this back in February 2010 following my own observations, so it’s worth looking again.
These damage-figure normalization papers assume that a house built in the 1980s, the 1950s, or the 1920s (depending on how far back in time they look) has the same construction standard, and therefore the same resilience against (let’s stick with the hurricane example for a while) stormy winds, as a house built in the 2000s. They also assume that all the flood management programs introduced throughout the 20th century have virtually no effect whatsoever, and that the hurricane observation technology in place and the hurricane watch/warning schemes that have been set up have zero effect.
One might even be able to argue that this actually is the case, and that NOAA’s National Hurricane Center, along with everything else done to prevent material losses, has a net economic benefit of zero. However, none of the papers does so. So these loss-normalization exercises take into account everything that would lead to an artificial increase in loss figures (i.e. more people building more expensive houses near vulnerable areas), for which the gross figures are then corrected, yet they leave out everything that would lead to a reduction of these figures over time.
Other papers concerned with normalizing disaster-related damages suffer from the same problem. For example, Pielke et al. 2008 state in “Normalized Hurricane Damage in the United States: 1900–2005” (emphasis added):
This paper normalizes mainland U.S. hurricane damage from 1900–2005 to 2005 values using two methodologies. A normalization provides an estimate of the damage that would occur if storms from the past made landfall under another year’s societal conditions. Our methods use changes in inflation and wealth at the national level and changes in population and housing units at the coastal county level. Across both normalization methods, there is no remaining trend of increasing absolute damage in the data set, which follows the lack of trends in landfall frequency or intensity observed over the twentieth century.
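The multiplicative adjustment the abstract describes can be sketched roughly as follows. This is a minimal illustration of the general idea only: the function name and all factor values are my own placeholders, not taken from Pielke et al. 2008, and note that no term for building codes or other mitigation appears anywhere.

```python
def normalize_loss(nominal_loss, inflation_factor, wealth_factor, population_factor):
    """Scale a historical nominal loss to a base year's societal conditions.

    Each factor is the ratio of the base-year value to the event-year value
    (e.g. a price-index ratio, a real-wealth-per-capita ratio, a coastal-county
    population ratio). Changes in building codes, flood management or warning
    systems are conspicuously absent from the product.
    """
    return nominal_loss * inflation_factor * wealth_factor * population_factor

# Hypothetical example: a $1bn loss decades ago, adjusted with made-up factors
adjusted = normalize_loss(1.0e9,
                          inflation_factor=8.0,
                          wealth_factor=2.5,
                          population_factor=3.0)
print(adjusted)  # 6.0e10, i.e. $60bn under base-year conditions
```

The point of the sketch is simply that every factor in the product pushes historical losses *up* toward present-day conditions; nothing in it pushes them *down* to reflect improved resilience.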
However, further down the pages we find:
Another important factor is mitigation and the implementation of stronger building codes. There is considerable evidence that strong building codes can significantly reduce losses; for example, data presented to the Florida Legislature during a debate over building codes in 2001 indicated that strong codes could reduce losses by over 40% (IntraRisk 2002). As strong codes have only been implemented in recent years (and in some cases vary significantly on a county-by-county basis), their effect on overall losses is unlikely to be large, but in future years efforts to improve building practices and encourage retrofit of existing structures could have a large impact on losses.
Of course, next to building codes there’s also building practice, flood management, hurricane preparedness and a lot of other things that have changed over time. However, these are not even mentioned in Pielke et al. 2008, let alone incorporated into the analysis. Likewise, there is Schmidt et al. 2009, who state in “The impact of socio-economics and climate change on tropical cyclone losses in the USA”:
The findings show the increase in losses due to socio-economic changes to have been approximately three times greater than that due to climate-induced changes.
And again, no analysis of any protective measures against disaster damages. I remember having a brief e-mail conversation with Roger Pielke Jr. a while ago about this (and now I remember that it was my turn to continue the dialogue). His latest post on this has brought the issue back to my mind. Roger comments on NM2010 and writes:
The paper finds no evidence of upward trends in the normalized data.
Which is only half the truth, since the abstract of NM2010 contains (emphasis added):
Due to our inability to control for defensive mitigation measures, one cannot infer from our analysis that there have definitely not been more frequent and/or more intensive weather-related natural hazards over the study period already. Moreover, it may still be far too early to detect a trend if human-induced climate change has only just started and will gain momentum over time.
So if you’re a true honest broker, what should you do with all these papers? If one does not look into defensive/mitigation measures, one simply shouldn’t state (like Roger does) that there’s no evidence for an upward trend in disaster losses and leave it at that. In fact, if a normalization method finds no significant upward trend, one could easily interpret this as quite disturbing. Because it either means that all our adaptive measures against disasters have had zero effect so far (quite a startling conclusion for the folks who fancy adaptation to climate change), or it means that the hypothetical reduction in disaster losses one would expect over time has been eaten up by actually increasing storm damages.
Update: A meta-study by Bouwer (2010) is also interesting in this regard. In “Have disaster losses increased due to anthropogenic climate change?”, Bouwer summarizes the findings of 22 recent quantitative studies and comes to the conclusion (emphasis added):
Analyses show that although economic losses from weather related hazards have increased, anthropogenic climate change so far did not have a significant impact on losses from natural disasters. The observed loss increase is caused primarily by increasing exposure and value of capital at risk.
It’s as clear as that. With regards to protection against disasters, however, he notes:
When society becomes wealthier and more exposed, investments are more likely to be made, in order to prevent and protect against natural hazards. Normalization studies often fail to correct for measures that reduce vulnerability as they are harder to quantify than changes in exposure. Properly set-up studies would need to include aspects of the hazard (geophysical data), exposure (population and wealth), as well as changes in vulnerability. Some studies do take into account changing vulnerabilities. For instance the normalization study by Crompton and McAneney (2008) corrected over time for increasing resilience of buildings to high wind speeds. A rigorous check on the potential introduction of bias from a failure to consider vulnerability reduction in normalization methods is to compare trends in geophysical variables with those in the normalized data. Normalized hurricane losses for instance match with variability in hurricane landfalls (Pielke et al. 2008). If vulnerability reduction would have resulted in a bias, it would show itself as a divergence between the geophysical and normalized loss data. In this case, the effects of vulnerability reduction apparently are not so large as to introduce a bias.
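Bouwer’s bias check can be sketched in a few lines: compare a geophysical series with the normalized loss series and see whether they track each other. Everything below is invented for illustration (the landfall counts and loss figures are synthetic, not real records); the point is only what a divergence would look like, namely a weak correlation between the two series.

```python
import statistics

# Hypothetical annual hurricane landfall counts and normalized losses ($bn);
# both series are made up purely to illustrate the comparison.
landfalls       = [3, 1, 4, 2, 5, 2, 6, 3]
normalized_loss = [3.2, 0.9, 4.5, 2.1, 5.3, 1.8, 6.4, 3.1]

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(landfalls, normalized_loss)
print(r)  # close to 1 here: losses track the geophysical signal
```

In Bouwer’s framing, a strong match (r near 1) suggests vulnerability reduction has not introduced a large bias; a clear divergence would show up as a much weaker correlation. My doubt, stated in the next paragraph, is about what that match actually proves.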
The latter point was also made by Roger Pielke Jr. in our brief email exchange. It certainly is a valid one, though I still have my doubts. Try turning it around: would the match between landfalling hurricanes and normalized losses be proof enough to dismiss NOAA’s National Hurricane Center as useless for preventing material damages? I don’t think so. What surprises me is that instead of publishing the 23rd normalization study, apparently no one (at least no one I’m aware of, though I’m not really familiar with this field) tries to actually factor in resilience changes.
Also, it is rather strange to see that the number of fatalities goes down with the introduction of hurricane surveillance (and, according to a 2000 study, the same holds for tornadoes), yet while saving lives, the whole effort is supposed not to have significantly prevented material damages.