Thematic Guide to Integrated Assessment Modeling


 

Social Costs of Climate Change

The second example we provide was mired in political debate within the Intergovernmental Panel on Climate Change (IPCC) process. Chapter 6 of the IPCC's Working Group III report, titled "Social Costs of Climate Change: Greenhouse Damages and Benefits of Control," stands out as a major failure of the IA community at large: a chapter with numerous problems and possibly fatal flaws was able to pass muster through several stages of peer review. The extensively peer-reviewed version of the chapter was rejected at a meeting in Geneva only because of political intervention, and the authors were asked to go back and reconsider their work. Subsequently, the key elements of the chapter were not included in the policymakers' summary (Masood and Ochert, 1995). The ongoing controversy is further reflected in Masood (1995) and Meyer et al. (1995).

IPCC reports summarize the state-of-the-art understanding of all elements of the climate problem. From a policy viewpoint, they represent the most important input that the scientific community provides to the climate change negotiation process. Chapter 6 was originally conceptualized to provide an overview of current knowledge regarding greenhouse damages (Pearce et al., 1995). Although not explicitly an integrated modeling task, it turned into an effort by researchers actively involved in IA to integrate knowledge about the impacts of climate change.

It was clear from the beginning that the results presented were controversial. This was evidenced in meetings where the writing team presented their work for review[FN], and more informally through bulletin boards on the Internet[FN]. Although we believe there were a large number of problems with the synthesis, a detailed criticism is beyond the scope of this paper. Here we focus on two issues that we think warrant careful consideration and may shed light on some systematic pitfalls of integrated assessments. The first and more controversial aspect of the work was the assignment of a "value of statistical life" to human mortality worldwide in U.S. dollars. Specifically, the writing team chose to assign a "value" of $1.5 million to an industrialized life and $150,000 to a non-industrialized life. These numbers were then used to convert mortality estimates into damage costs, which were combined with economic damage estimates from other sectors to arrive at aggregate values for global greenhouse damages.

Value of life is a much-debated notion in the technological risk assessment literature (Graham and Vaupel, 1980). Since the abolition of slavery in most parts of the world, human life is not traded in markets, free or otherwise. The idea of ascribing a monetary value to human life is purely a tool of policy analysis. It is a measure of how much a society is willing to invest in reducing mortality. It can also serve as a cost-effectiveness benchmark for programs and regulations aimed at reducing human mortality; as such, it is just one rule that societies use as input in making risk management decisions (Morgan, 1993). As Fischhoff et al. (1990) point out, people use more than just expected mortality to rank risks. They are also concerned about the equitable distribution of risks, individual control over risk mitigation, whether the risks are voluntary or involuntary in nature, and the understanding of and level of uncertainty in the underlying science.

The fact that a specific decision regarding risk depends on its context is evident in the large differences in value-of-life numbers derived for different types of exposures. For example, programs in the U.S. spend anywhere from $3,600 to $10 million to reduce expected mortality by one death (Graham and Vaupel, 1980). It is therefore meaningless to assign a single value to human life outside the context of a specific decision regarding risk. It makes even less sense to us to add mortality-weighted value-of-life estimates to losses of other commodities that are bought and sold on the market.

In addition to this, the chapter made another assumption that ignited the tinderbox. The working group assigned different values of life to countries with different per capita incomes; we will leave the ethical implications of such an assumption as an exercise for the reader. As far as the analysis was concerned, the assumption led to a perverse result: countries less vulnerable to climate change, and more capable of adapting to its impacts, would register greater damages than other, more vulnerable countries. The "value of life" numbers chosen in the report are arbitrary and reflect the ideology of the authors. This was evident in the controversy that the report generated early on. Nothing is likely to discredit IA faster than the production of sloppy studies that end up generating needless controversy.
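The perverse result follows directly from the arithmetic of the aggregation scheme. The sketch below is a hypothetical illustration: the country groupings, mortality figures, and sectoral damage totals are invented for the example, and only the $1.5 million and $150,000 value-of-statistical-life figures come from the chapter itself.

```python
# Value-of-statistical-life figures used in IPCC WGIII Chapter 6
# (all other numbers below are hypothetical).
VSL = {"industrialized": 1_500_000, "non_industrialized": 150_000}

def greenhouse_damages(group, excess_deaths, other_sector_damages):
    """Mortality-weighted value-of-life losses added to market-sector losses."""
    return excess_deaths * VSL[group] + other_sector_damages

# Two hypothetical countries with IDENTICAL excess mortality and
# identical damages in all other sectors:
rich = greenhouse_damages("industrialized", 1_000, 1_000_000_000)
poor = greenhouse_damages("non_industrialized", 1_000, 1_000_000_000)

# The less vulnerable, richer country registers greater aggregate
# damages purely because its lives are valued ten times higher.
assert rich > poor
```

Even if the poorer country suffered somewhat higher mortality, the tenfold gap in valuation can still leave the richer country with the larger damage figure, which is the inversion the critics objected to.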

A second set of results in the chapter generated less controversy but was no more defensible. This was a table of dollar values for environmental entities (e.g., animals such as golden eagles and grizzly bears) extracted from Contingent Valuation (CV) studies, perhaps to give readers a sense of how non-market goods can be monetized. There was little mention of the problems with CV, or of the large body of behavioral psychology literature that points to the numerous conceptual and operational flaws in the methodology (Diamond and Hausman, 1994; Fischhoff, 1991). The overall effect of the assumptions made by the IPCC working group was truly surreal: a set of arbitrary dollar figures was passed off in peer review as state-of-the-art social science in an international consensus document.

The lessons from this episode are clear:

  • Integrated assessment researchers need to temper their own disciplinary bias with openness, particularly when research in other areas provides a perspective different from their own.
  • Tools can be heavily misused, as the example of the use of value-of-life estimates and CV results demonstrates. Knowing the limits of the tools being used in an analysis is just as important as knowing how to use them. As Morgan (personal communication) notes, there are many standard tools of quantitative policy analysis that make sense in the limited contexts (decadal time constants, one society, etc.) for which they have been developed and widely used. When analysts apply them in IA, they need to be aware that the IA context may be so broad as to violate some of the assumptions of the context in which the tools were developed.
  • It is critical to recognize that assessments are not only for the benefit of other researchers, they feed into a larger policy process where policy makers, non-governmental organizations (NGOs), lay people and industry participate.

Each of these observations has serious implications for the art and practice of IA. These will be explored in further detail in section 5.

 

 

 


Sources

Parson, E.A. and K. Fisher-Vanden, Searching for Integrated Assessment: A Preliminary Investigation of Methods, Models, and Projects in the Integrated Assessment of Global Climatic Change. Consortium for International Earth Science Information Network (CIESIN). University Center, Mich. 1995.

 

Suggested Citation

Center for International Earth Science Information Network (CIESIN). 1995. Thematic Guide to Integrated Assessment Modeling of Climate Change [online]. Palisades, NY: CIESIN. Available at http://sedac.ciesin.columbia.edu/mva/iamcc.tg/TGHP.html [accessed DATE].

 

 
