Thematic Guide to Integrated Assessment Modeling

Conclusions

This paper is an attempt to bring the serious issue of quality control in integrated assessment models to the forefront. This issue has not been treated adequately, either in the current literature or in IA conferences and workshops. Articulating and maintaining internal criteria for quality is an essential component of maintaining credibility for all scientific endeavors over the longer run. In this regard, IA stands at a precipice. We believe that its credibility is presently not based on internal criteria of quality; it must therefore rest on other factors. It is possible that this credibility stems from IA's current political visibility, along with the promise of providing useful results.

While it is difficult to define a set of minimally acceptable standards or procedural assurances for IA, we have identified a list of quality-related issues that we hope will draw attention to some of the pitfalls into which IA studies are falling:

  • IA practitioners need to be clear about the goals of their research, and bear in mind the distinctions and overlap between IA models as heuristic tools and as forecasting tools. Drawing insights from IA models relies on having some confidence in their ability as truth machines; yet evaluating the truth dimensions of IA models reduces, in turn, to an evaluation of insights.
  • IA models draw more heavily from some disciplines than from others. While this is not a problem per se, the asymmetry in disciplinary representation presents challenges for the assessment of IA models.
  • Tools, methods, and assumptions from source disciplines form archetypes for components in IA models, and we need to apply greater scrutiny to these archetypes. We also need to be mindful of anchoring on past results, and to articulate motivational biases that might lead to the selection of one result over another on the basis of its utility to outside communities.
  • Peer review from outside disciplines is necessary for judging the quality of work imported into IA from those disciplines. Because of the breadth of IA results, it is also critical to include reviewers from IA peer groups.
  • Given the lack of metrics and standards for judging the adequacy of IA studies, we need to develop the ability to discern good IA studies and to identify the factors that make them good.
  • IA results are intended to be used by outside communities, and their value should be judged at least in part in that outside context. Bearing this in mind, we can provide the equivalent of 'road maps' with results to help outsiders interpret them meaningfully.
  • IA models are infused with many assumptions that have a large bearing on their results. It is important to define the assumptions used in IA models and, where possible, to outline their potential implications. It is also important to minimize the number of needless assumptions made out of convenience or ignorance, by maintaining contacts with outside disciplines.
  • Each of the tools and subcomponents in IA models has limitations, and IA practitioners should maintain familiarity with the limitations of their tools. In addition, tools that may be reasonable to use in particular disciplinary, geographical, or temporal contexts may be unsuited to the broader contexts inherent in IA studies. We need to develop more clarity about what IA models can and cannot do.
  • IA practitioners need to be aware that using models biases the selection of tools and concepts toward whatever is most easily quantified in a model. Further, when we do try to incorporate 'softer' or less readily quantifiable concepts and ideas in models, selective compromises are usually made in translating the original ideas into a model. Given the limitations of the modelling domain, it is also important to pursue non-model IA approaches, free from model-induced biases.
  • Evaluation of IA models is important in establishing credibility. It is a difficult task that has hitherto not been given the priority it deserves. The IA community needs to begin designing experiments and gathering data to compare purported IA model insights with analogous processes in the real world.
  • IA models are often enormously complex, and it is important to build transparency into the models by paying attention to coding issues such as documentation and modularity; a minimal sketch of this practice follows this list. Such transparency can help in linking model results and insights to model components and assumptions. We see a need for informal forums for open discussion of problems that arise in IA modelling.
  • IA lacks diversity in funding sources and study regions, which may contribute to myopia in the formulation of IA studies and the interpretation of their results. IA research should be broadened to include non-industrialized countries.
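
To make the bullet on transparency concrete, the following is a minimal sketch, in Python, of one way a model component might carry its assumptions alongside its results. It is an illustration under stated assumptions, not a prescription: the names (ClimateAssumptions, equilibrium_warming) and parameter values are hypothetical, and only the logarithmic warming response is a standard textbook simplification.

    # Hypothetical sketch: a documented, modular model component whose
    # assumptions are declared explicitly and travel with every result.
    import math
    from dataclasses import dataclass, asdict

    @dataclass(frozen=True)
    class ClimateAssumptions:
        """All assumptions for this component, documented in one place."""
        climate_sensitivity: float = 2.5       # deg C per CO2 doubling (assumed value)
        preindustrial_co2_ppm: float = 280.0   # baseline concentration (assumed value)

    def equilibrium_warming(co2_ppm: float, a: ClimateAssumptions) -> dict:
        """Toy logarithmic warming response.

        The result is returned together with the assumptions that produced
        it, so that insights can be traced back to model inputs.
        """
        warming = (a.climate_sensitivity
                   * math.log(co2_ppm / a.preindustrial_co2_ppm) / math.log(2.0))
        return {"warming_deg_c": warming, "assumptions": asdict(a)}

    # A doubling of CO2 recovers the assumed sensitivity exactly:
    print(equilibrium_warming(560.0, ClimateAssumptions()))

The point of the sketch is simply that results never circulate without the assumptions that generated them; any documentation and modularity convention achieving the same traceability would serve equally well.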

This list is by no means complete, but we hope that the list and the paper will serve to generate serious discussion regarding the quality control and credibility of IAs. If IA is to succeed in the long run, then building and maintaining credibility is a necessary condition for that success. We are concerned that if the IA community fails to establish and maintain rigorous quality control measures, IAs may become mere props: clowns that get danced around in the climate change policy circus.

Acknowledgments:

The authors would like to thank Baruch Fischhoff, Granger Morgan, Karen Jenni, Ann Henderson-Sellers, Jan Rotmans, Marjolein van Asselt, Hadi Dowlatabadi, Linda Mearns, Jill Lazenby, Alexia Bloch, Richard Tol, Charles Linville, and Elena Shewliakowa for their useful comments and suggestions, and for stimulating discussions. Those acknowledged here do not necessarily endorse the ideas in this paper, however; the authors maintain sole responsibility for its content.

Suggested Citation

Center for International Earth Science Information Network (CIESIN). 1995. Thematic Guide to Integrated Assessment Modeling of Climate Change [online]. Palisades, NY: CIESIN. Available at http://sedac.ciesin.columbia.edu/mva/iamcc.tg/TGHP.html [accessed DATE].
