Thematic Guide to Integrated Assessment Modeling
Evaluation
The effort to compare models and model subcomponents or processes against observational data and/or qualitative understanding is often referred to as "validation". Following Oreskes et al. (1994), who point out that confirmation of models of natural systems is inherently partial because natural systems are not closed systems, we use the more modest term "evaluation" to describe this process. Despite its inherent limitations, evaluation is an essential part of quality control in model development, use, and interpretation. Evaluating models provides feedback on their plausibility and builds credibility for them in outside communities. When outside users assess the "value" of model results, they want assurance that the internal criteria for "adequacy" have been met.
Because IA models serve a dual role as heuristic tools with forecasting elements, it is worth asking whether the heuristic and forecasting functions impose different requirements in an evaluation context. Our discussion therefore proceeds along separate lines for the two functions, though the blurriness of this distinction in IA models leads to a similar blurring in efforts to evaluate the models in one role or the other.