NOTES

  1. The purpose of this section is to provide a general background against which to view the results of our research. Readers interested in an extended review of the literature may wish to consult Edmonds and Reilly (in press), Keepin et al. (in press), Jarrett and Coates (1984a), Rose et al. (1983), Ausubel and Nordhaus (1983), and/or Perry (1982) for a more detailed assessment.

  2. At first blush these two approaches would seem to come at the problem of future CO2 emissions from exactly opposite directions. The feasibility study begins with a goal and asks whether or not it can be achieved. That is, it begins with a future and works backwards to find the assumptions necessary to generate it. The question then is not "will an event happen?" but "can it happen?"

    Energy-economic models, and especially the behavioral subset of this genre, work out the consequences of assumptions. Their question is essentially "what will happen?" as opposed to "can an event happen?"

    But these approaches are not so different as they might appear. Both have in common a detailed structural framework of analysis which incorporates causal factors such as population, economic growth and development, technology, geology, and sociology in meaningful ways. Furthermore, because of the nature of the analytical process, both spend most of their effort working from assumptions to conclusions. This is the nature of a behavioral model. But feasibility studies are irrelevant outside of context. They need an underpinning of constraints to reduce the free variables to a manageable number. Feasibility studies therefore work within the context of assumptions about available technologies to determine whether or not a constraint, in this case on fossil fuel CO2 emissions, can be met.

  3. For example, the models of Perry and Landsberg (1971) and Allen et al. (1981) are single-pass analyses. They rely on the researcher's ability not only to identify the key factors and their influences on global energy production and use, but also to resolve any discrepancies or "gaps" between the two via personal judgment or the assumption of a "backstop" technology capable of accounting for any differences. The studies are not computer-based.

    In contrast, Nordhaus (1979) uses a computer-based linear programming model which requires the specification of a wealth of information about technologies, both those currently employed and the alternative options available. The model employs an optimization criterion to generate an optimum solution. It can be used either as a forecasting tool (by maximizing global GNP and assuming that the global economy behaves "as if" it optimized this value) or as a feasibility tool (by placing CO2 emission constraints on the outputs).
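    To make the forecast/feasibility distinction concrete, consider the
    toy linear program below. This is a hypothetical two-technology
    example, not Nordhaus's actual model; all coefficients, names, and
    the use of scipy are our own illustrative assumptions. Maximizing
    the GNP proxy alone plays the forecasting role; adding a CO2 cap
    turns the same structure into a feasibility test.

      # Toy sketch only: a two-technology linear program, not the
      # Nordhaus (1979) model itself.  All coefficients are invented.
      import numpy as np
      from scipy.optimize import linprog

      value = np.array([3.0, 2.0])        # GNP contribution per EJ (coal, solar) -- assumed
      emis = np.array([25.0, 0.0])        # tonnes C emitted per EJ -- assumed
      capacity = [(0, 100.0), (0, 40.0)]  # deployable EJ per technology -- assumed

      def solve(co2_cap=None):
          """Maximize the GNP proxy; optionally impose a CO2 emissions cap."""
          a_ub = [emis] if co2_cap is not None else None
          b_ub = [co2_cap] if co2_cap is not None else None
          return linprog(-value, A_ub=a_ub, b_ub=b_ub, bounds=capacity,
                         method="highs")

      forecast = solve()                  # "what will happen" under the as-if objective
      feasible = solve(co2_cap=1000.0)    # "can it happen" under an emissions constraint
      print("unconstrained mix:", forecast.x, "constrained mix:", feasible.x)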


    Haefele (1981) documents a complex set of models including linear programming, accounting, and input-output models. The system is resident on a computer but is extremely complex. The Edmonds-Reilly model (1983) is a behavioral market equilibrium model which balances energy production and use internally. It is discussed in detail in Section 3.

  4. These in turn control a total of 649 energy forecast data items. See THE DATA BASE and Appendix A for elaboration.

  5. This recategorization represents the most significant change made to the model. The original specification of the model distinguished three energy supply categories: resource-constrained conventional energy (conventional oil and natural gas); resource-constrained renewable energy (hydro); and nonconstrained energy resources (unconventional oil, coal, biomass, solar electric, and nuclear).

    The first category contained only resources producible at 1975 prices. A time path of production was specified, and prices were allowed to vary so that demand equaled the fixed supply. As prices increased beyond a minimum breakthrough price, the supply of exhaustible fossil fuels was augmented by unconventional sources of liquids and gases.
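    A minimal sketch of this supply treatment follows; the production
    path, breakthrough price, and scaling rule below are invented for
    illustration and are not the model's actual values or code.

      # Sketch of the original supply specification described above.
      CONVENTIONAL_PATH = {1975: 120.0, 2000: 140.0, 2025: 110.0}  # EJ per period, assumed
      BREAKTHROUGH_PRICE = 2.5   # multiple of the 1975 price, assumed
      UNCONV_BASE = 15.0         # exogenous base quantity per period, assumed

      def liquids_supply(period: int, price: float) -> float:
          """Fixed conventional production path, augmented by unconventional
          liquids once the price rises past the breakthrough level."""
          supply = CONVENTIONAL_PATH[period]
          if price > BREAKTHROUGH_PRICE:
              # Unconventional output here scales with the price overshoot.
              supply += UNCONV_BASE * (price - BREAKTHROUGH_PRICE)
          return supply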

    This approach proved cumbersome in some scenarios. Specifically, in very low demand scenarios prices could drop well below 1975 levels; to maintain consistency the user would have to exogenously respecify a lower resource base. In addition, the supply of unconventional liquids and gases required an exogenously entered base quantity for each period. This approach also required user interaction to assure that the producing sector behaved sensibly; i.e., that producers adjusted expansion plans for the next period to reflect the realized rate of production in the current period.

    For a full description of the original model see Edmonds and Reilly (1984). The relationship between the original and current model versions is documented completely in Appendix B.

  6. Conventional statistical analysis of data from which parameters are derived is not possible because of a lack of appropriate data. In addition, the data that is available is necessarily historical, whereas the parameter values are assumed to apply in the future. One could, for example, statistically analyze the mean and variance of historical rates of growth of population for 25-year periods over time or, treating countries as individual observations, analyze across countries for a single 25-year period. However, not all regions have an adequate time series on population, and the relevance of any such historical analysis to future population growth rates is doubtful.

    A second problem is that any description of uncertainty based on statistical analysis is premised on the adequacy of the structural model and the data set used to make the estimate. In the population example above, we described two possible data sets, time series or cross-sectional data. The two will give different estimates of mean and variance, thus exposing structural uncertainty that would not be incorporated in a statistical analysis which uses only one of the data sets. Similarly, estimates of parameters such as the price elasticity of demand are dependent on the specific equation estimated.

    One approach for retaining an objective methodology for estimating uncertainty in inputs is to use statistical analysis of published estimates. Such an exercise suffers from two severe problems. First, published estimates are generally best-guess estimates. While a sample of best-guess estimates can, under appropriate conditions, yield an unbiased estimate of the mean of a distribution, it will not yield an unbiased estimate of the variance. A second problem is that different concepts of parameters may exist, some more or less applicable to the specific parameterization in the model.

    For example, price elasticities may be estimated for primary, secondary, or energy service demands (though data is usually not available for estimation of service demands), and estimates may be based on various aggregations of consuming sectors (e.g., the economy as a whole, the industrial sector, SIC industrial categories, etc.). Since price elasticities are, in principle, specific to the category of demand being analyzed, a specific price elasticity estimate may be more or less relevant to the parameter assignment in the model. An even simpler example is that published GNP growth projections are given for dissimilar regional aggregations and may be premised on considerably different population and labor force growth estimates; further, they may or may not include feedback from various assumptions about rising energy prices. Some incomparabilities represent legitimate sources of uncertainty in the parameter while others are merely artifacts of definitional differences.
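    The first problem, the downward bias in the variance, can be seen in
    a small simulation; all numbers below are arbitrary assumptions. If
    each published figure is a study's best guess (roughly, a sample
    mean), the spread of best guesses reflects only the sampling error
    of those means, not the variance of the underlying distribution.

      # Illustration with invented numbers: best-guess estimates recover
      # the mean but understate the variance of the underlying distribution.
      import numpy as np

      rng = np.random.default_rng(0)
      true_mean, true_sd = 1.0, 0.5        # assumed "true" parameter distribution
      n_studies, n_per_study = 200, 30     # assumed

      draws = rng.normal(true_mean, true_sd, size=(n_studies, n_per_study))
      best_guesses = draws.mean(axis=1)    # each study publishes its sample mean

      print("mean of best guesses:", best_guesses.mean())       # ~1.0: unbiased
      print("sd of best guesses:  ", best_guesses.std(ddof=1))  # ~0.5/sqrt(30), far below 0.5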

    As a result of the problems discussed above, the approach chosen for developing the data base of input parameters was to critically review the literature and assign a subjective, but quantitative, description of uncertainty to each parameter. The adequacy of this parameterization of uncertainty depends on an understanding of the "quality" of different published estimates for use in the particular model structure and Monte Carlo exercise (i.e., "quality" in the sense of how comparable the published parameters are to the specific model parameterization).

  7. Before actually conducting the Monte Carlo exercise, several preliminary exercises were conducted. First, 158 runs were made to test the model's robustness: could the model actually generate results when variables took on extreme values? To do this, all input parameters were specified at their median values. Variable 1 of the 79 Monte Carlo variables was then set to its maximum value and the model run; it was then set to its minimum value and the model run. Next, variable 1 was reset to its median value, variable 2 was fixed at its maximum value, and the model run. In this way the model was tested for robustness at the maximum and minimum values of all 79 input assumptions.
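    In outline, this 158-run screen is a one-at-a-time loop over the 79
    variables, two runs each. The sketch below is hypothetical; run_model
    and the bounds dictionaries are stand-ins, not the actual code.

      # One-at-a-time robustness screen: 79 variables x 2 extremes = 158 runs.
      def robustness_screen(medians, minima, maxima, run_model):
          """Run the model with each variable pushed to an extreme in turn,
          holding all other variables at their median values."""
          failures = []
          for name in medians:                    # the 79 Monte Carlo variables
              for extreme in (maxima[name], minima[name]):
                  inputs = dict(medians)          # everything else at its median
                  inputs[name] = extreme
                  if not run_model(inputs):       # True if a solution was reached
                      failures.append((name, extreme))
          return failures                         # empty list -> robust at all extremes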


    Once it was determined that the model could in fact be successfully solved at extreme values of the input assumptions, the Monte Carlo computer code, PRISM, was tested by varying each of the 79 Monte Carlo variables +/-2 percent. This exercise produced quantitative results on the responsiveness of the model to small deviations from median input values. These were directly comparable to earlier calculations in Edmonds et al. (1984) and provided verification of the successful linkage of the Edmonds-Reilly model to PRISM.
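    The +/-2 percent test can be sketched the same way; dividing the
    output response by the input change gives an elasticity-like
    sensitivity that is comparable across the 79 variables. The names
    below are again hypothetical stand-ins.

      # Central-difference sensitivity of a model output (e.g., CO2
      # emissions) to a +/-2 percent change in each input.
      def sensitivities(medians, model_output, step=0.02):
          """Elasticity of the output with respect to each input; assumes a
          nonzero base-case output."""
          base = model_output(medians)
          elasticity = {}
          for name, value in medians.items():
              up, down = dict(medians), dict(medians)
              up[name] = value * (1 + step)
              down[name] = value * (1 - step)
              elasticity[name] = (model_output(up) - model_output(down)) / (2 * step * base)
          return elasticity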

    Initial attempts to conduct the Monte Carlo analysis resulted in an 8 percent rate of failure to reach solution. Failures were analyzed and corrective measures taken. About half of the failures were the consequence of the interaction of extreme assumptions; the model was modified to account for such occurrences. The other half were idiosyncratic to the model's solution mechanism. That is, the model cannot be solved directly: global energy prices consistent with global energy balance are found through a search procedure, and in some cases the search procedure broke down. To address this problem, alternative procedures were devised to find solutions in the event of breakdowns. These changes, and some additional debugging of the link between the Monte Carlo code, PRISM, and the model, reduced the number of failures to one.
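    The search described above is, in essence, root-finding on global
    excess demand as a function of price. The sketch below uses bisection
    with a bracket-widening fallback; this is an assumption about the
    style of fix, not the model's documented algorithm.

      # Market-clearing price search with a fallback; bisection stands in
      # for the model's actual (undocumented) search procedure.
      def clear_market(excess_demand, lo, hi, tol=1e-8, max_iter=200, widenings=3):
          """Find a price where excess_demand(price) ~= 0."""
          for _ in range(widenings + 1):
              f_lo, f_hi = excess_demand(lo), excess_demand(hi)
              if f_lo * f_hi <= 0:
                  break                     # sign change: root is bracketed
              lo, hi = lo / 4.0, hi * 4.0   # fallback: widen the search bracket
          else:
              raise RuntimeError("search breakdown: no bracketing interval found")
          for _ in range(max_iter):
              mid = 0.5 * (lo + hi)
              f_mid = excess_demand(mid)
              if abs(f_mid) < tol or hi - lo < tol:
                  return mid
              if f_lo * f_mid <= 0:
                  hi = mid
              else:
                  lo, f_lo = mid, f_mid
          raise RuntimeError("price search failed to converge")

      # Example: demand 100/p against a fixed supply of 20 clears at p = 5,
      # found even though the initial bracket [1, 2] does not contain it.
      price = clear_market(lambda p: 100.0 / p - 20.0, lo=1.0, hi=2.0)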

  8. For various reasons several subsets of the 79 variables are assumed to be perfectly correlated; for example, population in the OECD and the USSR. The extreme assumption of independence among parameters 13 through 18 would admit impossible jumps in population over 25-year periods. For example, had we not prevented it, cases could have occurred in which a very low population assumption for the year 2050 was followed by a very high population assumption for the year 2075, implying impossibly rapid population growth. This possibility was removed by assuming that the year 2000, 2025, 2050, 2075, and 2100 populations are deterministically linked: if the 75th percentile population is chosen for the year 2000, it is also chosen for the years 2025, 2050, 2075, and 2100. A sketch of this sampling rule follows the table below.

    Forty-six Monte Carlo parameters are deterministically linked. Linked variables are listed below:

    Listing of Linked Monte Carlo Parameters by
    Parameter Number and Assumption Description

    Assumption Description                                 Parameter Numbers
    Population (years 1975-2100; OECD & USSR)              13, 14, 15, 16, 17, 18
    Population (years 1975-2100; China)                    25, 26, 27, 28, 29, 30
    Conventional Oil Resource (Grades 1-5; OECD & USSR)    34, 35, 36, 37, 38
    Conventional Oil Resource (Grades 1-5; LDC's)          39, 40, 41, 42, 43
    Conventional Oil Resource (Grades 1-5; Mideast)        44, 45, 46, 47, 48
    Natural Gas Resource (Grades 1-4)                      49, 50, 51, 52
    Coal Resource (Grades 1-5)                             54, 55, 56, 57, 58
    Unconventional Oil (Grades 1-4)                        63, 64, 65, 66
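    One way to read the linkage rule: a single percentile draw is made
    for each group, and every parameter in the group is set to that
    percentile of its own distribution. The sketch below assumes each
    parameter's uncertainty is summarized by (low, median, high)
    fractiles; the grouping mirrors the table above, but the code is an
    illustration, not the PRISM implementation.

      # Deterministic linkage sketch: one percentile draw per group,
      # applied to every parameter in the group.
      import random

      LINKED_GROUPS = {
          "population_oecd_ussr": [13, 14, 15, 16, 17, 18],
          "population_china": [25, 26, 27, 28, 29, 30],
          "oil_oecd_ussr": [34, 35, 36, 37, 38],
          # ... remaining groups as in the table above
      }

      def sample_linked(fractiles, groups, rng=random):
          """fractiles: {parameter_number: (low, median, high)}."""
          values = {}
          for params in groups.values():
              p = rng.random()              # shared percentile for the whole group
              for num in params:
                  lo, med, hi = fractiles[num]
                  # crude piecewise-linear interpolation between fractiles
                  if p < 0.5:
                      values[num] = lo + 2 * p * (med - lo)
                  else:
                      values[num] = med + 2 * (p - 0.5) * (hi - med)
          return values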