Energy-economic models, and especially the behavioral subset of this genre, work out the consequences of assumptions. Their question is essentially "what will happen?" as opposed to "can an event happen?"
But these approaches are not so different as they might appear. Both have in common a detailed structural framework of analysis which incorporates causal factors such as population, economic growth and development, technology, geology, and sociology in meaningful ways. Furthermore, because of the nature of the analytical process, both spend most of their effort working from assumptions to conclusions. This is the nature of a behavioral model. But feasibility studies are irrelevant outside of a context: they need an underpinning of constraints to reduce the number of free variables to a manageable number. Feasibility studies therefore work within the context of assumptions about available technologies to determine whether or not a constraint, in this case on fossil fuel CO2 emissions, can be met.
In contrast, Nordhaus (1979) uses a computer-based linear programming model which requires the specification of a great wealth of information about technologies, both those employed and alternative options available. The model employs an optimization criterion to generate an optimum solution. It can be used either as a forecasting tool (by maximizing global GNP and assuming that the global economy behaves "as if" it optimized this value) or as a feasibility tool (by placing CO2 emissions constraints on the outputs).
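The forecast/feasibility distinction can be sketched with a toy two-technology optimization. All numbers below are invented for illustration, and because this toy problem has only capacity bounds and a single CO2 constraint, the optimum can be found by a simple greedy rule; a full model such as Nordhaus's would call a linear programming solver instead.

```python
def solve(techs, co2_cap=None):
    """techs: list of (value_per_unit, co2_per_unit, capacity).
    Maximize total GNP contribution subject to capacities and,
    optionally, a cap on total CO2 emissions."""
    # Fill highest value-per-unit-of-CO2 technologies first; CO2-free
    # technologies are unconstrained by the cap and are used fully.
    order = sorted(
        techs,
        key=lambda t: t[0] / t[1] if t[1] > 0 else float("inf"),
        reverse=True,
    )
    gnp, emissions = 0.0, 0.0
    for value, co2, cap in order:
        x = cap
        if co2_cap is not None and co2 > 0:
            x = max(0.0, min(cap, (co2_cap - emissions) / co2))
        gnp += value * x
        emissions += co2 * x
    return gnp, emissions

# Two stand-in technologies: (GNP value per unit, CO2 per unit, capacity).
techs = [(3.0, 1.0, 100.0), (2.0, 0.0, 60.0)]

gnp_forecast, _ = solve(techs)               # forecast: "as if" GNP-maximizing
gnp_capped, em = solve(techs, co2_cap=40.0)  # feasibility: CO2 ceiling imposed
print(gnp_forecast)       # 420.0
print(gnp_capped, em)     # 240.0 40.0
```

In forecast mode the economy simply runs every technology at capacity; in feasibility mode the CO2 ceiling forces fossil output down and the question becomes whether a solution satisfying the constraint exists at all.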
Haefele (1981) documents a complex set of models including linear programming, accounting, and input-output models. The system is computer-resident but extremely complex. The Edmonds-Reilly model (1983) is a behavioral market equilibrium model which balances energy production and use internally. It is discussed in detail in Section 3.
The first category contained only resources producible at 1975 prices. A time path of production was specified and prices were allowed to vary so that demand equaled the fixed supply. As prices increased beyond a minimum breakthrough price, the supply of exhaustible fossil fuels was augmented by unconventional sources of liquids and gases.
This approach proved somewhat cumbersome in some scenarios. Specifically, in very low demand scenarios prices could drop well below 1975 levels; to maintain consistency the user would have to exogenously respecify a lower resource base. In addition, the supply of unconventional liquids and gases required a base quantity for each period, which was entered exogenously. This approach also required user interaction to assure that the producing sector behaved sensibly; i.e., that producers adjusted expansion plans for the next period to reflect the realized rate of production in the current period.
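The price mechanism described above can be sketched as a bisection search for the price at which demand equals supply, with the exogenous base quantity of unconventional liquids and gases entering only above the breakthrough price. The functional forms and numbers here are illustrative stand-ins, not those of the model.

```python
def demand(price):
    # Toy constant-elasticity demand (elasticity of -1); invented numbers.
    return 120.0 / price

def supply(price, conventional=10.0, breakthrough=6.0, unconv_base=5.0):
    """Fixed conventional production path for the period; the exogenous
    base quantity of unconventional liquids and gases is added only once
    price exceeds the breakthrough level."""
    s = conventional
    if price > breakthrough:
        s += unconv_base
    return s

def clearing_price(lo=0.1, hi=100.0, tol=1e-6):
    """Bisection on excess demand D(p) - S(p), which falls as p rises."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) > supply(mid):
            lo = mid      # excess demand: price must rise
        else:
            hi = mid      # excess supply: price must fall
    return 0.5 * (lo + hi)

p_star = clearing_price()
print(p_star)   # about 8.0: above breakthrough, so unconventional supply is active
```

The very-low-demand problem in the text corresponds to the case where the clearing price falls below the base-year level, which in the original model obliged the user to respecify the resource by hand.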
For a full description of the original model see Edmonds and Reilly (1984). The relationship between the original and current model versions is documented completely in Appendix B.
A second problem is that any description of uncertainty based on statistical analysis is premised on the adequacy of the structural model and the data set used to make the estimate. In the population example above, we described two possible data sets, time-series and cross-sectional data. The two will give different estimates of mean and variance, thus exposing structural uncertainty that would not be incorporated in a statistical analysis which uses only one of the data sets. Similarly, estimates of parameters such as the price elasticity of demand are dependent on the specific equation estimated.
One approach for retaining an objective methodology for estimating uncertainty in inputs is to use statistical analysis of published estimates. Such an exercise suffers from a couple of severe problems. First, published estimates are generally best guess estimates. While a sample of best guess estimates can, under appropriate conditions, yield an unbiased estimate of the mean of a distribution, it will not yield an unbiased estimate of the variance. A second problem is that different concepts of parameters may exist, some more or less applicable to the specific parameterization in the model.
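The variance problem can be illustrated with a small simulation: if each hypothetical study publishes only the mean of its own sample, the spread of the published best guesses reflects the sampling error of those means rather than the full underlying uncertainty. All settings below are invented for illustration.

```python
import random
import statistics

rng = random.Random(0)
TRUE_SD, N_OBS = 1.0, 25   # each "study" sees 25 noisy observations

def published_estimate():
    """Each hypothetical study publishes only its best guess (the sample
    mean of its own data); the within-study uncertainty never reaches
    the reader."""
    data = [rng.gauss(0.0, TRUE_SD) for _ in range(N_OBS)]
    return statistics.fmean(data)

estimates = [published_estimate() for _ in range(200)]

mean_of_guesses = statistics.fmean(estimates)    # unbiased for the mean
var_of_guesses = statistics.variance(estimates)  # ~ sd^2/25, far below sd^2 = 1
print(mean_of_guesses, var_of_guesses)
```

The variance across best guesses comes out near 0.04 rather than the true 1.0, so a naive statistical analysis of the published literature would drastically understate the parameter's uncertainty.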
For example, price elasticities may be estimated for primary, secondary, or energy service demands (though data are usually not available for estimation of service demands) and estimates may be based on various aggregations of consuming sectors (e.g., the economy as a whole, the industrial sector, SIC industrial categories, etc.). Since price elasticities are, in principle, specific to the category of demand being analyzed, a specific price elasticity estimate may be more or less relevant to the parameter assignment in the model. An even simpler example is that GNP growth projections from published estimates are given for dissimilar regional aggregations and may be premised on considerably different population and labor force growth estimates; further, they may or may not include feedback from various assumptions about rising energy prices. Some incomparabilities represent legitimate sources of uncertainty in the parameter while others are merely artifacts of definitional differences.
As a result of the various problems discussed above, the approach chosen for developing the data base of input parameters was to critically review the literature and assign a subjective, but quantitative, description of uncertainty to each parameter. The adequacy of the parameterization of uncertainty is dependent on the understanding of the "quality" of different published estimates for use in the particular model structure and Monte Carlo exercise (i.e., "quality" in the sense of how comparable the published parameters are to the specific model parameterization).
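A minimal sketch of such a subjective-but-quantitative parameterization, assuming (purely for illustration) triangular distributions; the parameter names and low/mode/high values below are hypothetical, not the report's actual assignments.

```python
import random

# Hypothetical subjective (low, most-likely, high) ranges for three inputs;
# the actual exercise assigns such a description to each parameter after a
# critical review of the literature.
subjective = {
    "gnp_growth":       (0.010, 0.025, 0.040),  # fraction per year
    "price_elasticity": (-1.2,  -0.8,  -0.4),
    "coal_resource":    (5.0e3, 1.0e4, 2.0e4),  # exajoules
}

def draw(rng):
    """One Monte Carlo draw: sample every parameter from its subjective
    triangular distribution (random.triangular takes low, high, mode)."""
    return {name: rng.triangular(lo, hi, mode)
            for name, (lo, mode, hi) in subjective.items()}

rng = random.Random(7)
draws = [draw(rng) for _ in range(500)]
```

A triangular form is only one convenient choice; the essential point is that each parameter carries an explicit, reviewable quantitative description of its uncertainty rather than a single best guess.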
Once it was determined that the model could in fact be successfully solved at extreme values for input assumptions, the Monte Carlo computer code, PRISM, was tested by varying each of the 79 Monte Carlo variables +/-2 percent. This exercise produced quantitative results on the responsiveness of the model to small deviations from median input values. These were directly comparable to earlier calculations in Edmonds et al. (1984) and provided verification of the successful linkage of the Edmonds-Reilly model to PRISM.
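The +/-2 percent exercise amounts to a one-at-a-time sensitivity test, which might be sketched as follows. The "model" here is an invented linear stand-in, not the Edmonds-Reilly model, and the parameter names are hypothetical.

```python
def sensitivity(model, medians, delta=0.02):
    """One-at-a-time test: move each input +/-2 percent from its median
    value, holding all others fixed, and record the change in output."""
    base = model(medians)
    results = {}
    for name in medians:
        up = dict(medians)
        up[name] *= (1.0 + delta)
        down = dict(medians)
        down[name] *= (1.0 - delta)
        results[name] = (model(up) - base, model(down) - base)
    return results

# Invented stand-in for the full model: the output (think of it as a
# CO2-emissions index) rises with GNP and falls with energy price.
def toy_model(p):
    return 2.0 * p["gnp"] - 0.5 * p["price"]

medians = {"gnp": 100.0, "price": 40.0}
response = sensitivity(toy_model, medians)
print(response)   # gnp: roughly (+4.0, -4.0); price: roughly (-0.4, +0.4)
```

Comparing such responses against earlier published calculations is a cheap way to verify that the Monte Carlo harness actually drives the model as intended before running full random draws.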
Initial attempts to conduct the Monte Carlo analysis resulted in an 8 percent rate of failure to reach solution. Failures were analyzed and corrective measures taken. About half of the failures to reach solution were the consequence of the interaction of extreme assumptions. The model was modified to account for such occurrences. The other half of the failures were idiosyncratic to the model's solution mechanism. That is, the model cannot be solved directly: global energy prices consistent with global energy balance are found through a search procedure, and in some cases the search procedure broke down. To address this problem, alternative procedures were devised to find solutions in the event of breakdowns. These changes and some additional debugging of the link between the Monte Carlo code, PRISM, and the model reduced the number of failures to one.
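The search-with-fallback idea can be sketched as a fast secant iteration on excess demand that falls back to robust bisection when the step breaks down. The excess demand function and all numbers here are invented stand-ins; in the real model excess demand would be computed by summing regional supplies and demands at the trial price.

```python
def excess_demand(p):
    # Toy world excess demand; positive means demand exceeds supply.
    return 50.0 / p - 10.0

def solve_price(guess=1.0, max_iter=50):
    """Secant-style search for the market-clearing price; fall back to
    bisection if the iteration breaks down, mirroring the alternative
    procedures described in the text."""
    p0, p1 = guess, guess * 1.1
    f0, f1 = excess_demand(p0), excess_demand(p1)
    for _ in range(max_iter):
        if f1 == f0:          # secant step undefined: breakdown
            break
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if p2 <= 0:           # stepped outside the admissible region
            break
        p0, f0 = p1, f1
        p1, f1 = p2, excess_demand(p2)
        if abs(f1) < 1e-9:
            return p1
    # Fallback: bisection is slower but cannot diverge on a
    # monotone excess demand curve.
    lo, hi = 1e-6, 1e6
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        if excess_demand(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_star = solve_price()
print(p_star)   # about 5.0, where 50/p = 10
```

Either path returns essentially the same answer; the point of the fallback is that a failed fast search no longer aborts the whole Monte Carlo trial.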
Forty-six Monte Carlo parameters are deterministically linked. Linked variables are listed below:
Listing of Linked Monte Carlo Parameters by
Parameter Number and Assumption Description

Assumption Description                                 Parameter Numbers
Population (years 1975-2100; OECD & USSR)              13, 14, 15, 16, 17, 18
Population (years 1975-2100; China)                    15, 26, 27, 28, 29, 30
Conventional Oil Resource (Grades 1-5; OECD & USSR)    34, 35, 36, 37, 38
Conventional Oil Resource (Grades 1-5; LDC's)          39, 40, 41, 42, 43
Conventional Oil Resource (Grades 1-5; Mideast)        44, 45, 46, 47, 48
Natural Gas Resource (Grades 1-4)                      49, 50, 51, 52
Coal Resource (Grades 1-5)                             54, 55, 56, 57, 58
Unconventional Oil (Grades 1-4)                        63, 64, 65, 66