Operations research (OR) generally centers on building a mathematical model of an operational system. In contrast, the branch of OR called decision analysis (DA) centers on building a mathematical model of the preference structure of a decision maker. The goal of DA is to reveal to him or her what decision should be made by eliciting details about what he or she wants to accomplish in a complex situation, perhaps one with many uncertainties and many competing priorities. The purpose of this chapter is to outline how DA fits into military operations research as a whole, and then to present some standard best practices that will cover multiattribute decisions, one of the most common types of military and defense problems that calls for DA.
There are many reasons why decisions can be perplexing enough to require formal analysis, such as poor problem definition, unclear alternatives, uncertain outcomes, and linkages over time. This chapter focuses on multiattribute decision analysis (MADA, sometimes called MODA (multi-objective decision analysis) or MCDM (multi-criterion decision making)). MADA addresses problems that perplex because of multiple competing objectives, so that not all can be fully satisfied and some tradeoff must be made. The origins of MADA are traced out in section 2-2 of Zeleny (1982), from its beginning with inquiries in economics and mathematics in the 1940s and 1950s through the development in the 1960s and 1970s of a recognized field of application. The classic text by Keeney and Raiffa (1976, updated 1993) can perhaps be regarded as the culmination of the early development of the field, and their methods are not fundamentally different from those presented in this chapter. Von Winterfeldt and Edwards (1986) include an extensive discussion of multiattribute problems with special emphasis on behavioral aspects in addition to modeling. Watson and Buede (1987) include good material on multiattribute problems in their text on decision analysis. Kirkwood (1997) provides a classic text specifically on MADA. A recent mathematical treatment of the methodology is in Chapters 26–28 of Howard and Abbas (2016).
Professional decision analysts like to imagine themselves working for senior decision makers stumped by multi-billion-dollar life-or-death problems. People like that usually imagine themselves to be pretty good decision makers on their own. Usually they are right – they wouldn’t have obtained the position without a record of good decision making, particularly in the military. And yet decision analysts do get hired, at least occasionally. Why is this? The US Army teaches a “Military Decision Making Process” or MDMP (United States, 2014, Chapter 9). This is “an iterative planning methodology to understand the situation and mission, develop a course of action, and produce an operation plan or order.” The methodology uses the ideas of alternatives, decision criteria, and weighted measures in a way not fundamentally different from MADA. However, it puts more emphasis on thorough understanding of the problem and on careful and thorough planning and coordination than on actual decision making. It is more rough-and-ready and less careful about technical correctness than a decision analyst might like. This emphasis is appropriate, because in military operations it is more important to make a reasonable decision quickly and carry it out well and vigorously than to make the best possible decision. But there are other non-operational situations where there is enough time to look at the problem carefully. Perhaps there is a lot at stake in the decision and the best course of action is not obvious to all. This can be the case in system acquisition, base location, and strategy selection, for example. These situations call for a different method of decision making. A technically sound, defensible, and transparent decision making process can be of great help, especially when the decision must be publicly defended in front of stakeholders who may not be happy with the result. This is where the methods of formal decision analysis are most useful. 
Furthermore, knowledge of DA techniques can improve decision making even when the formal methods are not applied. They accustom one to clear and systematic thought when perplexed by a decision problem. The methods of DA are appropriate when the decision maker feels they are needed – and perhaps when an advisor to a decision maker feels that there is danger of making a wrong turn, and that some objective quantitative support will improve the chances of deciding correctly.
Howard and Abbas (2016) describe six linked elements in a good decision: the frame, the alternatives, the values, the information, the right reasoning, and the decision maker (DM).
All six of these are important, maybe even equally important, but this chapter focuses on right reasoning, the model-building part of decision analysis that is most closely aligned with operations research as a field. Frame, alternatives, values, information, and DM will generally be assumed given for our purposes, even though in professional practice it is often difficult to pin down one or more of these. For an extended account of all these aspects of professional decision analysis, one could do far worse than consult the Handbook of Decision Analysis by Parnell et al. (2013), which is the product of a great deal of experience in the craft.
Since its goal is to improve decisions, the approach of this chapter is often called prescriptive decision analysis, i.e., describing how people ought to make decisions. It is distinguished from descriptive decision analysis, which describes how people really do make decisions. Needless to say, the two are often not the same. Nevertheless, an analyst applying DA in the military domain should understand the various biases and psychological traps that bedevil human decision making. So should anyone else. These logical errors are very easy to see in others once you understand them, and always very difficult to see in oneself. A very accessible account of descriptive DA has been provided by Kahneman (2011).
The importance of DA in defense applications has grown over the past 30 years. Corner and Kirkwood (1991) surveyed the OR literature up to 1989 and found no published defense applications of DA at all. Keeney and Raiffa (1993) included no military applications either. However, a later survey (Keefer, Kirkwood & Corner, 2004) found 13 defense applications published between 1990 and 2001, amounting to 15% of all published applications. The very first one (Buede & Bresnick, 1992) used MADA.
An idea of the current prevalence and importance of decision analysis within military operations research can be had from the following statistic: of 338 technical articles in the journal Military Operations Research (MOR) for which keywords could be found, 75 (22%) included Decision Analysis or a synonym among their keywords. (This covers 1997 through 2018.) A notion of the breadth and scope of the problems addressed by DA can be had from the following: in the most recent five years (mid-2013–mid-2018), 13 DA articles in MOR advanced the state of the art in the following areas:
Note that six of these 13 articles (Schroden et al.; Reed et al.; Colombi et al.; Miner et al.; Coles and Zhuang; Wall and LaCivita) used multiattribute decision methods. Burk and Parnell (2011) surveyed 24 published military applications of portfolio decision analysis between 1992 and 2010; 18 of them (75%) used multiattribute DA.
The following section develops a technically sound yet reasonably easy to apply method for MADA modeling under certainty, from establishing one’s value structure through calculating multiattribute value, along with methods of handling cost vs. value and of dealing with problem stakeholders. The section after that deals more briefly with MADA under uncertainty, in which the result of the decision will be in part the result of chance. Following that, a section discusses sensitivity analysis, an indispensable step for the analyst to show the robustness of his or her result, and another briefly discusses available commercial software for DA applications. The chapter concludes with a real-world example of the application of these methods.
Those who are interested in an applications-oriented text on these topics may find Clemen and Reilly (2001) helpful, particularly Chapters 15 and 16 for MADA. Each chapter also has a good list of references at the end.
In business, most decisions come down to a single criterion: money (to include proxies for future streams of money, such as market share). This is appropriate because businesses exist primarily for the one purpose of making money. In the public sector, including the military, it is not so simple. There are multiple stakeholders with many different priorities and multiple incommensurable objectives. Money is certainly always an issue, but so is readiness for different kinds of conflicts, troop welfare, training, development of future capabilities, and sometimes national civil priorities like equal opportunity. It is common for a decision to be perplexing because there is no obvious alternative that is strong in all areas, so that some difficult tradeoff must be made. To weigh these incommensurables, the most straightforward technically sound approach is a value model (VM). This requires the following things: a value hierarchy, measures for each objective, ranges of variation for each measure, value functions to translate measure scores to single-attribute value, and relative weights to relate the different criteria to each other. By using this process, analyzing a difficult multiattribute decision can be broken down into a series of easily understood and accomplished steps.
This section starts with a discussion of the need to thoroughly understand what the DM is trying to achieve, recommending an approach called Value-Focused Thinking. Next is an account of how to gather all relevant information together in a consequence table. With these preliminaries accomplished, the following subsection presents the technical details of a value model that can calculate a best alternative, using a notional military example. Subsequent subsections discuss the special role of cost and the particular problems when one has multiple decision makers.
In a sense, decision analysis always starts with alternatives, since we have to be aware of at least two alternatives before understanding that we are in a decision situation at all. Nevertheless, Keeney (1992, 2008) has given us the insight that in a multiattribute problem, the first order of business is to thoroughly understand one’s values, not the alternatives. After all, he reasons, our values are why we are concerned with the decision at all, why we would rather have one outcome rather than another. There may be many problems in which the values are clear and agreed to by all, but those are not generally the ones that get referred for careful analysis. If we have a thorough understanding of what we value in a decision, and make our estimation of value more precise by defining objectives and specifying desired outcomes, using engineering units if possible, we are more likely to make a good choice. A special issue of Military Operations Research (Vol. 13, no. 2, 2008) was devoted to Value-Focused Thinking, showing its fundamental importance to MADA.
Another reason to focus first on values is that they will actually help us develop better alternatives. For each objective, consider what type of alternative is likely to perform especially well in that particular area. This will result in a variety of possible alternatives that are different from each other and perform well, at least in one area. Remember that your outcome can be no better than the best alternative that you consider. In order to get better outcomes, it makes sense to develop alternatives that perform well in areas that you value.
The values, objectives, and measures for a decision problem do not come from the analyst. They have to be elicited from the DM and from other stakeholders in the problem. This is not always easy to do, especially when there are multiple stakeholders with different ideas of what is valuable. However, experience shows that it is not always as difficult as one might expect. Stakeholders rarely value opposite things; more commonly, they value the same things, but put them in different orders. It will help if they can be encouraged to move away from thinking about alternatives, where they often already have identified favorites, and toward thinking only of what performance outcomes would be of value.
For purposes of clear communication, it is often a good idea to organize the objectives into a value hierarchy, with related objectives grouped together. Figure 3.1 provides a realistic example of how this might be done for a decision about procuring an imagery surveillance system. There are three main objectives that provide value to the DM. One of the objectives is further broken down into subobjectives. Each subobjective has one or more measures that will be used to assess performance. There is a consistent numbering scheme for clear identification. Collectively, performance on the measures should tell the DM everything he or she needs to know to identify the best alternative – they should be collectively exhaustive. Care should also be taken that no two measures actually measure the same aspect of performance – they should be mutually exclusive. These ideals are relatively easy to approach, but hard to hit exactly, especially for a complex system. Some complex decision problems require one hundred or more separate measures to satisfy each stakeholder that their needs are represented (Burk et al., 2002; Parnell et al., 2004).
Figure 3.1 Example value hierarchy.
Objectives can be characterized as fundamental or means. A fundamental objective is something that is desired for its own sake, at least in the context of the problem at hand. A means objective provides a method of accomplishing a fundamental objective. The subobjectives shown in Figure 3.1 would qualify as fundamental; if the subobjectives under “Collect Data” were “Fast Slew Rate” and “High Quantum Efficiency,” these would be means objectives. In general, fundamental objectives are preferable because they reflect what the DM really wants, rather than a way of getting it. On the other hand, it is often easier to identify natural and practical measures for means objectives.
The measures at the lowest level of the value hierarchy can be natural, proxy, or constructed. A natural measure is on a scale of engineering units of time, distance, or some other physical quantity, and it directly measures what is of interest, as “minutes” provide a direct natural measure of “Response Time.” A proxy measure is also in engineering units, but measures something associated with the desired outcome instead of the outcome itself. For instance, “Weight” might be used as a proxy measure for “Cost,” since the weight of a device is often closely associated with its cost (among devices of the same type). If there is no natural or proxy measure that is feasible, practical, and cost-effective, recourse may be had to a constructed scale. In the simplest form, this can be no more than an expert’s judgment that an alternative is “Excellent” or “Good” or “Fair” or “Minimally Acceptable” on the given objective. However, for public sector decisions with large consequences, a more complete, clear, and detailed scale may be necessary. Such a scale can be constructed by describing the best performance in a few sentences, then describing the minimally acceptable performance in a few sentences, then adding similar descriptions of intermediate points, for a total of perhaps five. Chapter 3 of Clemen and Reilly (2001) includes good discussions both of natural, proxy, and constructed measures and of fundamental and means objectives.
Some objectives can be very difficult or impossible to measure objectively. This might include an objective like “User Friendliness” in a system acquisition, or “Future Development Potential” in a decision about research direction. There is an understandable tendency among decision makers, perhaps especially among practical-minded military ones, to focus on what is objectively measurable. This can be carried too far. Some things are important even if they are not easily measurable. It is better to have an imprecise measure of an important objective than to ignore it altogether. If no natural or proxy measure is available, then a suitable constructed measure, backed by the judgment of credible stakeholders and subject matter experts, should be developed.
The measures at the bottom of the value hierarchy are what count when it comes to calculating multiattribute value. The higher levels are there to provide organization, clarity, and communication. Needless to say, a sound set of measures, based on a cogent identification of lowest-level objectives, is crucial to the success of the analysis.
An alternative is a possible decision, anything that might be done in response to a problem. As noted above, the decision outcome cannot be better than the best alternative, so it is sometimes worthwhile to spend more time finding a better solution than finding the absolute optimal among a set you already have. Do everything you can to generate a wide variety of good alternatives. Do not start with a group “brainstorming session” – those tend to be dominated by the first ideas presented by the most forceful personalities, even when people are instructed to suspend judgment. Instead, have people do independent thinking first, then combine the results. Perhaps a new and better alternative can be generated by combining features of two others. Give your subconscious time to operate. Challenge your constraints. Look for alternatives that perform particularly well on each objective. Set high aspirations, since you won’t do any better than your aspiration level. Ask for suggestions and research what others have done. Do not stop until you have a range of distinctly different alternatives and if possible at least one alternative that does well on each objective, or until time spent on other aspects of the problem (or on other problems) would be more productive.
Let us suppose that we have a well-defined problem, a set of objectives for the problem, and a set of measures for the objectives to determine the value of an alternative that we might select. Suppose also that we have a set of alternatives, and that we have evaluated the alternatives on the objectives, using the scales of measure. This gathering of data can easily be the most difficult part of a decision problem, but we will assume it has been done. The next thing to do is to collect the results in a consequence table, with one column for each alternative and one row for each measure, as shown in Table 3.1 for three hypothetical alternatives using the value model in Figure 3.1.
Table 3.1 Consequence table for the example alternatives.

| Measures | Albatross | Bluebird | Canary |
|---|---|---|---|
| 1.1.1 Coverage (km² per image) | 10 | 20 | 20 |
| 1.2.1 Ground Sample Distance (cm) | 11 | 18 | 23 |
| 1.2.2 Number of Spectral Bands (count) | 6 | 3 | 3 |
| 2.1.1 Response Time (min) | 15 | 20 | 20 |
| 3.1.1 Interoperability (constructed scale) | “Good” | “Fair” | “Minimally acceptable” |
The consequence table will be invaluable for communicating results, but sometimes it can also enable us to simplify or even solve the problem. If one alternative is at least equal in performance to another in every measure, and strictly better in at least one, then it can be said to dominate the second, which can be eliminated from further consideration (after due diligence to ensure no important consideration has been left out). In Table 3.1, it can be seen that Bluebird does just as well as Canary in Coverage, Number of Spectral Bands, and Response Time, and better in Ground Sample Distance and Interoperability, and so Bluebird dominates Canary, which can be eliminated. Even if there is no strict dominance, it may be that one alternative does much better in some measures, and at most, only slightly worse in others. In the most favorable cases, the best decision can be found by looking at the consequence table, with no further formal analysis required.
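A dominance screen of this kind is easy to automate. The following sketch (illustrative Python using the Table 3.1 scores) pre-orients all scores so that higher is always better: the “less is better” measures (Ground Sample Distance, Response Time) are negated, and the constructed Interoperability scale is mapped to assumed ordinal levels (3 = “Good”, 2 = “Fair”, 1 = “Minimally acceptable”).

```python
# Illustrative dominance screen over a consequence table (data from Table 3.1).
scores = {
    #            coverage, -GSD, bands, -time, interoperability level
    "Albatross": [10, -11, 6, -15, 3],
    "Bluebird":  [20, -18, 3, -20, 2],
    "Canary":    [20, -23, 3, -20, 1],
}

def dominates(a, b):
    """True if a is at least as good as b on every measure and strictly
    better on at least one (with all scores oriented higher-is-better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Keep only alternatives that no other alternative dominates.
non_dominated = [
    name for name, s in scores.items()
    if not any(dominates(t, s) for other, t in scores.items() if other != name)
]
print(non_dominated)  # Bluebird dominates Canary, so Canary drops out
```

Note that the screen only eliminates Canary; Albatross and Bluebird each do better than the other on some measure, so neither dominates.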
If a multiattribute decision is perplexing enough to require detailed analysis, especially if it is a public decision that must be explained and defended, then value modeling is likely to be the best approach. It offers an objective, quantitative, transparent, logical, and theoretically sound method to estimate the overall value for each alternative (Keeney & Raiffa, 1993; Watson & Buede, 1987; Kirkwood, 1997). It can also provide a lot of insight into the problem.
A value model starts with the attributes or measures identified in the lowest level of the value hierarchy (Figure 3.1). An additive value model has this mathematical form:

$$v(x) = \sum_{i=1}^{n} w_i\, v_i(x_i) \qquad (3.1)$$

where v is the multiattribute value function, x is a multiattribute alternative, n is the number of measures, $w_i$ is the swing weight of the i^th measure, $v_i$ is the single-attribute value function for measure i, $x_i$ is the score of alternative x on measure i, and $\sum_i w_i = 1$, as described in the following sections. Thus, once the measures are identified and the alternatives all scored on them, only two things are necessary to calculate multiattribute value: the set of swing weights $w_i$ and the set of single-attribute value functions $v_i$. For decisions under certainty with n ≥ 3, when there is mutual preferential independence among the attributes, we can be assured that there is a set of weights that will reflect the decision maker’s true values, so this additive form is very widely applicable. (Preferential independence means that preferences among levels in each attribute are the same regardless of the levels of other attributes. This condition is commonly met. See Chapter 3 of Keeney & Raiffa, 1993, for details.)
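As a concrete illustration of Equation (3.1), the sketch below computes a multiattribute value on the 0 to 100 scale for an alternative scored on the five measures of Figure 3.1. The weights, measure ranges, and value functions are invented for illustration; in practice all of them would be elicited from the stakeholders.

```python
# Minimal sketch of the additive value model v(x) = sum_i w_i * v_i(x_i).
# All weights, ranges, and value functions below are illustrative only.

weights = [0.30, 0.25, 0.10, 0.20, 0.15]        # swing weights, sum to 1

value_functions = [
    lambda cov: 100 * (cov - 5) / (25 - 5),      # coverage: more is better
    lambda gsd: 100 * (30 - gsd) / (30 - 5),     # ground sample distance: less is better
    lambda bands: 100 * (bands - 1) / (8 - 1),   # spectral bands: more is better
    lambda t: 100 * (60 - t) / (60 - 1),         # response time: less is better
    lambda lvl: {1: 0, 2: 50, 3: 100}[lvl],      # constructed interoperability level
]

def multiattribute_value(x):
    """Additive multiattribute value on the 0-100 scale."""
    return sum(w * v(xi) for w, v, xi in zip(weights, value_functions, x))

# Albatross scores from Table 3.1, with Interoperability "Good" mapped to 3.
print(round(multiattribute_value([10, 11, 6, 15, 3]), 1))
```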
The proper first step in creating a quantitative value model from a set of measures is neither determining the swing weights $w_i$ nor determining the form of the functions $v_i$, but instead, simply determining the range of variation of each measure. This is partly to establish the domain over which each $v_i$ will be defined. More importantly, it is necessary in order to assign meaningful swing weights. In the context of multiattribute alternative comparison, measures and attributes in the abstract do not have value – only particular levels of measures and attributes have value. It is a fallacy to think that one can meaningfully attribute value to a particular measure without considering what range of scores on the measure one is referring to. Doing so may be the most common mistake in multiattribute decision analysis.
There are two ways to establish the range of variation of a measure. The first is to look at the range of alternative performance in the measure. This has the advantage of focusing attention on the real trade space. However, it encourages “Alternative-Focused Thinking” rather than the recommended Value-Focused Thinking. Also, if a new alternative comes to light that scores outside this range it may force you to redo some of your work quantifying the value model. The second method is to use the range from some minimally acceptable level to an ideally desired level. This has the advantage of encouraging development of new high-value alternatives. However, if the ideal level is set too high, it can divert attention to pondering the value of the unattainable. The recommended practice is to find an intelligent compromise between the two methods. One caution: the identified minimum-performance level must represent truly acceptable performance. A shift from unacceptable to acceptable within the range of the measure is not consistent with the additive form of Equation (3.1). An unacceptable minimum performance may be the second-most common mistake in multiattribute decision analysis.
Single-attribute value functions for the n measures are necessary for two reasons. The first is that the different measures are generally in different units, as they are in Figure 3.1, and they must be converted into common units of value in order to be added to each other. The second reason is to account for returns to scale. Value may not increase at the same rate as the measure moves from its low-performance limit to its high-performance limit. For instance, the scale for Response Time in Figure 3.1 might run from 1 minute (best) to 60 minutes (minimum acceptable performance), but most of the value might be lost after 5 minutes, with a much more gradual loss of value after that. The value function for Response Time will capture this effect.
The domain (x-axis) of each function $v_i$ is the range of the measure as identified above, and the range (y-axis) of the function is an interval scale that is conventionally taken to be 0 to 100. Let $x_i^-$ be the minimum-performance score on measure i, and let $x_i^+$ be the maximum-performance score. Then $v_i(x_i^-) = 0$ and $v_i(x_i^+) = 100$. If more is better on measure i, then $x_i^- < x_i^+$ and $v_i$ is monotonically increasing, but if less is better, then $x_i^- > x_i^+$ and $v_i$ is monotonically decreasing. (Occasionally the stakeholders will identify an intermediate measure value that is best, so that $x_i^+$ occurs in the interior of the domain of $v_i$, which will then be non-monotonic and go to zero at both ends of its domain.)
In the simplest cases, it is acceptable to assume that value increases linearly as measure i goes from $x_i^-$ to $x_i^+$, so that

$$v_i(x_i) = 100 \cdot \frac{x_i - x_i^-}{x_i^+ - x_i^-}$$
More complex functions have been used to represent nonlinear returns to scale in a way that can be parameterized (e.g., Burk et al., 2002), but these functions put a priori constraints on the form of the value function that have no theoretical justification. It is usually just as easy to use the following midvalue splitting procedure to elicit a piecewise linear approximation of a nonlinear return to scale (Keeney & Raiffa, 1976). Using measure 2.1.1 in Figure 3.1 as an example, i.e., minutes of response time, suppose that we have established that $v_{2.1.1}(60~\text{min}) = 0$ and $v_{2.1.1}(1~\text{min}) = 100$. Then we can establish a midpoint in value by asking a suitable group of stakeholders and subject matter experts to tell us when half the value is lost as response time goes from 1 min to 60 min. If the answer is 4 min, we have $v_{2.1.1}(4~\text{min}) = 50$. Further questions might ask when half the value is lost in going from 1 min to 4 min, and then in going from 4 min to 60 min, perhaps resulting in $v_{2.1.1}(2~\text{min}) = 75$ and $v_{2.1.1}(10~\text{min}) = 25$, respectively. These points can be used to define a piecewise linear function, as shown by the solid line in Figure 3.2. If desired, additional points can be elicited to provide additional precision. Of course, this method imposes its own functional form, which also has no theoretical justification, but the method has the flexibility to approximate any reasonable function to any desired precision. These functions measure value, which is inherently subjective, and it will not be possible to achieve engineering precision anyway. The analyst may feel an urge to replace the piecewise linear function with a smoother analytical approximation; this temptation should be resisted. The breakpoints are directly elicited and they should be respected.
Also, a smooth function will create a spurious impression of precision. The straight lines on the graph will remind us that these results are not to be taken to three significant figures.
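A piecewise linear value function of this kind is simple to implement directly from its breakpoints. This sketch uses the Response Time points elicited in the example above: v(1) = 100, v(2) = 75, v(4) = 50, v(10) = 25, v(60) = 0.

```python
# Piecewise linear single-attribute value function for Response Time,
# built from the elicited (minutes, value) breakpoints in the text.

breakpoints = [(1, 100), (2, 75), (4, 50), (10, 25), (60, 0)]

def v_response_time(minutes):
    """Interpolate linearly between elicited breakpoints; clamp to the
    endpoint values outside the elicited range."""
    if minutes <= breakpoints[0][0]:
        return breakpoints[0][1]
    if minutes >= breakpoints[-1][0]:
        return breakpoints[-1][1]
    for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
        if x0 <= minutes <= x1:
            return y0 + (y1 - y0) * (minutes - x0) / (x1 - x0)

print(v_response_time(4))   # an elicited midvalue point
print(v_response_time(7))   # interpolated between the 4 min and 10 min points
```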
Figure 3.2 Example approximate single-attribute value function.
The solid line in Figure 3.2, which is concave from above, represents an “increasing return to scale” in which the rate of value accumulation increases as one approaches the best performance. The gray dotted line would represent a “decreasing return to scale.” The distinction between increasing and decreasing return to scale should not be confused with the distinction between more-is-better (monotonically increasing) and less-is-better (decreasing) functions. Both the curves in Figure 3.2 show less-is-better. A more-is-better curve with an increasing return to scale would be increasing to the right and concave from above. It is also not uncommon to see curves that are not consistently either increasing or decreasing in return to scale, but rather have an S shape, with most of the value increase coming in the middle of the $x_i$ range.
The single-attribute value functions $v_i(x_i)$ and the multiattribute value function v(x) are sometimes improperly called “utility” functions. This comes from the seminal work of von Neumann and Morgenstern (1947), who used the term “utility function” for functions elicited using reference lotteries for decisions under uncertainty. Utility functions are used mathematically to evaluate alternatives in a way that is very similar to value functions, but they are elicited differently. They capture both return to scale and attitude toward risk. Value functions capture only return to scale.
It remains to elicit the swing weights $w_i$. These express the value to the stakeholders of the swings from $x_i^-$ to $x_i^+$ on each of the attributes. Many good ways of doing this have been proposed, including using visual or tactile aids such as physical coins or tokens to distribute among the objectives, bar or pie graphs to partition, or imagined balance beams. The following method has been found to be practical, easy to understand, and as precise as the domain allows. It is also technically valid.
The first step is relatively easy: rank order the n swings. This requires eliciting answers to questions of the form, “Which is of more value: a swing from $x_i^-$ to $x_i^+$ on attribute i, or a swing from $x_j^-$ to $x_j^+$ on attribute j?” By repeating this question with different pairs of attributes, one can rank all of them from the one with the most important swing to the one with the least. For an initial quick look, or when time is short and/or consequences small, approximate surrogate swing weights can be derived directly from the ordinal information. A number of ways to do this have been proposed, including rank sum (Jia, Fischer & Dyer, 1998), rank reciprocal (Stillwell, Seaver & Edwards, 1981), sum reciprocal (Danielson, Ekenberg & He, 2014), equal ratio (Lootsma, 1999), rank order centroid (Rao & Sobel, 1980), and rank order distribution (Roberts & Goodwin, 2002). An empirical study (Nehring, 2015) found that the rank sum method is closest on average to matching weights derived from elicitation. Using this method,

$$w_{(i)} = \frac{2(n + 1 - i)}{n(n + 1)}$$

where $w_{(i)}$ is the i^th largest swing weight and n is the number of objectives.
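The rank sum surrogate weights are straightforward to compute for any n, as in this short sketch:

```python
# Rank sum surrogate swing weights: the swing ranked i (1 = most important)
# gets weight proportional to (n + 1 - i), normalized to sum to 1.

def rank_sum_weights(n):
    """Return the n surrogate weights in rank order, largest first."""
    return [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

print([round(w, 3) for w in rank_sum_weights(5)])
```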
In most cases it will be better to go ahead and elicit the attribute swing weights themselves, rather than rely on surrogates. This will increase precision and reduce opportunities to question the results. To do this, start by assigning a relative non-normalized swing weight $\omega_{(1)} = 100$ to the most important attribute. Then ask the stakeholders to give the relative importance of the second-most important swing, yielding perhaps $\omega_{(2)} = 85$. Continue down the ordered list of attributes until a relative weight is elicited for $\omega_{(n)}$. Finally, normalize the weights so that they sum to 1:

$$w_{(i)} = \frac{\omega_{(i)}}{\sum_{j=1}^{n} \omega_{(j)}}$$
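A short sketch of this normalization step, with illustrative elicited values (100 is assigned to the most important swing and the rest are elicited relative to it):

```python
# Normalizing directly elicited swing weights. The omega values below are
# illustrative, not elicited from any real stakeholder group.

omegas = [100, 85, 60, 40, 25]           # non-normalized relative swing weights
total = sum(omegas)
weights = [om / total for om in omegas]  # normalized so they sum to 1

print([round(w, 3) for w in weights])
```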
It is important to remember at each stage of the elicitation process that what are being compared are specific swings from low-performance values ( ${x}_{j}^{-}$ ) to high-performance values ( ${x}_{j}^{+}$ ) in each measure. We are not comparing the objectives in any abstract sense, or the values of swings from worst imaginable to best possible.
If the number of measures n is very large compared to the patience of the stakeholders, it may be necessary to simplify the elicitation process by asking for local weights at each level of the value hierarchy. For instance, in Figure 3.1, measure 1.2.1 would be weighted against 1.2.2 only, perhaps yielding $w_{1.2.2}^{\text{loc}}=0.4$ (and consequently $w_{1.2.1}^{\text{loc}}=0.6$ ). Then subobjectives 1.1 and 1.2 can be compared, perhaps yielding $w_{1.2}^{\text{loc}}=0.3$ , and main objectives 1, 2, and 3 can be compared, perhaps yielding $w_{1}^{\text{loc}}=0.5$ . Then the global weight for measure 1.2.2, i.e., the weight used in Equation (3.1), will be the product of the local weights up the hierarchy: $w_{1.2.2} = 0.4 \times 0.3 \times 0.5 = 0.06$ . This procedure will ensure that the global weights sum to 1, as they are required to do, while simplifying the required elicitations by asking for comparisons between related measures only. A disadvantage is that it may be hard for the stakeholders to keep in mind what a local weight really means at a higher level in the hierarchy: the value of a simultaneous swing of all included measures from their minimum-performance levels to their maximum-performance levels.
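The local-to-global multiplication can be sketched as a walk down the hierarchy. The local weights for objective 1, subobjective 1.2, and measure 1.2.2 come from the example above; all other local weights are hypothetical fill-ins chosen so each level sums to 1:

```python
# Each node is (local weight, children); children is None at a leaf measure.
hierarchy = {
    "1": (0.5, {"1.1": (0.7, {"1.1.1": (1.0, None)}),
                "1.2": (0.3, {"1.2.1": (0.6, None),
                              "1.2.2": (0.4, None)})}),
    "2": (0.3, {"2.1": (1.0, None)}),
    "3": (0.2, {"3.1": (1.0, None)}),
}

def global_weights(node, parent=1.0):
    """Multiply local weights down the hierarchy to get global weights."""
    out = {}
    for name, (w_loc, children) in node.items():
        w = parent * w_loc
        if children is None:
            out[name] = w
        else:
            out.update(global_weights(children, w))
    return out

gw = global_weights(hierarchy)
# gw["1.2.2"] is 0.5 * 0.3 * 0.4 = 0.06, and all leaf weights sum to 1
```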
Once each w _{i} , v _{i} , and x _{i} is in hand, it is straightforward to calculate the multiattribute value of an alternative x using Equation (3.1). The result can be interpreted as the relative value of the alternative on a 0 to 100 scale, where 0 is the value of a hypothetical alternative that performs at the minimum acceptable level on each measure, and 100 would be the value of an alternative that performed at the highest level on each measure. (Note that a multiattribute value score of 0 does not mean that the alternative has zero value in the ordinary or absolute sense – it means that its increment of value over the minimum acceptable is zero.)
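The additive calculation of Equation (3.1) can be sketched as follows; the two-attribute example and its linear value functions are hypothetical:

```python
def multiattribute_value(weights, value_functions, x):
    """Additive multiattribute value: v(x) = sum_i w_i * v_i(x_i).

    On a 0-100 scale when each single-attribute value function is."""
    return sum(w * v(xi) for w, v, xi in zip(weights, value_functions, x))

def linear_value(lo, hi):
    """Linear single-attribute value from 0 at lo to 100 at hi."""
    return lambda x: 100 * (x - lo) / (hi - lo)

# Hypothetical two-attribute model: weight 0.7 on an attribute ranging
# 0-10, weight 0.3 on one ranging 100-500.
weights = [0.7, 0.3]
vfs = [linear_value(0, 10), linear_value(100, 500)]
score = multiattribute_value(weights, vfs, [10, 100])
# Best on attribute 1, worst on attribute 2: 0.7*100 + 0.3*0 = 70.0
```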
Figure 3.3 shows how calculations might be completed for the two nondominated alternatives in Table 3.1, based on the value model in Figure 3.1. The definition of the value function has been completed with notional value functions and attribute swing weights. The numerical results can also be presented insightfully in a stacked bar chart of weighted value, as shown in Figure 3.4. Note that Albatross scores better than Bluebird in four out of the five measures, but Bluebird still scores higher overall. This is because its substantial advantage in the most important measure has been judged by the stakeholders (via the elicitation for value functions and swing weights) to be more important than all Albatross’s advantages in all the other attributes put together. It is also noteworthy that neither alternative scored particularly well in Response time, which might suggest a search for another option that does well on that attribute.
Figure 3.3 Example multiattribute value calculations.
Figure 3.4 Example stacked bar chart of weighted value.
Of course, a conclusion reported as “Bluebird beats Albatross by a score of 53.2 to 45” will not convince any decision maker. The conclusion has to be translated back into the terms in which the problem was posed. In this case, it might be phrased like this:
Bluebird has an overwhelming advantage in coverage performance. Albatross has some advantages of its own, but they are all less important and together do not take the advantage away from Bluebird. Albatross’s biggest advantages were in ground sample distance and interoperability, but the stakeholders found these differences in performance to be much less important than coverage for this decision.
Monetary cost is often an important consideration, in public as well as private sector decisions. This can be simply acquisition cost, as in an off-the-shelf procurement, or it can include various development, testing, acquisition, deployment, maintenance, upgrade, and finally, disposal costs over a long period of time. Often these can be combined into one net present value, using some annual discount rate for future costs. (In commercial applications, net present value would include future revenue streams.) To include these costs in a multiattribute decision, one common approach is simply to include cost as an attribute. However, it is usually more insightful to keep track of cost separately from performance value and to plot the alternatives on a cost vs. value scatterplot, as in Figure 3.5. This will reveal when a small increase in expenditure will yield a large increase in performance value, and when a large cost savings can be had with little loss of performance. Also, it will reveal when one alternative has better performance and lower cost than another, thus dominating it. The set of nondominated alternatives defines the efficient frontier of the plot.
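Screening for the efficient frontier is a straightforward pairwise check. A minimal sketch, with hypothetical alternatives and costs:

```python
def nondominated(alternatives):
    """Return the alternatives on the efficient frontier of a cost-value plot.

    alternatives: list of (name, cost, value).  A dominates B if A costs
    no more and delivers at least as much value, with at least one strict
    inequality.
    """
    def dominated(b):
        return any(a[1] <= b[1] and a[2] >= b[2] and
                   (a[1] < b[1] or a[2] > b[2])
                   for a in alternatives if a is not b)
    return [alt for alt in alternatives if not dominated(alt)]

# Hypothetical (name, cost in $M, multiattribute value) triples
alts = [("Albatross", 120, 45.0), ("Bluebird", 150, 53.2),
        ("Cormorant", 160, 50.0)]
frontier = nondominated(alts)
# Cormorant costs more than Bluebird for less value, so it drops out
```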
Figure 3.5 Cost vs. value scatterplot.
Decision analysis is about modeling preferences, and it requires eliciting preferences and values from the decision maker and from those with a legitimate stake in the outcome of the decision. The decision will only be as good as the information that the modeler can get from these stakeholders. This is not always an easy process. The stakeholders may not be accustomed to thinking in decision-analytic terms of measures, scales, weights, value functions, and so forth. However, these concepts are intuitive enough that people usually pick them up fairly quickly. The analyst’s job is to lead them into the correct frame of mind while eliciting from them what they know. It is not always as hard as one might expect.
Sometimes the situation is complicated because more than one DM must agree on the decision. Even if there is one person with the formal authority to make the decision, in practice, he or she may have to get agreement, or at least acquiescence, from a group of stakeholders. Somehow the values of all these people have to be represented in one value model. The ideal approach is to get all the stakeholders together in one room with a trained decision analyst to lead them through the process, and have them develop and agree to a consensus value hierarchy, set of measures, value functions, and weights. The discussion is likely to lead to improved understanding and ultimately to a better value model. Unfortunately, people at the level where they can be credible sources for value on a major decision will be very busy and it will be hard to get them to commit the time required for this process. This may lead to repeated rounds of interviews, smaller meetings, drafts, comments, and revisions before consensus is reached.
One hopes that a consensus value model can be agreed on, just as one hopes the US Congress can agree on what laws the country needs. Sometimes it seems that consensus will be impossible because key stakeholders just have contrary values. In the military domain, this probably does not happen as often as it does in the public sector in general, since all stakeholders have the common ultimate goal of defending the country. They only disagree about the best means to that end. Stakeholders may differ about which objective is more important, but it is seldom the case that one thinks an objective is worth 95% of the problem and another thinks it is only 5%. More often it is more like 60% vs. 40%. If two conflicting stakeholders have the time to sit down and discuss their values and their justifications for them, there is hope that they will gain more mutual understanding and arrive at a compromise. This is especially true if the discussion takes place in the presence of a mutual superior. Sometimes initial estimates of value have some character of negotiating positions rather than carefully considered and firmly held beliefs.
One obvious approach to the problem of multiple stakeholders with different value structures is to simply ask them to vote on the alternative models. This approach should be viewed with extreme caution. The problem is that there is no way to aggregate votes on three or more alternatives that is without serious flaws. For instance, one might try “instant runoff,” in which each voter ranks the alternatives from most preferred to least. Then the alternative with the fewest first-place votes is eliminated and the votes distributed to the voters’ second choices. This is repeated until one alternative has a majority. But suppose 30% of the voters prefer A to B to C, 36% prefer B to A to C, and 34% prefer C to A to B. On the first ballot A is eliminated, and on the second B wins. This violates majority rule, since 64% of the voters preferred A to B. Another approach using voter ranks would be to weight the votes in a Borda count: first place among n alternatives receives n points, second place receives n − 1, and so forth. This method can also violate majority rule. If there are five voters and two prefer A to B to C, two prefer B to C to A, and one prefers C to A to B, then B wins, though 60% of the voters prefer A to B. In fact, Arrow (1950) proved that there is no way to aggregate votes for three or more alternatives that does not have a flaw like this or one just as serious. This is known as “Arrow’s Impossibility Theorem.” The conclusion is that voting is a poor way to make a group decision if there are more than two alternatives.
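The five-voter Borda example can be checked with a few lines of counting:

```python
def borda(ballots, n_candidates=3):
    """Borda count: first place gets n points, second n-1, and so on.

    ballots: list of rankings, most preferred candidate first."""
    scores = {}
    for ballot in ballots:
        for place, cand in enumerate(ballot):
            scores[cand] = scores.get(cand, 0) + (n_candidates - place)
    return scores

# Two voters prefer A>B>C, two prefer B>C>A, one prefers C>A>B
ballots = [("A", "B", "C")] * 2 + [("B", "C", "A")] * 2 + [("C", "A", "B")]
scores = borda(ballots)
# B wins the Borda count (11 points to A's 10 and C's 9) even though
# three of the five voters rank A above B
prefer_a_over_b = sum(b.index("A") < b.index("B") for b in ballots)
```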
The best approach when there are multiple stakeholders with apparently different values is to facilitate a discussion of what’s important in the problem until they come to consensus on a value model. This can be difficult, but in a military application it is not hopeless because there is an underlying common goal. If agreement on one value model seems beyond reach, different value scores can be calculated based on the values of different stakeholders and the results compared. Sometimes there is little difference in the final results because the differences in multiattribute value are caused mostly by differences in performance measure score (x _{i} ) and not by the weights (w _{i} ) and value functions (v _{i} ). Analysis of where the important differences lie may lead to better understanding and perhaps consensus. If time is short and a decision must be made (e.g., military operations, or captaining a ship at sea), a single decision maker must be identified. If all but two alternatives can be eliminated, voting is safe because Arrow’s theorem no longer applies.
In previous sections we have assumed that the outcomes of all possible decisions are known with certainty. In defense applications that is often not the case. Chance reigns on the battlefield, and it is often prominent before and after battle as well. Sometimes system acquisition decisions must be made before the technical outcome of system development is known with perfect confidence. Force development and training decisions have to be made before the future political state of the world can be known, and before the armed forces can know what kind of war they will be asked to fight. This aspect of DA is worthy of a chapter of its own, and indeed many chapters. This section will identify some of the main concerns when analyzing a multiattribute decision under conditions of uncertainty: expected value decisions, subjective probability elicitation, expected utility decisions, and limitations of the additive model. It will then give a recommended approach.
One crude approach to uncertainty is to simply include “Risk” or “Uncertainty” as one of the attributes, to indicate more value from alternatives that have less uncertainty. This does not do a very good job of capturing the effects of having multiple possible outcomes from a given decision. A better approach is the classical way to lay out a decision under uncertainty: putting it into a decision tree, such as the one in Figure 3.6. Squares in a decision tree represent choices and circles represent chance events. Outcome probabilities for the chances are given to the right of the corresponding circle. Final consequences are shown on the right. For simplicity, we will begin by supposing there is a single decision criterion that is to be maximized. This criterion could be additive multiattribute value as in Equation (3.1) (subject to some important caveats which will be explained below), but we will start by using dollar return for illustration. The decision tree makes it easy to determine the best course of action according to expected value (EV): starting from the right, attribute to each chance node the expected value of its possible outcomes, and to each decision node the value of the option that leads to the greatest EV. In Figure 3.6 that would be the Stocks option, since the expected value is $12K, versus $11.4K for Bonds and $11K for Bank.
Figure 3.6 Decision tree for a simple investment decision.
Note that selecting the Stocks option in Figure 3.6 could result in losing everything because of bad luck. In ordinary speech one might say that picking Stocks was therefore a bad decision if that had happened. From an analytical point of view that is not correct. A good decision can lead to a bad result because of bad luck, just as a bad decision can lead to a good outcome because of good luck. This distinction between good decisions and good outcomes is one of the most important things to understand about decision analysis under uncertainty.
A complicated decision situation can involve a number of decisions under uncertainty made over time, with the options and probabilities at later decisions determined in part by the outcome of earlier chance events. For instance, a major system acquisition may involve a decision on what technologies to pursue, a decision on what kind of system to develop based on the result of technology development, a decision on which system configuration to acquire based on system development results, and so on. In financial matters, an outcome may include a regular stream of income or expenses for a period of time in the future (these are usually discounted by a fixed factor for every year in the future before they happen, to give a net present value, or NPV, to be used for current decision making). There may be an opportunity to structure a set of future decisions by buying a financial option, pursuing a certain research project, or otherwise incurring a certain but limited current cost in order to create the possibility of a very advantageous decision in the future. A complex situation like this can be captured in more complex decision tree, such as Figure 3.7.
Figure 3.7 Complex decision tree.
However complex, a decision tree is always straightforward to evaluate to get the best decision based on expected value, starting from the right and working to the left, assigning an EV to every node along the way. This will also show the best decision to make at every square decision node, should the various choices and chances lead to that node. Decision trees are easy to draw, understand, and evaluate, and they accurately and clearly capture the effects of chance events and decisions interleaved over time. However, they can become huge and unwieldy as the situation becomes more complex (this can be ameliorated to some extent with software; see below). They also can include much repetition, e.g., when the same chance is encountered for any decision at a given node. Nevertheless, a decision tree is usually the first step in understanding a complex decision under uncertainty.
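The right-to-left rollback can be sketched recursively. The payoffs and probabilities below are hypothetical, chosen to reproduce the expected values quoted for Figure 3.6 ($12K, $11.4K, $11K):

```python
def rollback(node):
    """Evaluate a decision tree by expected value, right to left.

    node is a number (terminal payoff),
    ("decision", [(label, child), ...])  -> max over options, or
    ("chance",   [(prob, child), ...])   -> probability-weighted average.
    Returns (expected value, best option label or None)."""
    if isinstance(node, (int, float)):
        return node, None
    kind, branches = node
    if kind == "chance":
        return sum(p * rollback(child)[0] for p, child in branches), None
    # Decision node: pick the option leading to the greatest EV
    best_label, best_ev = None, float("-inf")
    for label, child in branches:
        ev, _ = rollback(child)
        if ev > best_ev:
            best_label, best_ev = label, ev
    return best_ev, best_label

# Hypothetical tree in the spirit of Figure 3.6 (payoffs in $K)
tree = ("decision", [("Bank", 11.0),
                     ("Bonds", ("chance", [(0.6, 12.0), (0.4, 10.5)])),
                     ("Stocks", ("chance", [(0.5, 24.0), (0.5, 0.0)]))])
ev, choice = rollback(tree)
```

Nested decision and chance nodes, as in Figure 3.7, are handled by the same recursion without modification.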
Once the structure of a decision tree is established, the most troublesome remaining task is usually determining the probabilities. Probabilities can be needed for things that haven’t happened yet, or for things that have happened, but for which we don’t yet know the outcome. The former might be a future political change or the outcome of a research project; the latter might be the existence of extractable oil at a given drilling site. Both types of probability can be treated the same way. In the best situation, objective data may be available that allow one to estimate a probability. For instance, a geologist may know of 35 times when wells were drilled in certain geological conditions, and in 13 of them oil was found, giving an estimate of 37% probability of striking oil at a similar new drill site. More often, subjective probability estimates must be elicited from the decision maker or from acceptable subject matter experts. Chapter 8 of Clemen and Reilly (2001) provides some specific methods.
The method of deriving a best decision described above will result in getting the highest possible expected value, but when is it best to decide according to EV? For the sort of common decisions that are made every day or every week, it is usually best to decide according to EV. The Law of Large Numbers assures us that the sum of the outcomes of all the chancy decisions will tend to approach the sum of the expected values, so by maximizing EV we maximize our total result in the long run. However, for rare or once-in-a-lifetime decisions the Law of Large Numbers does not apply and expected value may not be the best criterion. For instance, in Figure 3.8 almost everyone would choose option B, though option A offers much better expected value. In situations like this, decision analysis cannot tell us which option to take without more information on the attitude of the decision maker toward risk.
Figure 3.8 A decision best made not on expected value.
Attitude toward risk can be elicited from a DM in the form of a utility function like those shown in Figure 3.9. The x-axis is the range of possible outcomes and the y-axis is an interval scale in arbitrary units that is traditionally termed “utility,” and is usually scaled from 0 (for the least preferred outcome) to 100 (for the most preferred). A utility function looks much like a single-attribute value function (Figure 3.2), but it is elicited very differently. Reference lotteries must be used to capture attitude toward risk. For instance, to develop a utility function over the domain in Figure 3.9, one would ask, “What fixed amount would you be willing to exchange on an equal basis for a 50/50 chance of winning $101,000 or losing $100,000?” The answer to this question, called a certainty equivalent, provides the Utility = 50 point on the utility function. If the answer is $500, then the DM is going by expected value and the point falls on the solid line in Figure 3.9. If the answer is −$34,000 (i.e., the DM would pay $34,000 to be excused from the lottery), then the Utility = 50 point falls on the dotted line, and the DM is risk-averse. If the answer is $35,000 (i.e., the DM would want that amount of money to forego a chance at the lottery), then the point falls on the dashed line and the DM is risk-seeking. Follow-up questions to find the certainty equivalents for 50/50 lotteries between outcomes of known utility can fill in more points on the utility function to any desired degree of precision. Needless to say, it can be difficult to get decision makers or stakeholders to commit the time required to carefully consider these theoretical and high-stakes lotteries and give truly well-thought-through certainty equivalents. Nevertheless, some such elicitation is required to capture the decision maker’s utility function. It should also be remembered that utility functions are specific to a given decision maker and specific to a given decision situation.
Figure 3.9 Example utility functions.
Once the utility function is found, it is straightforward to determine the recommended decision in any situation represented in a decision tree. Simply replace each outcome with its utility, and evaluate the tree based on expected utility rather than expected value. For instance, if for the situation in Figure 3.8 we set U(−$100K) = 0, U($101K) = 100, and elicited a utility function such that U($50) = 70, we would find that the utility of the sure thing exceeded the utility of the gamble, and so recommend taking the $50, as almost any person would do.
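The expected-utility comparison in this example is a one-liner:

```python
def expected_utility(probs, utilities):
    """Expected utility of a lottery."""
    return sum(p * u for p, u in zip(probs, utilities))

# Figure 3.8's choice: a sure $50 versus a 50/50 lottery between $101K
# and -$100K, with elicited U(-$100K) = 0, U($101K) = 100, U($50) = 70
eu_gamble = expected_utility([0.5, 0.5], [100, 0])
u_sure = 70
best = "sure $50" if u_sure > eu_gamble else "gamble"
```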
It is perhaps astonishing that we can be sure that in principle there is implicitly in the mind of any coherent decision maker facing any chancy problem a utility function such as that shown in Figure 3.9, and that all we have to do is elicit it and put it on paper, and that the decision based on expected utility will be the correct one. This result is due to von Neumann and Morgenstern (1947). The proof depends only on a few axioms that almost any DM will accept: ranking (any two outcomes are either equally preferred or one is preferred to the other), transitivity (if lottery A is preferred to B and B to C, then A is preferred to C), and a few others more technical to state but equally hard to deny.
One might be tempted to create an additive multiattribute utility function in the same form as Equation (3.1), only substituting utility functions ${u}_{i}\left({x}_{i}\right)$ elicited via reference lotteries for the single-attribute value functions ${v}_{i}\left({x}_{i}\right)$ . Unfortunately, this straightforward approach can only be used with extreme caution. Any multiattribute utility function of this form implies that the DM will be indifferent between (1) a 50/50 lottery between an alternative that is best on two attributes and another alternative that is worst on the same two, and (2) a 50/50 lottery between two alternatives, each of which is best on one of the two attributes and worst on the other. Many decision makers would prefer (2) in a choice like this. It can be shown that an additive model like Equation (3.1) will accurately capture a decision maker’s preference only if his or her preference structure shows additive independence in the situation, meaning that preferences between lotteries in one attribute are not affected by changes in lotteries in other attributes. This is much stronger than the preferential independence that is sufficient to justify the additive model under certainty. In fact, additive independence often seems not to hold in practice (von Winterfeldt & Edwards, 1986). There are functions more complex than the additive utility function that can be used to model DM preference without assuming additive independence, but they require more (and more difficult) elicitations from the DM. His or her patience will likely be taxed enough by the elicitation required for the additive model. Also, the results of modeling based on reference lotteries may well be harder for the DM to understand, accept, and be willing to take action on.
So what is a decision analyst to do to assist a decision maker with a multiattribute decision under uncertainty? In most cases it is probably best to start with the standard value model as developed under conditions of certainty, even though it implies dubious assumptions of risk neutrality and additive independence. Then if uncertainty seems to be a big part of the problem, ask a few test questions about reference lotteries to see what the situation is. Often risk-aversion, non-additive utility, and so forth are relatively small, at least compared to the main effects of value decomposition and alternative performance. In general, start with a simple model and only add complexity when it becomes clear that it is needed to capture the essential features of the problem. A complex model can quickly become too difficult to explain or understand, at least for non-specialists. If the decision maker cannot understand the model, he or she is unlikely to accept its results.
The basic idea of sensitivity analysis (SA, also known as post-optimality analysis) is to re-accomplish a finished value calculation after changing one or more of the parameters that went into the model, in order to see if the result changes. Let us suppose that a decision analysis has resulted in a recommended decision. Almost inevitably, some of those affected by the decision will dislike the recommendation. They will examine the modeling that supports it, looking for aspects they can question so as to throw its soundness into doubt. Since decision analysis necessarily relies on elicited subjective values that can never be known with engineering precision, they will find them. For this reason, it is always a good idea to do some sensitivity analysis to see to what extent the final recommendation depends on debatable inputs. SA also provides insight into what really matters in the problem, and is usually worth doing for that reason alone.
In the MADA example that resulted in recommending Bluebird over Albatross (Figure 3.3 and Figure 3.4), the multiattribute value scores for the two alternatives depended on three types of inputs: the swing weights ${w}_{i}$ , the single-attribute value functions ${v}_{i}$ , and the performance scores ${x}_{i}$ for the two alternatives. SA can be done on any of the three, but swing weights are the most common subjects because they can easily be seen to determine the answer and are wholly subjective in nature. The single-attribute value functions do not have so obvious an effect on the result, and the performance scores are generally based on engineering data. For decisions under uncertainty like Figure 3.6, elicited subjective probabilities are also prominent candidates for SA. This section will describe two methods for displaying SA results when one input varies at a time (one-way sensitivity analysis), using swing weights and probabilities as the subjects of investigation. For more detail on these methods, and other methods (including two- and three-way SA), see Chapter 5 of Clemen and Reilly (2001).
In the Albatross vs. Bluebird analysis, let us suppose that the Albatross program manager is distressed that his system was not selected. He protests that since it performed better on four of the five measures, it should be showing up better in the value model. He concludes that something must be amiss with the subjective swing weights. To address his concern, one can construct a tornado diagram showing the results of varying each of the swing weights by (say) ±0.1, as shown in Figure 3.10 (the name and concept of tornado diagrams are from Howard, 1988). This is constructed by recalculating the multiattribute value of Bluebird with each swing weight above and below its baseline value, and showing the results as horizontal bars from the lower resulting value to the higher, sorted with the longest bar at the top. For instance, the baseline swing weight of Coverage is 0.35, so the low value would be 0.25. The other swing weights need to be adjusted to maintain the total of 1, while keeping the same proportions to each other, so the weights 0.09/0.06/0.3/0.2 become approximately 0.104/0.069/0.346/0.231. When these weights and Bluebird’s single-attribute value scores are put into Equation (3.1), the resulting multiattribute value is 46, which is the left-hand limit of the Coverage bar in Figure 3.10. When all the bars are plotted with the longer ones on top, they form a tornado shape centered on the baseline value, in this case 53.2. When they are compared to Albatross’s baseline value of 45, it is easy to see that the w _{Cov} = 0.25 case is the only one that comes close. This can focus the discussion on what Albatross’s score is with those weights (it is ~46.8) and whether it is really reasonable to consider such a low weight on Coverage. 
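The weight adjustment behind each tornado bar can be sketched as follows, using the baseline weights from the example:

```python
def adjust_weight(weights, i, new_wi):
    """Set weight i to new_wi and rescale the others proportionally
    so the full set still sums to 1, as in one-way swing-weight SA."""
    scale = (1 - new_wi) / (1 - weights[i])
    return [new_wi if j == i else w * scale for j, w in enumerate(weights)]

# Baseline weights from the example, Coverage first at 0.35
baseline = [0.35, 0.09, 0.06, 0.30, 0.20]
low_cov = adjust_weight(baseline, 0, 0.25)
# -> approximately [0.25, 0.104, 0.069, 0.346, 0.231], matching the text
```

Recomputing each alternative's Equation (3.1) score with the adjusted weights gives the endpoints of the corresponding tornado bar.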
The swing weights should have been determined by consensus in an elicitation process that included all stakeholders, including the Albatross program manager, and hopefully it will not be judged credible that the elicitation was off by so large a margin. In that case, the decision in favor of Bluebird is justified.
Figure 3.10 Tornado diagram example.
A tornado diagram is an excellent way to show the results of sensitivity analysis on many parameters varied one at a time. To look in more detail at the effect of varying one parameter, a sensitivity graph is appropriate. Such a graph shows alternative scores as a function of a single parameter. Suppose in the problem shown in Figure 3.6 one wanted to explore the impact of uncertainty in the probabilities. One could plot the expected value of each alternative as a function of probability, as shown in Figure 3.11. This example shows the x-axis extended to cover all possible values of probability, which has the advantage of guaranteeing that no possibility will be left out. The baseline estimates of probability for both Stocks and Bonds are shown with vertical lines. It is easy to see that the selection of the best alternative is strongly sensitive to the probability estimates, particularly in the case of Stocks. Even if one were willing to decide based on expected value, this graph might make one pause to consider how much faith one really had in one’s estimates of probability.
Figure 3.11 Investment decision expected outcomes as a function of probability of success.
Tornado diagrams and sensitivity graphs are only two of a large number of ways to present sensitivity analysis. The method used should be selected to explore whatever is most controversial, most doubtful, or most likely to change the result in the analysis. The purpose of SA is to give confidence that the recommended decision is unlikely to change as the result of any reasonable change in the input parameters. Failing that, SA will show which input parameters should be investigated more closely to get firmer estimates of their correct values.
All the quantitative techniques discussed in this chapter can be fairly easily implemented using standard spreadsheet software. However, there are dozens of commercial software packages available that can make the implementation even easier, provided one is careful to get a package that has the capabilities needed for the problem at hand. The magazine OR/MS Today publishes a very useful biennial survey of decision analysis software in alternating October issues (e.g., Oleson, 2016; Amoyal, 2018). This provides capability, pricing, training, and vendor information on a wide variety of DA tools.
The practical application of these methods was demonstrated in the US Army’s development of a multiattribute military value model for Army bases in response to Congress’s call for a round of Base Realignment and Closure (BRAC) in 2005 (Ewing, Tarantino & Parnell, 2006; Center for Army Analysis, 2004). This effort required an assessment of the value of Army bases that involved many disparate attributes: maneuver space, firing ranges, environmental impact, training facilities, expansion opportunity, mobilization and deployment capability, accessibility, logistics, local workforce, and so on. The decision to re-base Army units and close installations also had very significant impacts on many stakeholders, most notably all parts of the Army, the local communities near losing and gaining bases, and their political representatives. This made it a good candidate for formal multiattribute decision analysis. This example is interesting because of its scope and importance, and also because it extends the approach given in this chapter in a few ways to account for particular issues in BRAC. It is also of interest because research is underway now to generalize and improve the model in case Congress calls for another round of BRAC.
The project began with a research effort, including document reviews and interviews with senior leaders. Relevant Army, Department of Defense (DoD), Joint Service, and other documents were reviewed and summarized. Hour-long interviews were conducted with 36 senior Army leaders, following a carefully designed protocol. Based on this research, a qualitative value hierarchy was developed that divided overall military value into six capabilities that were important to an installation’s value (Support army and joint training transformation; Maintain future joint stationing options; Power projection for joint operations; Support army materiel and joint logistics; Achieve cost-efficient installations; Enhance soldier and family wellbeing). These were broken down into twelve capabilities at an intermediate level of the hierarchy, and finally into 40 value measures at the lowest level.
Of the 40 value measures, 26 were single-dimensional natural or constructed scales, and single-attribute value functions mapping each to value on a 0–10 scale were elicited. Some of these value functions were linear in form like Equation (3.2), but for others the experts consulted made a compelling argument that the function should be nonlinear. For these an exponential functional form was assumed and a midlevel splitting approach was used (Kirkwood, 1997):

$$v_i(x_i) = 10\,\frac{1 - \exp\!\left[-(x_i - x_i^-)/\rho_i\right]}{1 - \exp\!\left[-(x_i^+ - x_i^-)/\rho_i\right]} \qquad (3.3)$$
where $\rho_i$ is a parameter derived from the elicitations. The other 14 value measures had multidimensional constructed scales. The method of dealing with these can be shown by the example of Heavy Maneuver Area, which had two dimensions: Total Area and Largest Contiguous Area. The range of variation in each dimension was partitioned into four intervals (<10, 10–50, 50–100, and >100, in thousands of acres) and a 4 × 4 table built; value on a 10-point scale was then directly elicited for each entry in the table.
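As a rough sketch of how a midlevel-split exponential value function can be fit in practice: the expert names the level $x_{\text{mid}}$ worth exactly half the swing from $x_i^-$ to $x_i^+$, and $\rho_i$ is then found numerically. The function names, the bisection bracket, and the numbers below are illustrative assumptions, not part of the BRAC model:

```python
import math

def exp_value(x, x_lo, x_hi, rho, scale=10.0):
    """Exponential single-attribute value function of the Kirkwood (1997)
    form: 0 at x_lo, `scale` at x_hi, curvature controlled by rho."""
    num = 1.0 - math.exp(-(x - x_lo) / rho)
    den = 1.0 - math.exp(-(x_hi - x_lo) / rho)
    return scale * num / den

def fit_rho_midlevel(x_lo, x_hi, x_mid, iters=200):
    """Midlevel splitting: the expert names x_mid, the level worth exactly
    half the swing from x_lo to x_hi; bisect for the rho that honors it.
    Concave preferences (x_mid below the range midpoint) give rho > 0,
    convex preferences give rho < 0."""
    span = x_hi - x_lo
    t = (x_mid - x_lo) / span
    if abs(t - 0.5) < 1e-9:
        raise ValueError("x_mid at the midpoint implies a linear function")
    sign = 1.0 if t < 0.5 else -1.0
    lo, hi = 0.05 * span, 50.0 * span   # bracket on |rho|; keeps exp() safe
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        v = exp_value(x_mid, x_lo, x_hi, sign * mid, scale=1.0)
        if (v > 0.5) == (sign > 0):     # too much curvature: flatten
            lo = mid
        else:                           # too little curvature: sharpen
            hi = mid
    return sign * 0.5 * (lo + hi)
```

For example, if the expert says 30 (on a 0–100 natural scale) already captures half the value, the fitted $\rho_i$ is positive and the resulting function is concave.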
Swing weights for the 40 measures were elicited fundamentally as described in this chapter, but with the use of a swing weight matrix, as introduced by Trainor et al. (2004). A table was built with six columns for six levels of ability to change the measure, from immutable (e.g., Heavy Maneuver Area) to relatively easy to change with a modest expenditure (e.g., General Instructional Facilities). The table had three rows to represent high, medium, and low range of variation in the measures $(x_i^- \text{ to } x_i^+)$ among the alternatives. Each of the 40 measures was then assigned to one of the 18 table entries. Those measures that were judged immutable and had a high range of variation were given a non-normalized swing weight of 100, those in the opposite corner received a non-normalized swing weight of 1, and the others got appropriate intermediate weights. These weights were then normalized as in Equation (3.4).
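The mechanics of the matrix can be sketched as follows. The cell weights here are hypothetical (and the real matrix had six changeability columns, collapsed to three in this sketch); only the normalization step mirrors Equation (3.4):

```python
# Illustrative swing weight matrix: columns order how changeable a measure
# is, rows its range of variation among the alternatives. Cell values are
# hypothetical non-normalized swing weights, 100 at the immutable/high
# corner and 1 at the opposite corner.
SWING_MATRIX = {
    ("immutable", "high"): 100, ("immutable", "med"): 60, ("immutable", "low"): 25,
    ("moderate",  "high"): 55,  ("moderate",  "med"): 30, ("moderate",  "low"): 10,
    ("easy",      "high"): 20,  ("easy",      "med"): 8,  ("easy",      "low"): 1,
}

def normalized_swing_weights(assignments):
    """assignments: measure name -> (changeability, variation) cell.
    Looks up each measure's non-normalized weight in the matrix and
    normalizes so the weights sum to 1, as in Equation (3.4)."""
    raw = {m: SWING_MATRIX[cell] for m, cell in assignments.items()}
    total = sum(raw.values())
    return {m: r / total for m, r in raw.items()}
```

Placing a measure in a cell thus fixes its weight relative to every other measure, which is what makes the matrix a fast way to elicit 40 swing weights consistently.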
With the measures, value functions, and swing weights determined, the military value of all Army installations could be calculated according to Equation (3.1). A complete discussion of the results is available (DoD, 2005). The sound and convincing installation value model supported the Army’s recommendations to the BRAC 2005 Commission, 95% of which were accepted.
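Putting the pieces together, scoring an installation reduces to an additive model of the Equation (3.1) form. A minimal sketch, with invented measures, weights, and value functions (nothing here reflects the actual BRAC data):

```python
def overall_value(weights, value_functions, scores):
    """Additive multiattribute value: v(x) = sum_i w_i * v_i(x_i), with
    each single-attribute value on the common 0-10 scale."""
    return sum(w * value_functions[m](scores[m]) for m, w in weights.items())

# Invented two-measure example with linear value functions on 0-10.
weights = {"maneuver_area": 0.75, "rail_access": 0.25}
vfs = {
    "maneuver_area": lambda acres_k: 10 * min(acres_k, 100) / 100,
    "rail_access":   lambda spurs: 10 * min(spurs, 5) / 5,
}
print(overall_value(weights, vfs, {"maneuver_area": 50, "rail_access": 5}))
# prints 6.25, i.e. 0.75*5.0 + 0.25*10.0
```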
Though it has often proved very useful, there is still an irony in the practice of decision analysis. Resorting to the method at all indicates that the right decision is not obvious. But if the right decision is not obvious, the leading alternatives are probably close to one another in real value to the decision maker. Thus, it seems, DA is only necessary when it is not really very important. So why do we do it?
One answer is that sometimes the right answer becomes obvious only after some amount of analysis, problem structuring, and data collection. The complete model, whether an additive multiattribute value function or an elaborate decision tree, may not be needed. If that happens, and the DM comes to realize that he or she can be confident of the answer without further detailed modeling, that should be considered a great success for decision analysis. The analysis should proceed only as far as necessary to make the correct decision clear.
Another answer is that sometimes it is not simply a matter of convincing the decision maker. The chief requirement may be to make a public, transparent, and well-justified decision. It must be able to withstand criticism from a wide variety of stakeholders, who may be disappointed in the result and looking for grounds to question it. (This was certainly the case in the BRAC example.) In cases like this, one needs a sound decision modeling methodology, consistent with the best practices of decision analysis and also reasonably understandable by non-specialists. This chapter presents the basic methods of widest importance in multiattribute decision analysis in the defense sector. These will cover a large fraction of the problems that military decision analysts will face.