


Simple Additive Weighting

Simple Additive Weighting (SAW) was developed in the 1950s by Churchman and Ackoff [ChurchmanAckoff1954]; it is the simplest of the MADM methods, yet still one of the most widely used since it is easy to implement. SAW, also called the weighted sum method [Fishburn1967], is a straightforward and easily executed process.

Methodology

Given a set of n alternatives and a set of m criteria for choosing among the alternatives, SAW creates a function for each alternative rating its overall utility. Each alternative is assessed with regard to every criterion (attribute), giving the matrix M = [m_ij], where m_ij is the assessment of alternative i with respect to criterion j. Each criterion is given a weight w_j; the sum of all weights must equal 1; i.e., sum_{j=1}^{m} w_j = 1. If the criteria are equally weighted, then we merely need to sum the alternative values. The overall or composite performance score P_i of the ith alternative with respect to the m criteria is given by

    P_i = sum_{j=1}^{m} w_j m_ij,   for i = 1, ..., n.

Write P = [P_i] using matrix/vector notation as P = Mw, where w = [w_j]. The alternative with the highest value of P_i is the best relative to the chosen criteria weighting. Originally, all the units of the criteria had to be identical, such as dollars, pounds, seconds, etc. A normalization process making the values unitless relaxes this requirement. We recommend always normalizing the data.

Strengths and Limitations

The main strengths are (1) ease of use, and (2) normalized data allows for comparisons across many differing criteria. Limitations include "larger is always better" or "smaller is always better." The method lacks flexibility in stating which criterion should be larger or smaller to achieve better performance, thus making it essential to gather data with the same relational schema (larger or smaller) for every criterion.

Sensitivity Analysis

Sensitivity analysis should be used to determine how sensitive the model is to the chosen weights.
A decision maker can choose arbitrary weights, or choose weights using a method that performs pairwise comparisons (as done with the analytic hierarchy process discussed later in this chapter). Whenever weights are chosen subjectively, sensitivity analysis should be carefully undertaken. Later sections investigate techniques for sensitivity analysis that can be applied to individual criteria weights.

Examples of SAW

A Maple procedure, SAW, to compute rankings using simple additive weighting is included in the book's PSMv2 package. The parameters are the matrix of criteria data for each alternative and the weights, given either as a vector or as a comparison matrix.
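As a rough sketch of what such a procedure computes, here is the core SAW calculation in Python rather than Maple (the function name and data are illustrative, not the book's SAW procedure):

```python
# Minimal sketch of simple additive weighting: P_i = sum_j w_j * m_ij,
# i.e., P = M w. Hypothetical helper name and data, not the PSMv2 code.

def saw_scores(M, w):
    """Composite performance score for each row (alternative) of M."""
    assert abs(sum(w) - 1.0) < 1e-9, "criteria weights must sum to 1"
    return [sum(wj * mij for wj, mij in zip(w, row)) for row in M]

# Three alternatives, two equally weighted criteria.
M = [[4.0, 2.0],
     [3.0, 5.0],
     [1.0, 1.0]]
P = saw_scores(M, [0.5, 0.5])
best = max(range(len(P)), key=P.__getitem__)
print(P)     # [3.0, 4.0, 1.0]
print(best)  # 1 -- the second alternative has the highest composite score
```

The alternative with the largest P_i wins under the chosen weighting; everything else in the method is preprocessing (normalization) and choosing w.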
To examine the program's code, use print(SAW). Example 8.3. Selecting a Car. It's time to purchase a new car. Six cars have made the final list: Ford Fusion, Toyota Prius, Toyota Camry, Nissan Leaf, Chevy Volt, and Hyundai Sonata. There are seven criteria for our decision: cost, mileage (city and highway), performance, style, safety, and reliability. The information in Table 8.7 was collected online from the Consumer Reports and US News and World Report websites. TABLE 8.7: Car Selection Data
Initially, we assume all criteria are weighted equally to obtain a baseline ranking. Even though the different criteria values are relatively close, let's normalize the data for illustration. There are three typical methods to use: dividing by the column maximum, n_ij = m_ij/M_j; min-max scaling, n_ij = (m_ij - m_j)/(M_j - m_j); and rank-order scaling; where M_j and m_j are the maximum and minimum values in the jth column, and rank_j is the rank order of the jth column. Our SAW program from the PSMv2 package uses the first method. We'll exclude the cost data from our first baseline ranking since larger cost is worse, whereas for all the other data larger is better. SAW requires consistent criteria ranking. The SAW function will return a matrix of rankings: the first row is a raw ranking, the second row is normalized so the largest ranking is 1. We'll apply jnormal to the rankings and add a legend to make it easier to read the results. Our rank ordering with equal weighting is Fusion, Camry, Sonata, Prius, Volt, and Leaf. Let's add cost to the ranking. In order to match "larger is better," invert cost by c_j -> 1/c_j, then append this criterion to the data.
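The two formula-based normalizations, plus the cost inversion, can be sketched in a few lines of Python (hypothetical helper names; the PSMv2 SAW procedure uses the first method, dividing by the column maximum):

```python
# Sketch of two common normalizations for a decision matrix M = [m_ij].
# Names and data here are illustrative, not the book's Maple code.

def normalize_by_max(M):
    """n_ij = m_ij / M_j, where M_j is the maximum of column j."""
    maxs = [max(row[j] for row in M) for j in range(len(M[0]))]
    return [[v / maxs[j] for j, v in enumerate(row)] for row in M]

def normalize_minmax(M):
    """n_ij = (m_ij - m_j) / (M_j - m_j), using column min m_j and max M_j."""
    cols = list(zip(*M))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - lo[j]) / (hi[j] - lo[j]) for j, v in enumerate(row)]
            for row in M]

# Inverting a "smaller is better" criterion (cost) so larger is better:
costs = [23000, 27000, 24000]
inverted = [1.0 / c for c in costs]

N = normalize_by_max([[2.0, 10.0],
                      [4.0,  5.0]])
print(N)  # [[0.5, 1.0], [1.0, 0.5]]
```

After inversion and normalization, every column obeys the same "larger is better" schema that SAW requires.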
Adding cost to the criteria considered changed the ranking to Fusion, Camry, Sonata, Prius, Leaf, and Volt. Only Leaf and Volt changed places. Should cost weigh more than the other criteria? A weighting vector can be created from pairwise preference assessments. This technique was introduced by Saaty in 1980 when he developed the analytic hierarchy process that we'll study in Section 8.4. Decide which item of the pair is more important and by how much using the scale of Table 8.8. TABLE 8.8: Saaty's Nine-Point Scale
If Criterion_i is k times as important as Criterion_j, then Criterion_j is 1/k, the reciprocal, times as important as Criterion_i. Begin by comparing cost to the other criteria.
Make all the pairwise assessments creating an upper triangular comparison matrix.
Note that if cost is 3 (weakly more important) to performance, then performance is 1/3, the reciprocal value, to cost. Use reciprocals to fill in the lower triangle of the matrix, so that if m_ij = k, then m_ji = 1/k.
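Completing the comparison matrix from its upper triangle is mechanical; a short Python sketch (illustrative helper name and data):

```python
# Sketch: complete a pairwise comparison matrix from its upper triangle.
# Diagonal entries are 1, and m_ji = 1/m_ij for the lower triangle.

def complete_reciprocal(upper):
    n = len(upper)
    M = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            M[i][j] = float(upper[i][j])
            M[j][i] = 1.0 / upper[i][j]
    return M

# e.g., cost is 3 (weakly more important) compared to performance:
U = [[1, 3],
     [0, 1]]          # entries below the diagonal are ignored
CM = complete_reciprocal(U)
print(CM[1][0])       # 0.333... = 1/3, performance vs. cost
```

The result is a positive reciprocal matrix, the object the next paragraph analyzes.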
A comparison matrix is a positive reciprocal matrix. This type of matrix has a dominant positive eigenvalue and eigenvector. That eigenvector will be our weighting vector. The SAW program will compute (approximate) the weighting vector from a comparison matrix using an abbreviated power method. We must ensure that the comparison assessments are consistent; i.e., if a is preferred to b and b is preferred to c, then a is preferred to c. Following Saaty's scheme, compute the consistency ratio CR as a test. The value of CR must be less than or equal to 0.1 to be considered consistent. If CR > 0.1, the preference choices must be revisited and adjusted. First compute the largest eigenvalue λ of the n×n comparison matrix. Then calculate the consistency index

    CI = (λ - n)/(n - 1).

Now determine the consistency ratio CR = CI/RI where RI, the random index (see [Saaty1980]), is taken from

    n:   1     2     3     4     5     6     7     8     9     10
    RI:  0.00  0.00  0.58  0.90  1.12  1.24  1.32  1.41  1.45  1.49
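An abbreviated power method of the kind the text describes can be sketched in Python (illustrative code, not the PSMv2 implementation): repeatedly multiply a normalized vector by the matrix; the vector converges to the dominant eigenvector, which, rescaled to sum to 1, is the weight vector.

```python
# Sketch of a power method for a positive reciprocal comparison matrix.
# Returns the dominant eigenvalue and the weight vector (sums to 1).

def power_method(M, iters=100):
    n = len(M)
    v = [1.0 / n] * n                 # start with a uniform vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)                  # since v sums to 1, sum(Mv) -> lambda
        v = [x / lam for x in w]      # renormalize so entries sum to 1
    return lam, v

# A perfectly consistent 3x3 example (m_ij = w_i / w_j for w = 1/2, 1/3, 1/6):
CM = [[1.0, 1.5, 3.0],
      [2/3, 1.0, 2.0],
      [1/3, 0.5, 1.0]]
lam, w = power_method(CM)
print(round(lam, 6))                  # 3.0 (equals n for a consistent matrix)
print([round(x, 4) for x in w])       # [0.5, 0.3333, 0.1667]
```

For a perfectly consistent matrix λ = n exactly; real, subjective comparison matrices give λ slightly above n, which is what the consistency index measures.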
Enter the comparison matrix in Maple, calling it CM. Use the LinearAlgebra package to find that the dominant eigenvalue of CM is λ ≈ 7.392. Thus, the consistency ratio is
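The arithmetic behind that ratio is a one-liner; a Python sketch using λ ≈ 7.392 from the example and Saaty's published random index values:

```python
# Sketch of Saaty's consistency test: CI = (lam - n)/(n - 1), CR = CI/RI.
# RI values are the standard random indices from [Saaty1980].

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(lam, n):
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# Car example: 7 criteria, dominant eigenvalue lambda ~ 7.392.
cr = consistency_ratio(7.392, 7)
print(round(cr, 3))   # 0.049 -- below 0.1, so the comparisons are consistent
```

Anything at or below 0.1 passes; above 0.1, the pairwise judgments should be revisited.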
Our CR is well below 0.1; we have a consistent prioritization of our criteria. Use SAW once more to find our new rankings.
Preference ranking the criteria changed the result to Camry, Sonata, Fusion, Prius, Leaf, and Volt. The leaders changed places. Since the importance values chosen are subjective judgments, sensitivity analysis is a must. The sensitivity analysis for this example is left as an exercise. A Krackhardt "Kite network," shown in Figure 8.1, is a simple graph with 10 vertices that has three different answers to the question, "Which vertex is central?" depending on the definition of "central." Krackhardt introduced the graph in 1990 as a fictional social network. Example 8.4. Krackhardt's Kite Network. In the Kite network, Susan is "central" as she has the most connections, Steve and Sarah are "central" as they are closest to all the others, and Claire is "central" as a critical connection between the largest disjoint subnetworks. ORA-PRO, "a tool for network analysis, network visualization, and network forecasting," returns the data in Table 8.9 for the kite. Use SAW to rank the nodes. After consulting with several network experts and combining their comparison matrices, we have the weighting vector w = [TC, BTW, EC, INC]
FIGURE 8.1: Krackhardt’s “Kite Network” TABLE 8.9: ORA Metric Measures for the Kite Network
Table Legend: TC - Total Centrality; BTW - Betweenness; EC - Eigenvector Centrality; INC - Information Centrality. The consistency ratio for the combined comparison matrix is 0.003, well below 0.1. It's time for Maple.
The results are easier to parse when we sort the array. The program MatrixSort is in the PSMv2 package with syntax MatrixSort(Matrix, (row/col), (options)). The options are sortby=’row’/’column’ and order=’ascending’/’descending’ with defaults ’row’ and ’ascending’.
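The book's MatrixSort is a Maple procedure in PSMv2; a rough Python analogue (hypothetical helper name, labels, and scores) that reorders ranking columns in descending order looks like:

```python
# Sketch of a MatrixSort-style helper: reorder the columns of a rankings
# matrix by the values in a chosen row. Data below is hypothetical.

def sort_columns_by_row(M, labels, row, descending=True):
    order = sorted(range(len(labels)), key=lambda j: M[row][j],
                   reverse=descending)
    return [labels[j] for j in order], [[r[j] for j in order] for r in M]

labels = ["Susan", "Steve", "Sarah", "Claire"]   # illustrative node names
scores = [[0.9, 0.7, 0.8, 0.6]]                  # hypothetical SAW rankings
names, ranked = sort_columns_by_row(scores, labels, 0)
print(names)   # ['Susan', 'Sarah', 'Steve', 'Claire']
```

Sorting by the normalized ranking row and carrying the labels along makes the final rank order readable at a glance.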
We see the resulting rank order for “overall centrality” is
Sensitivity Analysis. We can apply sensitivity analysis to the weights to determine how changes impact the final rankings. We recommend using an algorithmic method to modify the weights. For example, if we reverse the weights for TC (total centrality) and BTW (betweenness), the rankings change to
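One simple algorithmic scheme of the kind recommended above is to swap a pair of criteria weights and recompute the rankings; a Python sketch with hypothetical data shows how a swap can reorder alternatives:

```python
# Sketch of weight-swap sensitivity analysis for SAW. The decision matrix
# and weights are hypothetical, chosen only to show a rank reversal.

def saw(M, w):
    return [sum(wj * mij for wj, mij in zip(w, row)) for row in M]

def rank_order(scores):
    """Indices of alternatives from best to worst."""
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)

M = [[0.9, 0.4],
     [0.6, 0.8],
     [0.5, 0.7]]
w = [0.7, 0.3]

base = rank_order(saw(M, w))
swapped = rank_order(saw(M, [w[1], w[0]]))   # swap the two criteria weights
print(base, swapped)   # [0, 1, 2] vs [1, 2, 0] -- the top alternative changes
```

When a small perturbation like this flips the leader, the original ranking should be treated with caution and the weights examined more closely.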
Susan is still the "top node," but Claudia and Steve have swapped for second; Bob is now above Claire, rather than tied.

Exercises

Use SAW in each problem to find the ranking under the given weights:
2. Rank order Hospital B’s procedures using the data below.
3. A college student is planning to move to a new city after graduation. Rank the cities in order of best-to-move-to given the following data.
LEGEND: Housing Affordability: avg. home cost in $100,000s; Cultural Opportunity: events per month; Crime Rate: # crimes reported per month in 100s; Quality of Schools: index in [0,1].
4. Rank order the threat information collected by the Risk Assessment Office that is shown in Table 8.1 (pg. 339) for the Department of Homeland Security.