
Implementing the Balanced Scorecard Using the Analytic Hierarchy Process

THOMAS SAATY'S INNOVATIVE TECHNIQUE PROVIDES A SYSTEMATIC PROCESS FOR CHOOSING METRICS.

BY B. DOUGLAS CLINTON, PH.D., CPA; SALLY A. WEBBER, PH.D., CPA; AND JOHN M. HASSELL, PH.D., CPA

Companies using the balanced scorecard have achieved mixed success. We believe the difficulty lies with the process of choosing metrics and using them appropriately. To solve the dilemmas of how to systematically choose appropriate metrics and how to compare divisions with differing metrics, we suggest using a tool called the Analytic Hierarchy Process (AHP).

The AHP is a unique method that provides a means of choosing appropriate metrics in virtually any context. It uses a hierarchical approach to organize data for proper decision making. The method is easy to implement, simple to understand, and able to satisfy every requirement of metric choice and scorecard construction. The process can neatly capture the consensus of a potentially divergent group of managers and can be quickly and easily updated as desired. The AHP is a powerful yet simple tool that provides great promise toward implementing the balanced scorecard for practical use.

BALANCED SCORECARD CONCEPT

In the September 1997 issue of Management Accounting (now Strategic Finance), B. Douglas Clinton and Ko Cheng Hsu illustrated the importance of the balanced scorecard concept developed by Robert Kaplan and David Norton.1 Since the publication of Kaplan and Norton's 1992 Harvard Business Review article, companies have implemented the balanced scorecard with mixed results.2 They seem to have a difficult time choosing the proper metrics and then using them appropriately.

"The point to using the balanced scorecard as a management tool is not to adopt a specific set of metrics by cloning them from a particular list," wrote Clinton and Hsu.3 "The idea is to analyze each of these components (relationships and management perspectives) and consider how they link to strategy and link together to support a meaningful continuous improvement and assessment effort."

The balanced scorecard concept is the balanced pursuit of objectives in four key areas: customers, financial, internal business processes, and innovation and learning. The balanced scorecard provides summary-level data emphasizing only the metrics most critical to the company in each of the four areas. Kaplan and Norton are quick to point out that success in using the balanced scorecard is highly contingent on the ability of the chosen metrics to encourage action and appropriate change for the company.

In Kaplan and Norton's second balanced scorecard article, they emphasized the importance of tying the metrics to strategy.4 In their discussion, they presented a typical project profile for building a balanced scorecard, but they showed that the ways in which particular companies use the balanced scorecard often differ considerably, as do the benefits derived from the different approaches.

In more recent years, the balanced scorecard has grown into a tool used as a basis for developing a strategic management system. By observing how companies applied the balanced scorecard, Kaplan and Norton recognized how useful the approach can be in coupling strategy development with strategy implementation. Different methods of employing the balanced scorecard are used to clarify and update strategy, to communicate intentions, to align organization and individual goals, and to learn about and improve strategy. It is this very diversity in choosing metrics, however, that provides the central focus of problems in implementing and using the balanced scorecard. Unfortunately, no one really knows the best way to choose the metrics.

SYSTEMATICALLY CHOOSING METRICS

How should a company choose the metrics to use in each of the four categories? This question is made more difficult because it is almost certain that different metrics are appropriate for different divisions within a company. Although a useful goal is "balance," when, if ever, does one of the four areas become more important than another? If one of the four areas is treated as more important than another, how will the relative importance of the four areas be determined and communicated? How should top management go about meaningfully comparing divisions that are using different metrics and weighting the relative importance of the four areas? Will managers earning performance-based compensation tied to their individualized balanced scorecard perceive they are being treated fairly when compared to managers of other divisions? With each division using different metrics for bonus calculations, how can a company simplify the compensation process and provide important motivational incentives yet ensure the process is equitable and congruent with the goals of the company as a whole?

To solve the dilemma of implementing the balanced scorecard based on a method of systematically choosing metrics, several important issues need to be addressed. For example, the method must meet the following requirements:

● Handle subjective, qualitative judgments as well as quantitative data simultaneously;

● Be used with any metric (financial as well as nonfinancial, historical as well as future oriented) and allow comparison between categories;

● Be accurately descriptive in capturing what metrics users believe are critically important yet not dictate to them what should be important;

● Be easy enough for anyone involved in choosing metrics to use;

● Handle direct resource allocation issues, conduct cost/benefit analysis, and be used in designing and optimizing the strategic management system;

● Remain simple yet effective as a procedure, even in group decision making where diverse expertise and preferences must be considered;

● Be applicable in negotiation and conflict resolution by focusing on the relationships between relative benefits and costs for each of the parties involved;

● Be relevant and applicable to a dynamic, constantly changing organization;

● Be used for guiding and evaluating strategy, and for establishing a basis for performance evaluation and incentive-based compensation;

● Be mathematically rigorous enough to satisfy strict academicians and mathematically oriented practitioners and compensation specialists.

The tool that can satisfy all of these requirements is the Analytic Hierarchy Process (AHP). In fact, the issues listed above were adapted from the seminal book written by the creator of the AHP, Thomas L. Saaty, explaining what the AHP can do.5

PROBLEMS CHOOSING METRICS

Perhaps the most candid discussion of the successes and failures of the balanced scorecard was presented in 1999 by Arthur M. Schneiderman, former vice president of quality and productivity at Analog Devices, Inc. He was responsible for the development in 1987 of the first balanced scorecard that the company used, and he offers advice from his 12 years of experience as a "balanced scorecard process owner."6 He states that "a good scorecard can be the single most important management tool in Western organizations," but "the vast majority of so-called balanced scorecards fail over time to meet the expectations of their creators."7 Schneiderman lists six reasons why most balanced scorecard implementations fail (see Table 1). First on the list is the incorrect choice of variables as drivers of stakeholder satisfaction. In addressing this issue, Schneiderman states:

The difficulty in identifying scorecard metrics is compounded by the emerging requirements of nonowner stakeholders: employees, customers, suppliers, communities, and even future generations.8

Schneiderman mentions how difficult it is for organizations to clearly identify social responsibilities. At least part of the problem is the subjectivity involved. The subjectivity issue is also likely to create difficulties in defining the metrics. Even so, Schneiderman appears to recognize that value creation takes place at the activity level and recommends pushing metrics downward to lower organization levels. He argues that:

There is great value in even subjective agreement that if all of the goals of subordinate scorecards are achieved, then a higher-level goal will also be achieved, almost with certainty.9

Schneiderman is not the only balanced scorecard advisor to lament the difficulty with metric choice and the problems that result. About the same time, Mark Frigo and Kip Krumwiede published a study that rated performance metrics in the balanced scorecard framework. After examining survey data, they concluded that scorecard users rated about a third of customer and internal process area metrics as between "less than adequate" and "poor." In addition, "Only 16.8% rated customer metrics as 'very good to excellent,' and only 12.3% said their internal process metrics were 'very good to excellent.'"10

Like Schneiderman, Frigo and Krumwiede did not conclude that the balanced scorecard was a poor method or an inadequate tool. In fact, they concluded that firms using a balanced scorecard for performance measurement rated their performance measurement systems much higher on average than other companies.11 Thus we see even more evidence that choosing and properly defining metrics is a difficult task. Moreover, the vast majority of companies that have performance management systems are dissatisfied with their metrics, including those companies that employ the balanced scorecard. Clearly, a systematic process is needed to help companies choose metrics that measure results appropriately. This is where the AHP can help.

THE AHP METHOD

According to Saaty, the AHP is "a framework of logic and problem solving" that organizes data into a hierarchy of forces that influence decision results.12 The method is mathematically rigorous yet easy to understand because it focuses on making a series of simple paired comparisons. These comparisons are used to compute the relative importance of items in a hierarchy. Accordingly, the AHP can help decision makers understand which metrics are more important by indicating which metrics weigh more heavily on a decision than others.
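The article does not walk through the arithmetic, but a small sketch may help show how a set of paired comparisons becomes a set of relative weights. The 1-to-9 judgment values and the geometric-mean (row) approximation of the principal eigenvector are standard AHP conventions; the specific matrix entries below are purely hypothetical.

```python
import numpy as np

# Hypothetical paired-comparison matrix for the four scorecard categories
# (customers, financial, internal processes, innovation/learning).
# Entry [i][j] records how strongly item i dominates item j on a 1-9 scale;
# the matrix is reciprocal, so [j][i] = 1 / [i][j].
A = np.array([
    [1,   3,   2,   5],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/5, 1/2, 1/3, 1],
])

# Geometric-mean approximation of the principal eigenvector:
# take the geometric mean of each row, then normalize so the weights sum to 1.
row_gm = A.prod(axis=1) ** (1 / A.shape[0])
weights = row_gm / row_gm.sum()

labels = ["Customers", "Financial", "Internal processes", "Innovation/learning"]
for name, w in zip(labels, weights):
    print(f"{name}: {w:.2%}")
```

The weights sum to 100%, so they can be read directly as the relative importance of the four categories implied by the judgments.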

Managers typically will not use a method that they do not understand, and if they are forced to use such a method, they will likely not be happy about it. Unfortunately, the best methods (e.g., the most accurate or useful) are often also the most complex. In contrast, the AHP is simple, adaptable to groups and individuals, and natural to intuition and general thinking; it encourages compromise and consensus building and does not require a great deal of specialized knowledge to master. The AHP is able to deal with the subjectivity involved in attempting to define vague, judgmentally based metrics such as those associated with social responsibility. When a large number of metrics are possible, the AHP can help decision makers reduce the number of candidates.

Here is a general description of the AHP pertaining specifically to choosing balanced scorecard metrics.13 The AHP uses a series of paired comparisons in which users provide judgments about the relative dominance of two items. The term "dominance" here is a greater-than type of measure for a given attribute or quality. Dominance can be expressed in terms of importance, preference, equity, or some other criterion that causes the individual to believe that one item is better or more appropriate than another. This criterion need not be specified or readily definable to use the method; that is, the dominance decision could be based on experience, intuition, or some subjective judgment that perhaps the user cannot articulate.

The AHP assumes hierarchical relationships without compromising the relative balance that the balanced scorecard intrinsically encourages. In a balanced scorecard task, the first level of the hierarchy contains the four balanced scorecard categories. The second level of the hierarchy contains the metrics that are used to measure performance for each of the four categories.
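As an illustration of that two-level structure (a sketch not taken from the article, with hypothetical category and metric weights), the global weight of any metric can be taken as its within-category weight multiplied by its category's weight, so the weights across the whole scorecard still sum to one.

```python
# Hypothetical two-level balanced scorecard hierarchy: category weights at level one,
# metric weights within each category at level two (both derived from AHP comparisons).
category_weights = {"customers": 0.40, "financial": 0.25,
                    "internal processes": 0.20, "innovation and learning": 0.15}

metric_weights = {
    "customers": {"retention rate": 0.5, "satisfaction score": 0.3, "market share": 0.2},
    "financial": {"operating margin": 0.6, "revenue growth": 0.4},
    "internal processes": {"cycle time": 0.7, "defect rate": 0.3},
    "innovation and learning": {"training hours": 0.5, "new-product revenue %": 0.5},
}

# Global weight of a metric = category weight x metric weight within that category.
global_weights = {
    (category, metric): category_weights[category] * w
    for category, metrics in metric_weights.items()
    for metric, w in metrics.items()
}

# The hierarchy preserves "balance": all global weights still sum to 1.0.
assert abs(sum(global_weights.values()) - 1.0) < 1e-9
```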

The AHP can be used in two ways to implement a balanced scorecard: (1) at the beginning of the process, to help choose metrics, and (2) after the metrics are chosen, to help understand their relative importance to a firm’s managers and employees. We illustrate both in the following sections.

AHP AND METRIC SELECTION

Determining which metrics to use in a balanced scorecard requires selecting the set of metrics for each of the four areas mentioned earlier: customers, financial, internal business processes, and innovation and learning. The first decision is to choose who will be involved in selecting the metrics. It is likely that different personnel should be used in the metric identification process for each separate area. The participant selection process also requires deciding which levels in the organization will be used to select the metrics (e.g., plant level, division level, corporate level, or all three). The number of participants should be large enough to ensure that a sufficient number of potentially appropriate metrics are identified but small enough that the metric selection process does not become unwieldy. The metric selection process in each of the four areas can proceed either simultaneously or one area at a time.

Participants initially should be encouraged to brainstorm and use their experiences and expertise to identify all possible metrics in each area. After they identify the set of possible metrics, the next step is to reduce the list to a smaller number of metrics. If possible, all participants should meet in person or via conference call to discuss which metrics are most important.

After the list has been whittled down, the AHP facilitator uses the reduced set of metrics and has participants respond to the AHP task. Each participant is asked to compare all possible pairs of metrics in each of the four areas as to their relative importance using a written survey. From the participants' responses, the AHP facilitator computes a decision model for each participant that reflects the relative importance of each metric (the decision weights sum to 100%).14 After the AHP decision models are obtained, participants are given the decision models of the other participants and asked to reflect on their original metric choices. Then the group meets again to determine the final set of metrics for the scorecard.

An important feature of the AHP is that it produces an inconsistency index for each participant's decision model. AHP software points out where an individual's responses in making paired comparisons are inconsistent. The participant can then rethink the relative importance of possible factors.
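The article does not show how the inconsistency index is derived. The sketch below illustrates one common AHP convention, the consistency ratio, assuming Saaty's published random-index averages; the comparison matrix is again hypothetical.

```python
import numpy as np

# Hypothetical reciprocal comparison matrix (same form as the earlier sketch).
A = np.array([[1, 3, 2, 5], [1/3, 1, 1/2, 2], [1/2, 2, 1, 3], [1/5, 1/2, 1/3, 1]])

# Priority weights from the geometric-mean approximation.
w = A.prod(axis=1) ** (1 / len(A))
w = w / w.sum()

n = len(A)
lambda_max = (A @ w / w).mean()                     # estimate of the principal eigenvalue
ci = (lambda_max - n) / (n - 1)                     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random-index averages
cr = ci / ri[n]                                     # consistency ratio
print(f"CR = {cr:.3f}")  # values above about 0.10 usually prompt a rethink of the judgments
```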

Another strength of the AHP is that it keeps some possible metrics from being discarded prematurely. For example, assume that 12 participants identified seven possible metrics related to internal business processes. After the AHP task, assume that nine of the 12 participants rated metric number three as not important, but the other three participants rated the metric as very important. These decision weights would presumably cause much discussion among the participants. Though most of the participants initially believed that metric number three was not important, the AHP might suggest it be included in the balanced scorecard because of its relative importance to the other participants.

Once the balanced scorecard metrics are chosen, a balanced scorecard hierarchy is established. This hierarchy should be revisited during the strategic planning process and adjusted commensurate with changes in strategic direction and operational tactics. Figure 1 presents a hypothetical structure for a balanced scorecard that comes from using the AHP.

The strategic objective on which this balanced scorecard is based is success in pursuing a differentiation strategy. Presenting an overall strategic objective on the face of the scorecard serves as a reminder and reality check. For those ultimately using the scorecard, this information can be helpful to keep in mind in the final evaluation of the chosen metrics. As shown, the four scorecard categories are not required to have a prescribed number of metrics per category or the same number of metrics across categories.

The two-step procedure used to determine the overall performance involves (1) using the AHP to compute decision weights of relative importance for each metric and (2) using an algorithm to compute an overall performance score. The following example illustrates these two steps in detail.

THE ANALYTIC HIERARCHY PROCESS

Step one uses the AHP to compute decision weights of relative importance for each metric. While a relatively small number of individuals may have participated in the metric selection process, determining how to weight the relative importance of the categories and metrics can involve as many individuals as desired from all levels within the organization. These individuals proceed through the AHP paired-comparison procedure, first comparing the relative importance of the four balanced scorecard categories in the first level of the AHP hierarchy. The respondents may want to consider the current product life-cycle stage when making this determination. For example, while in the product introduction stage, formalizing internal business processes may be of considerable relative importance, but when dealing with a mature or declining product, the desire to minimize variable cost per unit may dictate that the financial category be of greater importance than the three other scorecard categories. An illustration of the paired comparison process for level one (the category level) is presented in Figure 2.

For the second level of the AHP hierarchy, identifying the relative importance of the metrics for the respective scorecard categories, the process is identical to that used for level one. Paired comparisons are made between all combinations of the metrics proposed within each scorecard category. A sample survey instrument showing a complete set of questions for both levels is presented in Figure 3. By examining the scales, you will notice that all possible paired comparisons are made. For level one, this comparison includes all paired combinations of the four scorecard categories. For level two, the comparisons are between each paired combination of the three metrics proposed for each of the respective four scorecard categories.

In Figure 3, we use three metrics in each balanced scorecard category. The choice of three metrics and the specific metrics chosen are merely examples. Limiting our choices to three metrics for each category is simply for convenience in using a small number of paired comparisons in our illustration, and the number of metrics in each category is not constrained to three. As shown in Figure 1, the number of metrics can vary across categories. Potential metrics can be obtained through benchmarking processes, brainstorming sessions, or other methods such as historical analysis. For companies already using scorecards, their existing list of metrics is a natural starting point.

Step two uses the decision weights to compute an overall performance score, which also gives top management a way to compare the performance of divisions that use different sets of metrics. Because the relative AHP weights for each set of metrics sum to 1.00 (or 100%), the company could devise a performance measurement system that makes use of the weights to derive an overall performance score. For example, a division might be rated on each metric as excellent (1.00), good (0.75), fair (0.50), or poor (0.25), and those ratings then could be weighted using the AHP model. A division that received excellent ratings for all scorecard metrics would thus have an overall performance score of 1.00 (or 100%). Computing an overall performance score would work to alleviate the tendency of managers to ignore some metrics. Marlys Lipe and Steven Salterio showed that when balanced scorecards included different metrics for each division, managers who were asked to evaluate the various divisions seemed to consider only the metrics that were common to all divisions' scorecards. They also did not appear to include or consider the information related to unique scorecard metrics.16
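A minimal sketch of the scoring algorithm described above, using the rating scale from the example (excellent = 1.00, good = 0.75, fair = 0.50, poor = 0.25); the metric names, weights, and division ratings are hypothetical.

```python
# Overall divisional performance score: weight each metric's rating by its AHP-derived
# global weight and sum. A division rated excellent on everything scores 1.00 (100%).
RATING = {"excellent": 1.00, "good": 0.75, "fair": 0.50, "poor": 0.25}

# Hypothetical global weights (category weight x within-category metric weight), summing to 1.0.
weights = {"retention rate": 0.20, "satisfaction score": 0.20,
           "operating margin": 0.15, "revenue growth": 0.10,
           "cycle time": 0.15, "defect rate": 0.05,
           "training hours": 0.10, "new-product revenue %": 0.05}

# Hypothetical ratings for one division on its own scorecard metrics.
division_ratings = {"retention rate": "excellent", "satisfaction score": "good",
                    "operating margin": "good", "revenue growth": "fair",
                    "cycle time": "excellent", "defect rate": "good",
                    "training hours": "fair", "new-product revenue %": "poor"}

score = sum(weights[m] * RATING[r] for m, r in division_ratings.items())
print(f"Overall performance score: {score:.2%}")  # comparable across divisions
```

Because each division's weights sum to 100%, the resulting score is on the same 0-to-1 scale for every division, even when the underlying metrics differ.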

STRENGTH OF AHP METHOD

The AHP approach is well suited to implementing the balanced scorecard process. It can be used to help companies choose appropriate metrics and derive weights of relative importance for both categories and metrics. The AHP is able to satisfy the requirements of metric choice and scorecard construction mentioned earlier. The method is simple to understand and easy to implement, and a relatively large number of possible metrics could be included. Even if the decision maker has a good understanding of the metrics and their features, the task of narrowing this list can get confusing when trying to make trade-offs among many metrics at once. Because the AHP always makes comparisons in pairs, this task is reduced to a manageable level.

The AHP easily handles qualitative and quantitative metrics simultaneously while incorporating subjective elements of the choice process that may be so deeply latent to the respondents' underlying thought processes that the respondents are unable to articulate them. The AHP also is able to neatly capture the consensus of a potentially divergent group of managers and can be quickly and easily updated as desired. These facets make it a powerful yet simple tool for choosing balanced scorecard metrics. Using the AHP is likely to lead to better balanced scorecard outcomes. ■

B. Douglas Clinton, Ph.D., CPA, is an associate professor of accountancy at Northern Illinois University. He can be reached at (815) 753-6804.

Sally A. Webber, Ph.D., CPA, is an associate professor of accountancy at Northern Illinois University. She can be reached at (815) 753-6212.

John M. Hassell, Ph.D., is a professor of accountancy at Indiana University, Indianapolis. He can be reached at (317) 274-4805.

1 B. D. Clinton and K. C. Hsu, "JIT and the Balanced Scorecard: Linking Manufacturing Control to Management Control," Management Accounting, September 1997, pp. 18-24.
2 R. S. Kaplan and D. P. Norton, "The Balanced Scorecard—Measures that Drive Performance," Harvard Business Review, January-February 1992.
3 Clinton and Hsu, Management Accounting, p. 24.
4 R. S. Kaplan and D. P. Norton, "Putting the Balanced Scorecard to Work," Harvard Business Review, September-October 1993.
5 T. L. Saaty, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process, RWS Publications, Pittsburgh, Pa., 1994.
6 A. M. Schneiderman, "Why Balanced Scorecards Fail," Journal of Strategic Performance Measurement, January 1999.
7 Schneiderman, Journal of Strategic Performance Measurement, pp. 6-7.
8 Schneiderman, Journal of Strategic Performance Measurement, p. 7.
9 Schneiderman, Journal of Strategic Performance Measurement, p. 9.
10 M. L. Frigo and K. R. Krumwiede, "Balanced Scorecards: A Rising Trend in Strategic Performance Measurement," Journal of Strategic Performance Measurement, February-March 1999, p. 44.
11 Frigo and Krumwiede, Journal of Strategic Performance Measurement, p. 44.
12 Saaty, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process, p. 5.
13 The method described reflects a sample of how the AHP could be used in choosing metrics. Any mistakes in interpretation of the method or its use are the sole responsibility of the authors of this article and do not necessarily represent the endorsement of the creator of the AHP, Thomas L. Saaty.
14 Software used to compute the AHP decision weights is provided by Expert Choice, Inc., Pittsburgh, Pa.
15 Expert Choice, Inc., Pittsburgh, Pa., 1992.
16 M. G. Lipe and S. E. Salterio, "The Balanced Scorecard: Judgmental Effects of Common and Unique Performance Measures," Accounting Review, July 2000, pp. 283-298.
