Step 3: Scoring the Options

A3.12  The third stage is to score each option against each criterion on a suitable scale. The approach described here uses a cardinal scale. This means that if Option A is considered to perform three times as well as Option B, then Option A is given a score that is three times that of Option B. Simpler alternatives to cardinality are possible; for example, an ordinal scale may be used. This provides a simple ranking of options against each criterion, which enables one to say that Option A is better than Option B, but it does not indicate how much better A is than B. Such an approach may be useful in some circumstances, but a cardinal approach, if sustainable, is more informative.

A3.13  Options are scored against the criteria by reference to a scale, say from 0 to +20. A score of 0 will indicate that the option offers no benefits at all in terms of the relevant criterion, while a score of +20 will indicate that it represents some "maximum" or "ideal" level of performance. Scores between 0 and +20 will indicate intermediate levels of performance. The scale used does not have to be from 0 to +20, but mathematical consistency demands that the same scale is used for all criteria. The meaning of the maximum and minimum score should always be clearly defined and the whole scoring system should be documented clearly in the appraisal report. Group members should have a common understanding of it.
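To illustrate, a minimal Python sketch (illustrative only; the endpoint wording for 'waiting time' follows the example in paragraph A3.14, and the helper function is hypothetical rather than part of this guidance) of how the common scale and the meaning of its endpoints might be recorded so that all group members work from the same definitions:

```python
# Illustrative sketch: recording the common scoring scale and what its
# endpoints mean, so the definitions can be documented in the appraisal report.
SCALE_MIN, SCALE_MAX = 0, 20   # the same scale must be used for every criterion

# Endpoint definitions for each criterion.
scale_definitions = {
    "Waiting time": {
        SCALE_MIN: "Completely unacceptable waiting time (12 months or more)",
        SCALE_MAX: "Waiting time at or close to zero",
    },
    # ... further criteria, each defined on the same 0 to +20 scale
}

def check_score(criterion: str, score: float) -> float:
    """Reject any score that falls outside the agreed common scale."""
    if not SCALE_MIN <= score <= SCALE_MAX:
        raise ValueError(f"{criterion}: score {score} lies outside "
                         f"{SCALE_MIN}-{SCALE_MAX}")
    return score
```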

A3.14  To achieve cardinality, the group needs to think carefully about the differences in the scores awarded to the options, and to provide meaningful justification for them. Suppose, for example, that the criterion 'waiting time' refers to the speed of delivery of a particular service, and that options are scored on a scale from 0 to +20. The group has decided that a score of 0 represents a waiting time that is completely unacceptable, e.g. 12 months or more, while a score of +20 represents a waiting time at or close to zero. If Option C delivers in 3 months and Option D delivers in 6 months then, using the scale as defined and interpolating linearly between its two endpoints, it would be reasonable to award Options C and D scores of 15 and 10 respectively. In another example, where the criterion is 'accessibility', it may be possible to justify different scores on the basis of objective information about differences in distances travelled.
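The arithmetic behind those scores can be made explicit. The sketch below (a hypothetical helper, not part of the guidance) interpolates linearly between the two endpoints of the waiting-time scale and reproduces the scores of 15 and 10 for Options C and D:

```python
def waiting_time_score(months: float,
                       worst_months: float = 12.0,
                       scale_max: float = 20.0) -> float:
    """Score a waiting time on the 0 to +20 scale by linear interpolation:
    12 months or more scores 0, a waiting time of zero scores 20."""
    if months >= worst_months:
        return 0.0
    return scale_max * (worst_months - months) / worst_months

print(waiting_time_score(3))   # 15.0  (Option C, delivery in 3 months)
print(waiting_time_score(6))   # 10.0  (Option D, delivery in 6 months)
```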

A3.15  The weighted scoring method should not be used to avoid the effort of measuring differences between options in measurable non-monetary units. Nor should it be used to substitute vague subjective judgments of comparative performance for hard measurement. The credibility of the scores depends upon the provision of a rational justification to support them, including measurement where possible. In any case, project sponsors must be able to provide justification for each and every score that is awarded, and SGHD will expect this to be recorded in full detail.

A3.16  Scores should be allocated to all of the options, including the baseline option (i.e. the status quo or 'do minimum'). A common error has been to overlook the baseline, but it is important to include it. However inadequate it may seem, the existing or 'do minimum' level of service will normally impact on the criteria to some extent, and scoring this helps to give a sense of proportion to the scores of the other options, and to compare their performance to that of the current or minimum level of provision.

Example: The health service group scores four options against the criteria as follows:

Criterion                 Option P (Status Quo)   Option Q   Option R   Option S
No. of cases treated      5                       10         12         15
Waiting Time              8                       12         14         16
Patient access            10                      10         15         15
Disruption to services    15                      5          5          10
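For later weighting and ranking, the same scores can be held in a simple data structure. The sketch below (illustrative only; the structure and consistency checks are not prescribed by the guidance) records the example table and confirms that every option, including the baseline Option P, has been scored on the common 0 to +20 scale:

```python
# Scores from the example table, including the baseline Option P (status quo).
scores = {
    "No. of cases treated":   {"P": 5,  "Q": 10, "R": 12, "S": 15},
    "Waiting Time":           {"P": 8,  "Q": 12, "R": 14, "S": 16},
    "Patient access":         {"P": 10, "Q": 10, "R": 15, "S": 15},
    "Disruption to services": {"P": 15, "Q": 5,  "R": 5,  "S": 10},
}

options = {"P", "Q", "R", "S"}
baseline = "P"   # the status quo / 'do minimum' option

for criterion, row in scores.items():
    # Every option, including the baseline, must receive a score ...
    assert set(row) == options, f"missing scores for {criterion}"
    # ... and every score must sit on the common 0 to +20 scale.
    assert all(0 <= s <= 20 for s in row.values()), f"score out of range: {criterion}"

assert baseline in options
```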