20.7.1 The specification even of relatively straightforward requirements such as those relating to heating or lighting can be difficult and subject to dispute after Contract signature; the appropriate level of contractual detail reflects considerations of practicability and clarity, and Authorities should consider these issues carefully for all elements of the payment mechanism. Objective and well-defined performance criteria should always be used as far as practicable, but other methods of measuring performance may be considered and may offer complementary requirements within the overall mechanism.
20.7.2 In some projects there may be qualitative aspects of performance which it may be difficult to measure objectively but which are nevertheless important to the users of the Service, such as the helpfulness of staff. More generally, the quality of service in a complex setting such as a hospital or school cannot easily be wholly reduced to a practical set of availability and performance criteria.
20.7.3 The most straightforward mechanism for measuring such qualitative aspects is a regular customer satisfaction survey, with deductions for a low or falling score. It is commonly argued that it is difficult to base financial compensation on customer satisfaction surveys because they are based on individuals' perceptions rather than hard measurable facts, so the results may be variable; moreover, they may be vulnerable to manipulation. However, surveys are a useful way of monitoring performance, and have been used successfully in a number of projects in a variety of sectors, albeit that the maximum deduction is generally a relatively small element of the overall Unitary Charge. Examples include deductions for poor satisfaction scores from head teachers in schools projects, deductions for low scores on user satisfaction surveys in housing projects, and requirements for the Contractor to carry out a performance audit and prepare a remedial plan in the event of low user satisfaction. The main advantage of such a system is that the feedback obtained can be very useful as an incentive to good Service provision.
20.7.4 Although many of the old Best Value obligations have been repealed, a Local Authority (and other Best Value bodies) still has a general Duty of Best Value to "make arrangements to secure continuous improvement in the way in which its functions are exercised, having regard to a combination of economy, efficiency and effectiveness." This would commonly involve carrying out periodic user satisfaction surveys to compare the quality of service under the Contract against the quality of comparable services elsewhere. The following recommendations are designed to sit alongside the Local Authority duties and apply independently for central government PF2 Contracts.
20.7.5 A variety of mechanisms have been used successfully in the past to reflect the importance of qualitative factors in the payment mechanism.8 All projects9 should as a minimum include provision for regular user satisfaction surveys (at least annual, albeit that they may be conducted on a rolling basis), to be paid for by the Contractor. These would usually be carried out by the Contractor, or by an independent third party (under contract to the Contractor).10 In the former case, the Authority should have the option to commission its own survey from an independent third party in the event of its dissatisfaction with the Contractor's own survey, such option to be exercised at the expense of the Authority and such survey to take precedence over the Contractor's survey unless the Authority agrees otherwise. The intention is that even if the results of the survey have no direct financial impact, this information is useful management information for both the Authority and the Contractor. Failure to carry out the survey should itself trigger a penalty under the payment mechanism.
20.7.6 Suitable drafting for the conduct of such a survey is set out below:
20.7 Customer Satisfaction Survey
The Contractor shall, on each Customer Satisfaction Survey Date, undertake (or procure the undertaking of) a customer satisfaction survey (the Customer Satisfaction Survey), the purpose of which shall include:
a) assessing the level of satisfaction among Service Users with the Services (including the way in which the Services are provided, performed and delivered) and, in particular, with the quality, efficiency and effectiveness of the Services;
b) assisting in the preparation of the Contractor's Annual Service Report and Annual Service Plan; and
c) monitoring the compliance by the Contractor with the Services Specification.
The Customer Satisfaction Survey shall be undertaken in accordance with Part [ ] of [Schedule 5] (Payment mechanism).
"Customer Satisfaction Survey"
has the meaning given to it in clause 20.7.1 (Customer Satisfaction Survey);
"Customer Satisfaction Survey Date"
means [INSERT DATE OF FIRST SURVEY] and each anniversary of that date during the Contract Period;
"Contractor's Annual Service Report"
means a report to be prepared by the Contractor each year reporting on service delivery, including any performance failures and deductions incurred in the previous year;
"Contractor's Annual Service Plan"
means a report to be prepared by the Contractor each year identifying and setting out a plan for improvement in the delivery of the Services to be implemented during the forthcoming year;
20.7.7 Contracts should also include a requirement for the production of a remediation plan, by the Contractor at its own expense, in the event of low satisfaction. This plan should set out the Contractor's view of why satisfaction was measured as low in the survey, its planned actions to improve it insofar as it (in the Contractor's view) relates to its performance, and its proposals for assessing the effectiveness of those actions (for example, inclusion of related questions in the next survey). "Low" satisfaction should be defined in the Contract where possible, but it is acceptable to set it for an initial period and provide for review after, say, five years of operations. The production of a plan clearly requires some management time and is therefore a form of indirect financial cost for the Contractor, but it is intended primarily as a device to ensure that issues with user satisfaction are taken seriously by the Contractor.
20.7.8 As regards sanctions for poor user satisfaction, Authorities should consider the following potential approaches:
• immediate direct financial deduction (for example, a set amount per percentage point short of a pre-agreed baseline, which might remain constant or be adjusted to reflect obligations for continuous improvement). The design of such deductions should be subject to value for money evaluation, and the deductions themselves are likely to be fairly modest;
• the remediation plan discussed above could be connected to deductions should its adoption have no effect on user satisfaction;
• low satisfaction could require the Contractor to carry out a performance audit (at the Contractor's expense) in relation to the mechanism more widely. In effect, this represents using poor satisfaction ratings as a tool to ensure rigorous application of the "standard" elements of the payment mechanism, and it is an indirect approach to giving financial effect to poor user satisfaction; and/or
• linking deductions to complaints or to calls to a help-desk.
20.7.9 Authorities should consider whether to deduct from the Unitary Charge for poor satisfaction, to reward out-performance, or both. Payments linked to user satisfaction may be an area where reward for out-performance does have merit (see Section 19.2.4), in which case the incentive could work both ways.
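The per-point deduction and out-performance mechanics described above can be sketched in outline. The sketch below is purely illustrative: the baseline score, per-point rates, and caps are hypothetical figures, and in practice every parameter would be fixed and calibrated in the Contract before the appointment of the winning bidder.

```python
# Illustrative sketch only: all figures (baseline, per-point rates, caps)
# are hypothetical and would be fixed in the Contract during calibration.

def satisfaction_adjustment(score: float,
                            baseline: float = 80.0,
                            deduction_per_point: float = 1_000.0,
                            bonus_per_point: float = 500.0,
                            max_deduction: float = 10_000.0,
                            max_bonus: float = 5_000.0) -> float:
    """Return the adjustment to the Unitary Charge (negative = deduction)
    for a survey satisfaction score expressed as a percentage."""
    if score < baseline:
        # A set amount per percentage point short of the pre-agreed
        # baseline, capped so the deduction remains a modest element
        # of the overall Unitary Charge.
        return -min((baseline - score) * deduction_per_point, max_deduction)
    # Optional symmetric reward for out-performance, also capped.
    return min((score - baseline) * bonus_per_point, max_bonus)

print(satisfaction_adjustment(74.0))  # 6 points below baseline: -6000.0
print(satisfaction_adjustment(92.0))  # 12 points above, bonus capped: 5000.0
```

A baseline that rises over the Contract Period (reflecting continuous improvement obligations) could be modelled simply by passing a different `baseline` value for each survey year.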
20.7.10 As with other elements of the payment mechanism, Authorities should agree the detail of the measurement process and the financial implications before the appointment of the winning bidder (for example, they should agree the design and content of any questionnaire, required scores, sample size/identification process, and the details of who is going to carry out the survey, how and when).
20.7.11 The best approach to the choice of user groups for satisfaction surveys will depend on the particular project. There is a range of different "users" in most projects, from the contract management function within the Authority (e.g. in a local authority), through local management of the facility (e.g. a head teacher) and operational staff (e.g. teachers), to wider stakeholders (e.g. pupils or their parents). Either party may be more comfortable with surveying some groups than others, depending on its relationships with those involved.
20.7.12 It may in some cases be value for money to measure outcomes from the Service as a whole, which reflect the performance of public-sector staff and Contractor staff together, e.g. health or educational outcomes compared to an appropriate comparator group. This moves away from a focus on the Contractor's activities but is more objective, albeit that it may be more appropriate for payments for out-performance than deductions.
20.7.13 Authorities can calibrate user satisfaction requirements against pre-PF2 performance where possible (e.g. for a refurbishment project), or results from similar projects (e.g. those run by the same project sponsors or Authority).
______________________________________________________________________________________________________________
8 The use of user satisfaction is, however, not intended to cut across use of the Authority change procedure where real service specification changes are needed.
9 Unless they are of an exceptional nature, where the Authority is satisfied that there is no meaningful way in which a customer satisfaction survey can be created.
10 These options both assume that the respondents will be the actual users. Another suggestion is to use "mystery shoppers". A mystery shopper is a qualified independent individual used to test aspects of the Service; this reduces subjectivity as they will apply the same standards throughout. The routine use of external organisations, including mystery shoppers, is likely to have cost implications which may undermine value for money in all but the largest projects.