7.5 Monitoring data can play a key part in policy evaluation by providing useful information to policy makers and analysts throughout the life of a policy. This can support both the monitoring of the policy as part of its routine management, and its evaluation (see how monitoring and evaluation fit into the ROAMEF policy cycle in Chapter 1).
7.6 Monitoring data are regularly collected about a policy and can include data relating to each component of the logic model (see Chapter 5 for further information on logic models) as summarised in Table 7.A.
| Data | Example | Why collect this data? |
| --- | --- | --- |
| The people accessing a service | Numbers and characteristics | This can help demonstrate whether a policy is reaching its target population |
| Inputs | Funding or staff numbers | This can inform a cost-benefit analysis and determine whether assumptions about the policy implementation, such as cost and time, were correct |
| Processes / activities | Referrals and waiting times | This can help determine whether the policy is being implemented correctly or whether there are any unintended consequences |
| Outputs | Numbers of people getting job interviews or number of applications processed | This can inform an assessment of whether the programme has delivered the target outputs to the anticipated quality |
| Outcomes | Employment rates and wages | This will help to measure the benefits of delivering the outputs |
7.7 Monitoring data are frequently administrative and quantitative and are often not generated primarily for evaluation. This does not stop them from being a very useful resource for analysts, however, and the availability of this type of data, and any opportunity to adapt or collect it in a way that best supports the evaluation, should be considered at the design stage.
7.8 Monitoring data can provide answers to a number of policy, research and performance questions. Monitoring data may form the basis of an impact evaluation if the data are of sufficient quality and allow the estimation of a counterfactual. They also provide information to monitor the progress and performance of a policy from its start and can contribute to a process evaluation.
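As an illustration of how monitoring data might support the estimation of a counterfactual, the sketch below applies a simple difference-in-differences calculation to hypothetical figures for a programme area and a comparison area. The function name and all numbers are invented for illustration; a real evaluation would also need to test the assumption that the two areas would have followed parallel trends without the policy.

```python
# Illustrative difference-in-differences using hypothetical monitoring data.
# All figures are invented and do not come from this guidance.

def difference_in_differences(treated_before, treated_after,
                              comparison_before, comparison_after):
    """Estimate the policy's impact as the change in the treated group
    over and above the change in the comparison group."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical employment rates (%) taken from routine monitoring data.
impact = difference_in_differences(
    treated_before=52.0, treated_after=58.0,       # programme area
    comparison_before=51.0, comparison_after=54.0  # comparison area
)
print(impact)  # 3.0: estimated percentage-point impact of the policy
```

The comparison-area change stands in for the counterfactual: what would likely have happened to the programme area had the policy not been introduced.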
7.9 In its role supporting evaluation, monitoring data can be used to capture and measure:
• the logic model that underpinned the policy (see Chapter 5). Where, for example, an outcome (which may take some time to materialise) is dependent on a sequence of initial processes, data showing whether these early stages are happening will have implications for the confidence policy makers can have in achieving their ultimate objective. For example, where a policy to reduce reoffending is thought to depend on an initial process of offenders regularly attending Probation services, and the monitoring data show a low rate of attendance, these data, in conjunction with the logic model, may give an early indication that the policy is unlikely to be successful;
• the progress of a policy, programme or project against a set of pre-specified expenditure or output targets. For example, the number of client contact sessions with the Probation Service against the target number of contact sessions;
• the numbers and characteristics of people, organisations and businesses accessing or using a policy. For example, the demographics of offenders on a reducing reoffending programme and of those who drop out. This can help to determine whether the programme is reaching the target population and whether those who drop out differ from those who stay;
• the contact details of individuals, groups, organisations or agencies that are participating in, or are subject to, the policy and, in some cases, the contact details of a control or comparison group. These can be used to inform the sampling strategies for follow-up research. Alternatively, these data may be required to identify individuals on a further dataset, for example, to identify offenders on the Police National Computer to investigate whether they have reoffended;
• the impacts of a policy on central and local government and its agencies, such as hospital admissions and stays; arrests by the police and court prosecutions; enrolments in training courses and university places; and use of social services and housing;
• the costs of a policy; these can include costs to other stakeholders, such as businesses or survey respondents, as well as to government. For example, monitoring data may include the amount of time Probation Officers spend on client contact sessions, which can help calculate the total staff costs of implementing the programme or policy; and
• the economic effects of a policy, through changes in incomes, prices, employment, consumption and other economic measures and indicators of value.
7.10 Analysis of monitoring data can more generally help policy makers identify where a policy is not being implemented as expected and further action is required to ensure it can achieve its objectives. If the monitoring data suggest something is going wrong (such as fewer referrals to a scheme than expected), then policy makers or analysts may want to use an evaluation to check the extent of the "problem" and its causes to inform contingency actions. Box 7.A provides an example of how monitoring data can be used within an evaluation.
7.11 It is worth noting that care should be taken to establish the quality of the monitoring data being collected, as poor quality or partial data will limit the scope and scale of monitoring data's contribution to an evaluation.
Box 7.A: Free Swimming Programme Evaluation (Department for Culture, Media and Sport)

The Free Swimming Programme began in April 2009 and was due to run to March 2011, but finished early in July 2010. It was funded by five government departments and was intended to get more adults, children and young people physically active. Funding was split into four pots: two supporting free swimming (one for 16 and unders, and another for 60 and overs) plus two capital modernisation pots.

The evaluation had three main objectives:
• to measure changes in swimming participation;
• to identify lessons about what works, how, in what context and for whom; and
• to estimate the value for money, health and economic benefits of the programme.

A programme logic model was developed to provide a structure for the evaluation and guide the research. Evidence to measure the inputs, activities, outputs, outcomes and processes identified in the logic model was collected and analysed through a range of mechanisms:
• collection of monitoring data on the number of free swims and free swimming lessons from all 261 local authorities involved in the programme;
• analysis of the Active People Survey, a national c.190,000 sample telephone survey which measures participation in sport and physical activity;
• a bespoke online survey of 4,000 members of the population in the target age groups to assess participation in, and attitudes towards, swimming;
• case study visits to a sample of 12 participating local authorities;
• telephone interviews with a sample of 18 non-participating local authorities; and
• a literature review to assess the health and associated economic impacts of sport and physical activity.

A key focus of the analysis was on understanding the net impact of the programme. The key factors that affected the estimation of additionality² for this programme were:
• the reference case / counterfactual;
• deadweight (people who would have swum anyway, even if they had to pay);
• displacement / substitution (the extent to which the programme displaced swimmers from outside the target age groups, and how it impacted on participation in other sports);
• wider effects (the impact of the programme on paid swims by friends and family members); and
• sustainability (the likelihood of those who swam for free continuing to swim after the end of the programme).

The main evaluation findings were based heavily on the local authority monitoring data (for measuring gross impact) and the online survey data (for estimating additionality and net impact). There were concerns about the quality of some of the local authority data collection systems, but triangulation allowed an initial analysis of the revenue subsidy part of the programme, which suggested that the cost was greater than the benefit (in terms of avoided cost to the health service). The findings of the first annual evaluation report³ informed the government decision to end the programme early in July 2010.
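The adjustments listed in Box 7.A — deadweight, displacement and wider effects — can be sketched as a simple calculation that converts a gross output figure into an estimate of additional outcomes. The function name, field names and all figures below are hypothetical, chosen only to illustrate the arithmetic; they are not the programme's actual results.

```python
# Illustrative net-impact calculation using the adjustment factors named in
# Box 7.A. All numbers are hypothetical.

def net_impact(gross_swims, deadweight_rate, displacement_rate,
               wider_effect_swims):
    """Adjust a gross output figure for deadweight and displacement,
    then add wider (knock-on) effects."""
    # Remove swims that would have happened anyway (deadweight).
    additional = gross_swims * (1 - deadweight_rate)
    # Remove swims displaced from other sessions or sports.
    additional *= (1 - displacement_rate)
    # Add wider effects, e.g. paid swims by accompanying friends and family.
    return additional + wider_effect_swims

# Hypothetical monitoring figures:
result = net_impact(
    gross_swims=100_000,      # free swims recorded by local authorities
    deadweight_rate=0.60,     # 60% would have swum anyway
    displacement_rate=0.10,   # 10% displaced from elsewhere
    wider_effect_swims=5_000, # extra paid swims by friends and family
)
print(result)  # roughly 41,000 additional swims attributable to the programme
```

Sustainability, the final factor in the box, concerns whether these additional swims continue after the programme ends, so it would affect the benefit stream over time rather than this single-period figure.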
________________________________________________________________________
2 The number of additional positive outcomes that the programme creates. It equals the number of positive outcomes achieved with the programme minus the counterfactual. It is a measure of the programme effect or impact. See Chapter 6 for a more detailed discussion.
3 Evaluation of the Impact of Free Swimming, PricewaterhouseCoopers, 2010 ( http://www.culture.gov.uk/ )