Why undertake a process evaluation, action research or case study?

8.18  These three types of research overlap considerably: they can answer similar research questions and tend to use a similar range of methods to collect data. However, each has different principal characteristics, which are presented in Table 8.B.

Table 8.B: The principal characteristics of process evaluation, case studies and action research

Process evaluation: Probably the widest and most flexible of the three types of evaluation. Investigates a number of different research questions to understand and chart the implementation and delivery of a policy. Summation of past activity (whilst still having the aim to influence and improve future practice).

Action research: Interactive and iterative research which is used to influence the development of the policy being implemented. Therefore involves close collaboration between the researchers and policy makers. Requires commitment from policy makers to swiftly and continuously reflect upon and amend their policies, which may not be feasible with large scale policy implementation.

Case studies: Focussed on smaller scale or more localised aspects of policy delivery, providing a level of detail that would be unwieldy if replicated for the full breadth of standard policy implementation. Might be used as part of a wider process evaluation.

8.19 These types of research can also be used in combination to strengthen the evaluation of a policy's implementation and delivery. For example, action research might be undertaken when a policy is initially being implemented to refine its procedures and practice, and a process evaluation could then assess the delivery of the final version of the policy. Alternatively, a case study approach could be used within a process evaluation to provide more detail and in-depth data on a context, area or situation of particular interest.

8.20 Taken together, these types of research may be particularly useful when:

•  evaluating a new or innovative pilot project where rich data is needed to evaluate what has worked more or less well, including how a policy might be streamlined and made more efficient and how it might be developed in order to be rolled out to a wider audience;

•  assessing best practice to identify aspects of policy delivery that appear particularly effective or successful in the area(s) being studied and which might provide a model for similar work in similar areas;

•  identifying how to develop or improve service and policy delivery (for example, the evaluation of the Sure Start children's centres showed that there were barriers to fathers participating fully, and was able to give useful suggestions as to how this could be improved);4

•  investigating local variation and practice and whether this has a positive or negative influence on implementation;

•  assessing or identifying unintended or unforeseen consequences of the policy that might affect its overall impact; and

•  conducting an impact evaluation will not be possible or will be severely constrained. This might include a small-scale project where the sample size is too small to support an impact evaluation, a project that is rolled out nationally so there is no opportunity to create a comparison group, or a policy where the impact of interest may not be measurable or cannot be measured until too late in the policy cycle. In these situations, monitoring data or process evaluation could provide descriptive data on performance against agreed targets or outputs, as well as qualitative assessments of efficacy.

8.21 These types of research can also supplement and complement an impact evaluation with rich data to explain the impact (or lack of impact) that has been observed. Evaluation of the implementation and delivery of a policy can specifically help explain why, how and for what reasons policy outcomes occur, whereas impact evaluations tend to focus on what, where and when outcomes occur.

8.22 For example, a process evaluation may identify that a policy has not been targeted correctly (such as a community service intended for the socially deprived that has actually been primarily accessed by more affluent and established members of the community), which means that the expected outcomes were unlikely to occur. Alternatively, it could explain why the intended recipients of a policy have not engaged with it or why the policy has not met their needs (for example, a service to get people into employment may initially have successful outcomes, but if the employment is not suitable for people's skills or existing commitments they may resign).

8.23 Importantly, a process evaluation can provide further data to explain differences observed in an impact evaluation. For example, an impact evaluation might show more or less impact for different groups of service recipients, and a process evaluation or case study could provide insight into their experiences of the policy that might explain these variations in success.

8.24 Process evaluations, action research and case studies can therefore answer a range of policy and research questions and are very flexible and useful analytical tools. However, as with all evaluations (and as discussed in Chapter 5), in order to get the most benefit from them, it is important for policy makers and analysts to identify at the design stage what specific information will be needed about a policy. This will help identify what type of evaluation will be most appropriate and effective, and at what stage(s) data should be collected and analysed. Box 8.B provides an example of a process evaluation.

Box 8.B: Process evaluation example

Evaluation of provision of calorie information by catering outlets (Food Standards Agency)

Provision of nutrition information in catering settings, specifically calorie labelling, formed part of the previous government's wider programme of activities to tackle a range of diet-related public health issues, including obesity. In January 2008, the Food Standards Agency (FSA) launched an initiative beginning with the voluntary provision of calorie information (CI) at point of choice (POC), as the first step to providing consumers with more consistent nutrition information when eating outside the home.

The aim of the evaluation was to explore the practical implications for the 21 businesses participating in the pilot scheme of setting up and running the scheme, and to gain an early understanding of consumers' and customers' use and understanding of the scheme, in order to provide information on what worked and where improvements could be made. ('Consumers' were respondents who took part in group discussions and who indicated that they regularly ate in the types of catering outlets represented by the participating companies; 'customers' were respondents who took part in observations and interviews in the participating catering outlets.) A process evaluation approach was adopted and several different methods were used.

Business research

•  39 business interviews (20 Head Office, 19 Outlet Manager) were conducted in person or over the phone, depending on businesses' preferences. These explored why the business participated in the research, how it set up the scheme, decisions around the display of CI, and how issues during set-up and roll-out were dealt with.

Customer research and consumer research

•  289 customer interviews across the country in catering outlets: 143 POC observational interviews, where behaviours were observed and customers were asked how they were choosing their food, and 146 post-choice interviews conducted shortly after people had made their food choices. These explored understanding and use of CI in purchasing decisions and views on the presentation of CI.

•  Eight group discussions with consumers in four locations, which explored in more detail issues raised in the customer interviews.

The main findings of the evaluation were:

•  participating businesses were generally positive about their involvement in the pilot and most set-up issues were overcome with relative ease; there were some concerns about further roll out that would need to be addressed (e.g. ensuring adequate IT systems in place);

•  actual usage of CI was low, but consumers could envisage ways in which CI might be used (e.g. balancing meals); and

•  the capacity and inclination of consumers to use the information was dependent on three factors: visibility (presentation of CI should ensure that the text stands out so that it is noticed), understanding (additional information, e.g. reference information, helps consumers interpret CI accurately) and consumer engagement (the use of positive messages when displaying information helped engage consumers with CI).

The findings from the evaluation were used to develop proposals for a voluntary calorie labelling scheme; these were put out to consultation in early 2010 and have shaped the guiding principles for the scheme.5




_______________________________________________________________________

4 Fathers in Sure Start local programmes, Lloyd, O'Brien and Lewis, 2003, NESS Research Report 04, DfES, HMSO. http://www.education.gov.uk/publications/





5 An evaluation of provision of calorie information by catering outlets, prepared for the Food Standards Agency, BMRB Social Research, December 2009, http://www.food.gov.uk/