Step 3 - Identifying the evaluation objectives and research questions

5.12  The third step in planning an evaluation is to identify the evaluation objectives and the questions the evaluation will address. The logic model will assist this process by identifying the anticipated inputs, outputs, outcomes and impacts. Importantly, the model will also identify theoretical links between inputs and outputs that the evaluation may need to test. When developing the evaluation questions, it is important to assess not only the importance of each question but also how the information will be used. This will help prioritise and determine what is to be evaluated. It will also be necessary to consider what constitutes a proportionate and realistic evaluation given the resources and data available, and what is already known about the policy and its delivery.

5.13  As part of this consideration, when planning the evaluation it is important to decide what the evaluation will add to the existing body of knowledge about what does or does not work. In the case of a new, innovative or pilot policy, this may be fairly obvious. However, in other cases it may be more important for the evaluation to confirm previous results in different contexts, or explore aspects that previous evaluations of similar policies left untouched. In either case, a good understanding of what is already known and the existing evidence base is crucial. If an important question is whether the programme is more effective than similar ones evaluated previously, it will be important to ensure that the evaluation is planned and data collected in such a way as to maximise comparability between the two sets of findings.

5.14  As outlined in Chapter 2, whatever the scope of the evaluation questions, they will normally fall under one of two broad questions: "what difference did the policy make?" or "how was it delivered?" However, it will be necessary to define more specific questions than these; the evaluation questions will be quite specific to the particular policy and logic model. Identifying the evaluation questions is an activity that would normally be undertaken jointly by policy and analytical colleagues. Table 5.B lists a number of issues to consider when developing evaluation questions.

Table 5.B: Issues to consider when developing evaluation questions

What difference did the policy make?

- How will you know if the policy is a success? Which of the outcomes will it be important to assess?
- Do you need to quantify impacts, as well as describe them? How measurable are the various outcomes which might describe the policy's impacts?
- How complex is the impact pathway/logic model? How important is it to control for confounding factors?
- What were the impacts for the target group? Do you need data on average or marginal impacts?
- Were there different impacts for different groups?
- How developed is the existing evidence base? Could it enable the scope of the evaluation to be restricted to those areas, impacts or processes where knowledge is most uncertain?
- How should the costs and benefits of the policy be assessed? How do the outcomes contribute to social wellbeing, and how do they generate costs?
- What longer term or wider knock-on effects should be considered? How will you know whether there were any unintended effects?

How was the policy delivered?

- Is it important to understand why the policy does or does not achieve anticipated outcomes?
- Which aspects of the delivery process are innovative or untested?
- Is it important to learn about uptake, drop-out, attitudes etc.?
- What contextual factors might affect delivery (e.g. economic climate, other policy measures, etc.)?
- What process information would be necessary, or useful, for any planned impact evaluation?
- What were the experiences of service users, delivery partners, staff and organisations?
- How complete are current data collection processes? Are the issues to be considered likely to need tailored data collection?