6.16 Establishing what is already known about a policy intervention presents a major challenge for knowledge management. First, the sheer volume of potential research evidence makes it almost impossible to keep abreast of the research literature in any one area. Second, research and information are not of equal value: some way of differentiating between high- and lower-quality studies, and between relevant and irrelevant evidence, is required.
6.17 Systematic review is a tool which can be characterised by:
• a clearly stated set of objectives with pre-defined eligibility criteria for studies;
• an explicit, reproducible methodology;
• a systematic search that attempts to identify all studies that meet the eligibility criteria;
• a formal assessment of the validity of the findings of the included studies; and
• a systematic presentation, and synthesis, of the characteristics and findings of the included studies.
6.18 Systematic reviews therefore differ from other literature reviews by following an explicit protocol for identifying and assessing relevant studies. For instance, the protocol might specify which reference databases are to be searched, which search terms are to be used, and which criteria are to be used to filter studies and select those for detailed review. In general, the review of the selected studies will be qualitative, although systematic review can be combined with other evaluation techniques, such as meta-analysis. The basic principles of systematic review are set out in Box 6.F.
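To illustrate what a pre-specified protocol might record, the sketch below sets out the elements described above (the review question, the databases and search terms, and the selection criteria) as a simple structured specification in Python. It is purely illustrative: the class name, its fields and the placeholder database names are assumptions rather than part of any prescribed format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReviewProtocol:
        """Illustrative, pre-specified elements of a systematic review protocol."""
        question: str                   # the answerable review question
        databases: List[str]            # reference databases to be searched
        search_terms: List[str]         # search terms and combinations to be used
        inclusion_criteria: List[str]   # criteria for selecting studies for detailed review
        exclusion_criteria: List[str] = field(default_factory=list)

    # Hypothetical example drawing on the intervention question set out in Box 6.F.
    protocol = ReviewProtocol(
        question=("What is the effect of a personal adviser service on retaining "
                  "and advancing lone parents in the UK workforce?"),
        databases=["Database A", "Database B"],   # placeholders, not real sources
        search_terms=["lone parent*", "personal adviser", "employment retention"],
        inclusion_criteria=["UK setting", "controlled or comparative design"],
    )
    print(protocol.question)

Recording the protocol in a fixed, machine-readable structure of this kind is one way of making the review's methodology explicit and reproducible before any studies are examined.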
6.19 The Campbell Collaboration (http://www.campbellcollaboration.org) provides extensive guidance on undertaking systematic reviews. The Centre for Evidence-Informed Policy and Practice in Education (the EPPI-Centre) (http://eppi.ioe.ac.uk/) at the Institute of Education undertakes and commissions systematic reviews in education, and is developing methods for undertaking systematic reviews of social science and public policy research. The Economic and Social Research Council (ESRC) has established an Evidence Network (http://www.evidencenetwork.org).
Box 6.F: The principles and practice of systematic review

Defining an answerable question

A systematic review should address a question that clearly specifies the interventions, factors or processes of interest, the population and/or sub-groups in question, the outcomes of interest and the context in which they are set. The question needs to distinguish whether the interest is in the outcome of an intervention or in the implementation of a policy.

An example of an answerable question about a policy intervention might be: What is the effect of a personal adviser service (intervention) in terms of retaining (outcome 1) and advancing (outcome 2) lone parents (population) in the UK workforce (context)?

An example of an answerable question about implementation might be: What are the barriers to (factor/process 1) and facilitators of (factor/process 2) getting lone parents (population) to participate (outcome 1) and advance (outcome 2) in the UK workforce (context)?

Systematic searching for studies

Systematic reviews differ from traditional reviews in the comprehensiveness and procedural formality of the search for all of the available research evidence. Systematic searching counters the selection bias that comes from identifying only those studies that are readily accessible, or that are published and indexed in major databases. It also helps to reduce publication bias, which arises because studies reporting positive (or, in some cases, negative) results are more likely to be published. Systematic searching covers electronic sources, print sources and the "grey" literature.

Methods of systematic searching

There are at least two methods of systematically searching for potential studies for a review:
• searching by all methodology types yields a set of studies that is more sensitive to the overall literature on the topic in question, but may include studies of limited relevance (i.e. low specificity); or
• searching by specific methodology types yields fewer, but potentially more relevant, studies (i.e. lower sensitivity).
Searching by specific methodologies might therefore save time and resources, but at the expense of introducing possible selection bias into the review.

Critical appraisal

Critical appraisal is an essential part of a systematic review. Explicit and transparent criteria are used to determine the quality and strength of the identified studies, and hence the weight attached to their findings. Studies that do not meet sufficient quality standards can be rejected. Example criteria that could be used to appraise studies using experimental designs are set out below:
• Question focus: was a clear and answerable question asked?
• Population/groups studied: were the populations and sub-groups studied clearly reported, and was the sample size adequate?
• Selection bias: was there any selection bias in the achieved sample and, if so, was it effectively accounted for?
• Performance bias: were the trial and control groups treated similarly other than through the intervention?
• Statistical methods and reporting: were the statistical tests used appropriate to the questions being asked, and were they reported in enough detail to permit validation and review?
Data extraction and organisation

A data collection form should be developed recording how, and why, data are to be extracted from the included studies. The data that are relevant to the question being asked should determine the form of data extraction and organisation that is appropriate. A non-exhaustive and non-prescriptive list of items is set out below:
• the nature of the interventions or processes studied;
• the studies' characteristics: the methods used, and the research design and analytical methods employed;
• the participants (populations and sub-groups) included and excluded; and
• the outcomes or processes measured/observed, and the main and subsidiary findings.

Analysis of data from sifted studies

The analysis of data from the sifted studies will depend on the policy question(s) being asked, the type of methodology used in the primary studies, and the likely use to which the findings are to be put. Some issues to be considered in the analysis of included studies are suggested below:
• the appropriate comparisons (if any) to be made by the analysis, and the basis (study results) for making them; these might require some transformation or manipulation to make them comparable;
• the assessments of validity to be used in the analysis, and the analytical approaches to be used for making comparisons and summarising results, including meta-analysis (an illustrative sketch follows this box);
• how heterogeneity/homogeneity of the included studies will be dealt with; and
• the main findings of the review and the main caveats associated with them.

Summary and conclusions

A review should provide summary answers, as well as detailed analysis and conclusions, to the policy question(s) being addressed. The review should be as clear as possible about what can and cannot be concluded from the existing evidence, and should identify any weaknesses or limitations in the evidence on the topic in question. Finally, the conclusions of a systematic review become less relevant over time as existing studies age and new studies become available, so any review should be dated and its most recent update noted.
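Box 6.F mentions meta-analysis as one analytical approach for summarising results across comparable studies. The sketch below illustrates one common variant, fixed-effect (inverse-variance) pooling, in Python. The function name and the effect sizes and variances of the three studies are hypothetical; in practice the choice between fixed- and random-effects models depends on how heterogeneous the included studies are.

    import math

    def fixed_effect_pool(effects, variances):
        # Inverse-variance (fixed-effect) pooling of comparable study effect sizes.
        weights = [1.0 / v for v in variances]                   # precision weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))                       # standard error of the pooled effect
        return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # estimate and 95% confidence interval

    # Hypothetical effect sizes (e.g. standardised mean differences) and their
    # sampling variances for three included studies; the figures are illustrative only.
    pooled, ci = fixed_effect_pool([0.30, 0.12, 0.25], [0.010, 0.020, 0.015])
    print(round(pooled, 3), [round(x, 3) for x in ci])

Each study is weighted by the inverse of its variance, so larger, more precise studies contribute more to the pooled estimate; this is the same principle, whatever software is used to carry out the calculation.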