“Systematic review” in higher education: the naked emperor
Updated: Sep 26, 2021
Dr Jean Parker, Centre of Excellence for Equity in Higher Education
Here at CEEHE we recently completed a project that had a “systematic literature review” as one of its elements. Since the 1990s such systematic reviews have become prominent methods of policy evaluation, especially favoured by governments. And yet we found profound limitations when we tried to undertake one according to the methods outlined by proponents.
Based on this experience, researchers in CEEHE are looking critically at the evolution of systematic review as a ‘gold standard’ in policy evaluation. We are examining the way that systematic reviewing has been adopted in social science research and policy evaluation in Australia, and the negative impacts this has had on our understanding of what constitutes “evidence” in our field. In this blog we report on our developing argument that systematic reviews produce very little insight into questions of equity in higher education, and should not be recommended for learning about trends in higher education.
Clegg (2005) provides an excellent account of the emergence of systematic review in the UK in the 1990s. Under Tony Blair’s Labour government, systematic review methodologies were promoted as part of a technocratic rebellion against academic expertise. Policy makers wanted the “evidence” of “what works” in order to best target rapidly diminishing public funds. They saw academic experience as a source of “bias” in getting to this evidence. As Clegg puts it: “In recent times we have seen that the argument against professional knowledge is being deployed by policy-makers, influenced by cost considerations.” (Clegg, 2005, p. 417).
MacLure (2005) also sees systematic reviewing methods as inherently suspicious of and opposed to scholarship and the elements that go into critical academic analysis:
“Systematic review, and the ‘evidence movement’ (Oakley, 2003, p. 23) from which it emerged, thus continually recycles a ‘discourse of distrust’ of education professionals and academics (Torrance, 2004, p. 3). Research, and researchers, are repeatedly reported or implied to be careless, undemocratic, furtive (i.e., prone to ‘hide failures’), biased, incompetent, ‘chaotic’, ‘inward-looking and self-seeking’, ‘methodologically impoverished’ and even potentially life-threatening” (MacLure, 2005, p. 396).
Both authors note the role played by the ‘Evidence for Policy and Practice Information and Co-ordinating Centre’ – or EPPI-Centre (based at University College London) – as a standard bearer and advocate for systematic review. The EPPI-Centre took the tools that medical researchers had developed to derive the best evidence from huge numbers of medical studies (known as meta-analyses or systematic reviews) and applied them to questions of social policy. It developed a detailed and rigid set of methods for systematic review, which it promoted as best practice in research to inform policy decisions.
However, the nature of the studies and the types of evidence that go into medical meta-analyses are fundamentally different from studies of higher education. Most systematic reviews in science evaluate and synthesise large numbers of randomised controlled trials. They look for efficacy of a drug or treatment under experimental conditions where there are control groups and therefore counterfactuals.
In contrast, studies looking at the nature of inequity in higher education are performed in particular universities that are immersed in a messy social reality. They happen amidst constantly changing social and individual pressures, all of which can influence student outcomes. The complexity of the social forces at play in any study examining equity policy makes it impossible to empirically isolate which factors lead to which outcomes:
“Experimentation in the social sciences and education takes place in open conditions. In most designs in the social sciences and education, if they are not trivial, both the inputs… and the outcomes (e.g. student learning) are complex.” (Clegg, 2005, p. 421)
This does not mean we should abandon attempts to study programs, let alone abandon funding and developing those programs! It means researchers need to rely on critical analysis to make a case as to which factors create which outcomes.
There is a fundamental dissonance between the design of systematic review and the kinds of evidence that developments in higher education provide. Systematic review provides no space for, indeed is hostile to, the analytical work needed to make a case for the key underlying drivers of complex social trends and changes. For all the rigour of its methods, there is a fundamental void when it comes to what social research actually needs – coherent and theoretically informed accounts of how programs can meet the differing needs of underrepresented students. Ignoring this and forcing systematic review techniques onto higher education researchers wastes resources, as the outcomes of these studies are so limited. It also pulls research, and what is understood as “rigour”, in a direction that limits our understanding of how to increase equity for underrepresented students.
Dr Jean Parker and Dr Matthew Bunn will be presenting and writing on this work in the coming months.
Clegg, S. (2005). Evidence‐based practice in educational research: A critical realist critique of systematic review. British Journal of Sociology of Education, 26(3), 415–428.
MacLure, M. (2005). ‘Clarity bordering on stupidity’: Where’s the quality in systematic review? Journal of Education Policy, 20(4), 393–416.
We acknowledge and respect the Gadigal peoples of the Eora Nation and the Pambalong Clan of the Awabakal People, the traditional custodians of the land on which we work.