Research and teaching: it’s complicated…

Unlike medicine, education has always had an uneasy relationship with research. In 2013, Goldacre described how education lags behind other professions in its systematic use of evidence: you wouldn't want to be treated by a doctor who wasn't using the latest medical research, so why would you let your children be taught by a teacher who wasn't using the latest education research? He also called for more scientific studies of education interventions, and the wider argument this article engendered led eventually to the establishment of the Education Endowment Foundation's Teaching and Learning Toolkit, which aims to make a range of evidence accessible to teachers in easily digestible formats and is now used by the majority of schools in the UK. As well as making existing evidence available and accessible, the EEF have begun to accumulate a body of new knowledge about 'what works' by funding a series of Randomised Controlled Trials (RCTs), which seek to measure the impact of interventions and to share the findings with teachers in a format that is easy to understand and use.

I'm all for research evidence getting into the hands of teachers, and all of this seemed like a good idea. Teachers can't access published research journals without paying, and it's very time-consuming to read across several research studies in order to make sense of them as a body of evidence. Making research summaries publicly available, written in plain English, is one of the most powerful and useful pieces of work the EEF have undertaken. If this were the main output of the EEF, I would have no concerns about its impact on the teaching profession.

However, I do worry about the EEF's obsession with RCTs and about the unproblematised representation of these RCTs' findings on the EEF website. The RCTs the EEF funds are often short-term, for example one or two years in length, and we know that some educational interventions take time to have a significant impact on pupil learning, especially when they are professional development programmes that disrupt teachers' habitual classroom practices: a dip in performance often precedes a subsequent lift when a change is implemented. We also know that early changes to pupil learning behaviours are often predictors of later academic gains, and are often the only impact we see from a teaching and learning intervention delivered over a short time frame.

The RCTs focus almost exclusively on impact in terms of achievement in (usually core) subject areas. This is a very limited way of measuring learning: most of us working in schools would say that learning is so much greater than test results, and so much more important. All the EEF's research projects include a process evaluation, and for me these are often far more telling than the headline effect size. They tell us what teachers noticed about learning, and the impact they felt the intervention would have on their practice in the long term. Why is this data less valuable than pupil test data? Like Biesta (2010), I question whether the EEF's 'what works' model of evidence engagement undermines the value of teachers' professional judgment, or what might be called 'practice expertise', since it is based on narrow conceptions of teaching and school improvement. If I am teaching in a small rural pupil referral unit (PRU), with a particular cohort of pupils, an intervention that the EEF says has zero or negative impact might still be the best intervention to put in place with my pupils.

So as school leaders, what should we do when reading EEF research evidence?

  1. Give equal weight to effect sizes and process evaluation outcomes when reading the EEF website;
  2. Rely more on the literature reviews the EEF publishes than on single studies;
  3. Educate your staff so that they understand how to read the evidence, and how to be critical about its relevance to their practice.

And finally, get teachers to use evidence to carry out their own classroom research. Godfrey (2017) distinguishes three approaches teachers can take to engaging with research evidence:

  • 'evidence-based practice', a passive process in which teaching approaches are based on evidence about 'what works' produced by academics;
  • 'evidence-informed practice', whereby teachers actively combine evidence from academic research, practitioner enquiry (such as lesson study or action research) and other school-level data;
  • and 'research-informed practice', whereby teachers engage in and with both academic and practitioner forms of research, using evidence from each to make changes to their practice.

It is 'research-informed practice' that really makes the difference to teaching and learning. Schools that engage in research-informed practice have been shown to benefit in several different ways: increased professionalism (Furlong, 2014), improved attitudes to learning and renewed practice (Cain, 2015; Greany and Maxwell, 2017), improved pupil outcomes (Cordingley, 2015), and stronger school and system performance (Mincu, 2013; Supovitz, 2015).

Let's not allow ourselves simply to be led by the evidence; let's shape it ourselves and make it our own.
