Where do we find evidence about learning?


What are the paradigms?
Is teaching an art or a science? There has been an ongoing debate about the nature of teaching and how we find evidence about it. We could look at this along a spectrum:
Evidence comes from wider research ---- Evidence comes from our own practice
and the types of research themselves might be quite diverse:
Evidence comes from a scientific method ---- Evidence comes from interacting with people

which is sometimes described in the literature on research methodologies as research paradigms:

Positivism - Interpretivism - Criticality - Pragmatism

These have been the core research approaches to exploring education - but have they "solved" the problem of how to undertake good teaching and learning? Have they provided us with a set of "tools for teaching" and allowed us to know "what works"? Is this possible?

Whenever we are considering evidence we need to be aware of the biases we will display, consciously or unconsciously. These biases are formed from our likes and dislikes, our experiences, and our circumstances and opportunities. We cannot eliminate bias; we can only become more aware of it.

As researchers, and as presenters of our research, we should also be aware of the "logical fallacies" that are presented in arguments. This poster gives a good synopsis of some of those we should watch for in others, but also in ourselves. There is overlap with the biases above.

We should also consider the nature of the data we are collecting or considering. There are two key questions to ask about the data:

  • Is it valid? Are the findings genuine - do they answer the question(s) we are asking? - more on this
  • Is it reliable? If the study were done a second time, would it yield the same results? - more on this
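As a small illustration of reliability, one common check is test-retest: give the same assessment twice and correlate the two sets of scores. The sketch below uses invented scores and plain Python; a correlation near 1 suggests repeatable results.

```python
# Test-retest reliability sketch: correlate two sittings of the same
# assessment. All scores below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

first_sitting = [52, 61, 48, 75, 66, 58, 70, 44]
second_sitting = [55, 60, 50, 73, 68, 55, 72, 47]

r = pearson(first_sitting, second_sitting)
print(f"test-retest correlation: {r:.2f}")
```

A high correlation only speaks to reliability; the assessment could still be reliably measuring the wrong thing, which is the separate question of validity.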
What is evidence? - The move towards positivism

Coe (1999) says that evidence-based education is "more than trendy jargon": it is an approach which says that policy and practice should be capable of being justified in terms of sound evidence about their likely effects. Education may not be an exact science, but it is too important to be determined by unfounded opinion. Coe argues that the "only worthwhile kind of evidence about whether something works is to try it out". We need to be careful to distinguish between correlational effects (e.g. that children do as well in large classes as in small ones) and the causal reasons (that smaller groups tend to contain the more difficult students), so that we do not reach the wrong conclusions. He advocates in this paper that it is only by doing an experiment in which children are randomly assigned to classes of different sizes that we could truly separate class size from the other factors. He argues that the idea of evidence is problematic, and that some of the issues are:

  • Evidence is not value free
  • There are no universal solutions or quick fixes
  • Evidence is often incomplete or equivocal
  • Evidence can be complex
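Coe's class-size example can be illustrated with a toy simulation (all numbers invented): when weaker pupils are systematically placed in small classes, a naive comparison of outcomes points the wrong way, while random assignment recovers the assumed benefit.

```python
# Toy simulation of Coe's argument for randomised assignment.
# TRUE_EFFECT is an assumed small-class benefit, invented for the sketch.
import random

random.seed(1)
TRUE_EFFECT = 2.0

def score(ability, small_class):
    # attainment depends mostly on ability, plus noise
    return ability + (TRUE_EFFECT if small_class else 0.0) + random.gauss(0, 5)

abilities = [random.gauss(50, 10) for _ in range(5000)]
mean = lambda xs: sum(xs) / len(xs)

# Observational world: struggling pupils are placed in small classes
obs_small = [score(a, True) for a in abilities if a < 45]
obs_large = [score(a, False) for a in abilities if a >= 45]
obs_estimate = mean(obs_small) - mean(obs_large)  # badly biased

# Randomised trial: a coin flip, not ability, decides class size
small, large = [], []
for a in abilities:
    (small if random.random() < 0.5 else large).append(
        score(a, small_class=random.random() < 0)  # placeholder, replaced below
    )

# clearer version of the randomised assignment loop
small, large = [], []
for a in abilities:
    if random.random() < 0.5:
        small.append(score(a, True))
    else:
        large.append(score(a, False))
rct_estimate = mean(small) - mean(large)  # close to TRUE_EFFECT

print(f"observational estimate: {obs_estimate:+.1f}")
print(f"randomised estimate:    {rct_estimate:+.1f}")
```

The observational comparison suggests small classes are harmful, purely because of who was placed in them; randomisation breaks that link between ability and class size.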

There has been a move in recent years towards the idea of "evidence-based practice", rooted in the positivist methodology taken from scientific research and informed by changes in medical practice over the last 20 years. This is exemplified by Ben Goldacre, who was commissioned to write a paper for the Department for Education, published in 2013. This paper supported the positivist approach - though it acknowledged some of its deficits and the importance of a wider qualitative approach - and strongly promoted the idea of Randomised Controlled Trials (RCTs) as the 'gold standard', an idea which has been taken up by the Department for Education in its funding for the Education Endowment Foundation (EEF).

Slavin (2008) asks how we synthesise the range of evidence into results which produce "reliable, unbiased and meaningful information". Using a range of data he explores some of the efforts to do this, including those of the world's largest programme, the 'What Works Clearinghouse' in the US, and the 'Evidence for Policy and Practice Information and Co-ordinating Centre' (EPPI) in the UK. He argues that randomised assignment produces the highest-quality evidence, but explores the other kinds of studies and their contribution to the evidence base. His conclusions, while not strictly hierarchical, include:

  • Randomised assignment vs. matching: randomised designs are preferred, but large, well-controlled matched studies also contribute.
  • Randomised assignment vs. randomised quasi-experiments: fully randomised designs should be preferred.
  • Matched prospective vs. retrospective quasi-experiments: prospective studies should be preferred to retrospective comparisons.
  • Sample size: small studies can have highly variable effects and suffer from publication bias; larger studies should be preferred, and weighting by sample size may be used.
  • Duration: exclude studies of less than 12 weeks in duration.
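Slavin's last two rules can be sketched numerically (all study figures invented): drop studies shorter than 12 weeks, then pool the remaining effect sizes with sample-size weights so small, noisy studies count for less.

```python
# Sketch of pooling effect sizes under Slavin-style inclusion rules.
# Each tuple: (effect size, sample size, duration in weeks) -- invented.
studies = [
    (0.45, 60, 8),     # excluded: under 12 weeks
    (0.30, 400, 24),
    (0.10, 1200, 36),
    (0.60, 90, 20),
]

# Rule 1: exclude brief studies
kept = [(d, n) for d, n, weeks in studies if weeks >= 12]

# Rule 2: weight each effect size by its sample size
pooled = sum(d * n for d, n in kept) / sum(n for d, n in kept)
print(f"pooled effect size: {pooled:.2f}")
```

Note how the large study (n = 1200) pulls the pooled estimate well below a simple average of the kept effect sizes, which is exactly the intent of sample-size weighting.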


Not a universal viewpoint - detractors from the positivistic paradigm

Not everyone is happy that this is good practice even in medicine, from where Goldacre draws his position. Baum (2012) argues that whilst evidence is good, it should not be equated with truth. Medicine has formed an evidence hierarchy with double-blind, placebo-controlled, randomised clinical trials at the top and experience-based clinical acumen at the bottom. Baum argues we should be more positive about deductive reasoning (moving from the general to the specific) as well as inductive reasoning (taking a specific example and looking to generalise). He argues that RCTs are prone to inductive probability reasoning, and that significant reversals of previous trials, the use of probabilities as facts, and the obfuscation of data under statistics create a pseudo-myth of "what works" as proof.

In a long essay on the British Medical Journal website (https://www.bmj.com/content/348/bmj.g3725) Trisha Greenhalgh and colleagues argue that, although evidence based medicine has had many benefits, it has also had some negative unintended consequences. They offer a preliminary agenda for the movement’s renaissance, refocusing on providing useable evidence that can be combined with context and professional expertise so that individual patients get optimal treatment.

Rebecca Allen at the Institute of Education, herself a quantitative researcher and a supporter of the positivist approach and quantitative data, does make some caveats to the model. In a blog post, 'Evidence-based practice: why number-crunching tells only part of the story', she makes a number of comments:

  • It is important that RCTs sit alongside a body of wider evidence, as the research designs must come from somewhere and RCTs are expensive to run. These ideas (or treatments) tend to come from more qualitative paradigms around existing practice, using an interpretivist or ethnographic model.
  • The social science model of research is not ‘what works?’ but rather ‘what works for whom and under what conditions?’. The social context of a child shapes their learning more than it does in medicine - RCTs may tell us about the schools involved in the experiment, but translating this to other schools is fraught with difficulty.
  • This problem (external validity) means that RCTs can give us AN answer but not THE answer, and the validity declines as we try to implement the policy in different places and different time frames.

This social context is an important limiter on the positivist approach as social contexts change fast. As Allen puts it, "Whilst I would guess that taking a paracetamol will still relieve a headache in 50 years’ time, I suspect that the best intervention to improve pupil motivation and engagement will look very different to those we are testing in an RCT today".

Others are concerned that the positivist paradigm is being pushed by policy makers into educational research, and there are some serious critiques of it. One of the strongest critics is Furedi, who argues that teaching is "not some kind of clinical cure", that using the language of medicalisation is to assume that education is best treated with an interventionist model, and that schools are not like hospitals where we are trying to cure the illnesses of the pupils.

He also argues that we should "keep the scourge of scientism out of schools": we are unwilling to accept that there is complexity and nuance in the issues facing us in education, and instead we look to use the discourse of science to 'prove' a policy or educational method. Whilst the scientific method can contribute to the discussion, it is not the answer.

Frank Furedi - teaching is not some kind of clinical cure
Others contributing in this area

The Evidence Based Teachers Network (http://www.ebtn.org.uk)

The EBTN is open to anyone in the teaching profession interested in using evidence-based methods, either in their classroom or as a strategy for improving learning in their school or college. It places a focus upon evidence-based teaching, which requires consistent standards of evidence to be available before a teaching method is used. In reality, at present, a great deal of practice and guidance in education is not evidence based. The need for change is recognised, but meeting it is not simple, as many of those who follow this path soon discover that much research is contradictory.

John Hattie (http://visible-learning.org)

Professor John Hattie collated the impact on pupil achievement reported in more than half a million research studies on teaching methods and the other variables that affect achievement. It is the biggest and most authoritative review of classroom-based educational research ever undertaken. It concludes that the sources of variance in students' achievement are:

  • Students themselves; this accounts for about 50% of the variance in achievement. The skills and attributes that students possess predict achievement more than any other single variable.
  • Home; this accounts for about 5-10% of the variance and is often linked to the levels of expectation and encouragement students receive from their parents or caregivers.
  • Schools; this accounts for about 5-10% of the variance - a similar proportion to the influence of home. School buildings, leaders, organisation etc. barely make a difference to achievement.
  • Peer influence; this also accounts for about 5-10% of the variance.
  • Teachers; teachers account for about 30% of the variance. It is what teachers know, do, and care about which is very powerful in this learning equation, and which has the most impact after the attributes and skills possessed by the students themselves.

His meta study identifies the teacher characteristics that make the most impact as including:

  • Teachers taking responsibility; don’t blame the pupils;
  • Teachers as agents of change more than facilitators;
  • Teachers gaining feedback about their own effectiveness and progress;
  • Teachers who provide challenge that is more than “do your best”;
  • Teachers who welcome error and build trust among peers in classrooms;
  • Teachers who see assessment as informing them more than informing pupils;
  • Teachers as evaluators (of themselves more than of pupils).

Professor Robert J. Marzano (http://www.marzanoresearch.com)

Professor Robert J. Marzano working with other researchers in the USA, identified nine instructional strategies that they judged were the most likely to improve student achievement across all content areas and across all grade levels. These are explained in the book Classroom Instruction That Works and include:

1. Identifying similarities and differences;
2. Summarising and note taking;
3. Reinforcing effort and providing recognition;
4. Homework and practice;
5. Non-linguistic representations;
6. Cooperative learning;
7. Setting objectives and providing feedback;
8. Generating and testing hypotheses;
9. Cues, questions and advance organisers.

Geoff Petty (http://geoffpetty.com)

Geoff Petty is the author of a number of publications aimed at increasing evidence-based teaching to improve outcomes for pupils. He suggests there are two ways to increase the impact teaching can have on outcomes:

1) Using self-assessment to audit personal teaching strengths and weaknesses and to work on these. Petty’s website allows access to useful tools to support teachers with this. These include a self-assessment grid that encourages teachers to reflect upon a variety of standards for excellence, e.g. goals and purpose, planning, content and presentation, atmosphere and relationships, the student experience, resources, and achievement of objectives. In addition there are teacher and student questions and a proforma to support reflection.

2) Identifying the main factors that make the biggest difference to student learning (the focus of this website). Research that supports John Hattie's findings indicates that these include:

  • Active Learning - set students challenging activities so they apply and check their learning
  • Feedback - learners need information on what they do well and how to improve; then they need to act on this. Teachers can give feedback, but students can too, with self-assessment and peer assessment. The best feedback (or formative assessment) uses students' work to diagnose strengths and weaknesses and to set individual targets for improvement.

Sutton Trust - EEF Teaching & Learning Toolkit (http://educationendowmentfoundation.org.uk/toolkit/)

The Sutton Trust-EEF Teaching and Learning Toolkit was originally produced as the ‘Pupil Premium Toolkit’ by Durham University in May 2011, by Professor Steve Higgins et al. It aims to be an accessible summary of educational research that provides guidance for teachers on action they can instigate to improve the attainment of disadvantaged pupils. The Toolkit examines thirty different aspects and provides a summary of their average impact on attainment (in months of additional progress), the strength of the evidence supporting them, and their cost. The major contributors to learning success were:

  • Feedback (8 mths)
  • Metacognition and self-regulation (8 mths)
  • Peer tutoring (6 mths)
  • Early Years interventions (6 mths)
  • One to one tuition (5 mths)
  • Homework (secondary) (5 mths)
  • Collaborative learning (4 mths)
  • Phonics (4 mths)
  • Small group tuition (4 mths)
  • Behaviour interventions (4 mths)
  • Digital technology (4 mths)
  • Social and emotional aspects of learning (4 mths)

Institute for Effective Education (http://www.york.ac.uk/iee/index.htm)

The IEE based at the University of York conducts rigorous evaluations of education programmes and practice and its website allows access to up to date information in this field.

The Evidence for Policy & Practice Info. & Co-ordinating Centre (EPPI) (http://www.eppi.ioe.ac.uk)

The Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) is part of the Social Science Research Unit at the UCL Institute of Education. The EPPI-Centre is committed to informing policy and professional practice with sound evidence. As such, it is involved in two main areas of work:

1) Systematic reviews: This includes developing methods for systematic reviews and research syntheses, conducting reviews, supporting others to undertake reviews, and providing guidance and training in this area.

2) Research use: This includes studying the use/non-use of research evidence in personal, practice and political decision-making, supporting those who wish to find and use research to help solve problems, and providing guidance and training in this area.

Working Out What Works (researchED) (http://www.workingoutwhatworks.com)

Welcome to researchED, the online home for anyone interested in educational research, what it means, and how it can - or can't - make a difference in the classroom.

researchED is a grass-roots, teacher-led organisation aimed at improving research literacy in the educational communities, dismantling myths in education, getting the best research where it is needed most, and providing a platform for educators, academics, and all other parties to meet and discuss what does and doesn't work in the great project of raising our children.

The Education Datalab

Education Datalab brings together an expert team of academics, researchers and statisticians specialising in the analysis of large-scale administrative and survey datasets. It produces independent, cutting-edge research that can be used by policy makers to inform education policy, and by schools to improve practice. It works collaboratively with research partners and makes sure that its published research is accessible to policy makers and schools.

What do you think?

So, what do you think?

  • Who should be telling you how to teach in your classroom?
  • What is the range of evidence that you should be drawing on?
  • How should you be engaged in contributing to this evidence?
  • Are your pupils unique or can they be treated as part of a greater whole?
  • Do you know your pupils best or are they just part of a greater trend?
Reading / Resources
  • See the resources pages on research sources - go to page
  • Coe, R (2013) Improving Education: A triumph of hope over experience - download
  • Coe, R (2014) What makes great teaching: A review of the evidence - download
  • Bald, J (2010) Randomised Controlled Trials - Technical Paper - link
  • Baum, S (2012) Evidence-based medicine: what's the evidence? - download
  • Biesta, G (2007) A critique of the nature of positivism: “Why ‘What Works’ Won’t Work: Evidence-Based Practice and the Democratic Deficit in Educational Research” - download
  • Coe, R (1999) Manifesto for evidence based education - download
  • Goldacre, B (2013) Building Evidence into Education for the DfE - download
  • Goldacre, B (2013) Teachers need to drive the research agenda in the Guardian online (see the comments for a very good discussion of the issues around RCTs) - link
  • Furedi, F (2013) A critique of "what works" research by - link

For research methods some starting places are:

  • Mertens, D.M. (2005). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. (2nd ed.) Thousand Oaks: Sage
  • Cohen, L., & Manion, L. (2011). Research methods in education. (7th ed.) London: Routledge.
  • Creswell, J.W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. (2nd ed.) Thousand Oaks: Sage.
  • Somekh, B., & Lewin, C. (2005). Research methods in social sciences. London: Sage