“When… a military leader takes upon himself the responsibility for an attack… he chooses to do it… no doubt he acts under higher command, but its orders, which are more general, require interpretation… action presupposes that there is a plurality of possibilities.”
Sartre, ‘Existentialism and Humanism’ p.35.
Of all the stock phrases used by politicians to justify policies, “that’s what the evidence tells us” is one of the most over-used and disingenuous. It shifts responsibility from the policy maker to abstract “evidence” and suggests that there is no longer any judgment to be made or criticised. It presents policies as if spat out by a black box: evidence goes in at one end and objective decisions come out at the other. The problem is that there is no such thing as “the evidence.” There is no self-contained and consistent body of information that hangs there, ripe for plucking and incontrovertibly pointing the way to particular policies. On top of that, when voters elect politicians they are deciding who they think is most likely to make the best decisions. Quoting “evidence” does not absolve politicians of responsibility for their judgments.
The “evidence gambit” is one of Michael Gove’s particular favourites. The DfE website hosts seven of his 2012 speeches and together they contain a total of 12 references to “evidence.” His (and other policy makers’) use of “evidence” exhibits three main flaws: construct validity, selectiveness, and applicability.
When Gove declares that x is done by “the world’s most successful education systems”, it is worth pausing to ask what a “successful education system” is and to what extent it correlates with what most of us would want for our children from a good education. For Michael Gove, countries which provide a good education are those in which a sample of 15-year-olds get correct answers in three exams (in their national language, Maths and Science): in other words, PISA. If we compare that to what parents actually want from education and how parents decide which schools to send their children to*, Gove’s conception of a successful education hardly seems equivalent. Ask yourself: would knowing that your children would get correct answers in three exams at age 15 be enough to convince you a school would be good for your child? If so, why do schools hold open days and write brochures, and why do we fund Ofsted? Learning from countries that do well in PISA requires a judgment that these are genuinely the best education systems. Unless it can be shown that parents would like their children to be educated in the schools of South Korea, Finland, Shanghai or Singapore, there is a missing step in arguing that “autonomy-driven improvement is solidly backed by rigorous international evidence” (Gove speech, 4th January 2012).
Secondly, a whole range of inputs goes into an education system, and a whole range of information is available about “successful” countries. The use of evidence is therefore inherently selective: policy makers make judgments about which inputs are causing success there, and about which would lead to the same success here.
Gove has argued for the Academies programme on the grounds that it “is not about ideology. It’s an evidence-based, practical solution” (Gove speech, 4th January 2012), but decisions as to what evidence is relevant are inevitably political and, as LKMco policy development partner Laura McInerney has argued, evidence is frequently used as post-hoc justification for what policy makers want to do rather than as a guiding beacon. For example, the recent national curriculum review recommended the use of “educational aims” because it “is well documented in international evidence”. However, although all the highest-performing countries have a compulsory Citizenship curriculum, it did not recommend one. Similarly, when the government decided that funding for teacher training should be withdrawn from graduates with less than a 2:2, Finland’s system, in which only top graduates become teachers, was very useful evidence. However, when the Ofsted framework was made more demanding, the Finnish system, which does not involve inspection, was a rather less convenient piece of evidence. In “The Curious Case of PISA and Suicide” I take this problem to its extreme and show that not a single country scored higher on PISA without also having a higher suicide rate.
Policy makers also exercise judgment in selecting what types of information count as evidence. For example:
“Bennett (1996) found class size to be a paramount concern among parents. It was found that three-quarters of parents knew exactly how many children were in their child’s class and of these over 60% felt that this was too many. Over 96% of parents believed that the number of children in a class affects the quality of teaching and learning” (DfE 2011)
Indeed, class size is one of the reasons many parents pay for private schools (Ipsos MORI 2008). Teachers and pupils also describe class size as an extremely important factor in the quality of their education. However, because regression analysis of large data sets, international comparisons and randomised controlled trials tell us it is not an important factor in attainment, we are told that “the evidence” shows class size should not be a priority. Of course, there are good reasons for saying that large-scale quantitative studies do offer better evidence for what affects educational (exam) attainment, but again we run into the question of construct validity between exam attainment and “quality of education”, and we see that it is policy makers’ judgment that has determined what types of information count as evidence. These judgments may be correct, but they have not been made explicit. Plenty of parents and voters would surely take the view that teachers’, parents’ and pupils’ views count as some sort of evidence for what makes a good education.
Finally, there is the question of applicability. In ‘World class schools’ – noble aspiration or globalised hokum? (Compare, Vol. 40, No. 6, December 2010), Professor Robin Alexander points out that Singapore has 356 schools compared to our 20,000 and suggests that simply replicating what is done there is not a very good use of evidence. Much has also been written about the relative homogeneity of Finnish society and the differences between learning to read a largely phonetically spelled language and learning to read English. Again, we see that it is nonsense to pretend that “evidence-based policy” is merely a case of taking what happens in these countries and spitting it out as incontrovertibly right because “the evidence backs me up” (Gove speech, 24th March).
For Sartre, people exhibit bad faith when they deny their innate freedom and divest themselves of responsibility. If we accept policies as the output of an automatic machine that converts evidence into decisions, we risk allowing our politicians to get away with ideological decision making without the scrutiny it deserves.
Enjoyable blog. I agree with your argument. I also happened to have read that Sartre book at university a long time ago!
It is also worth remembering that Gove was criticised by academics for highly questionable use of statistics and research in a paper he co-produced for Policy Exchange. I came across a reference to this while researching a paper but can’t remember the reference – I think it was a Stephen Ball article.
I was part of a team that wrote a report on how education policy is formed, and came across many of these problems/issues (http://www.cfbt.com/evidenceforeducation/our_research/evidence_for_government/funding_education_in_england.aspx). Also, when beating up the system, it’s always useful to find selective comparisons. Geoff Stanton points out that we compare our vocational system with Germany, and our university access with the USA: oh, we’re such crap! But if we compared our university access with Germany, and our vocational training with the USA, we are actually not bad at all.