When Accountability Measures Eat Themselves: League Tables & Measuring Performance for a Range of Abilities
9th January 2012
Rebecca Allen and Simon Burgess wrote an excellent paper in 2010 showing that the five A*-C school pass rate is not enough information to know how a school will provide for your own child. If your offspring is precocious and enters secondary school with an above-average level of knowledge, you might expect them to go on to achieve all As – but picking the school with the highest pass rate could conceal the fact that no students at all achieved A grades in that school. In that case, you don’t want to send them there. Equally, if your child is statistically most likely to get Ds given their KS2 performance, the 5 GCSE ‘pass rate’ isn’t telling you how likely your child is to secure those Ds as opposed to Fs or Gs.
One of the metrics Allen & Burgess suggested as more useful for parent choice is a ‘percentile’ measure. Each school would report the average results for its students who entered with KS2 scores in the 20-30% range (low), the 45-55% range (medium) and the 70-80% range (high). The results would be reported as grades so parents can easily compare. For example:
Low: EEDDD
Medium: CCCCCCC
High: BBBBAAAAAA
Publishing this information in the league tables could therefore be considered a way of ensuring all schools support and challenge students across the ability range.
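To make the banding concrete, here is a minimal sketch of how the percentile measure described above could be computed. The pupil data, band cut-offs and grade strings are invented for illustration; only the 20-30 / 45-55 / 70-80 bands come from the paper.

```python
def band_results(pupils):
    """Group pupils into low/medium/high KS2-percentile bands and
    collect their GCSE grade strings so each band can be reported."""
    bands = {"low": (20, 30), "medium": (45, 55), "high": (70, 80)}
    report = {name: [] for name in bands}
    for ks2_percentile, grades in pupils:
        for name, (lo, hi) in bands.items():
            if lo <= ks2_percentile <= hi:
                report[name].append(grades)
    return report

# Invented pupils: (KS2 percentile on entry, GCSE grades achieved)
pupils = [
    (25, "EEDDD"),       # low band
    (50, "CCCCCCC"),     # medium band
    (75, "BBBBAAAAAA"),  # high band
    (60, "CCCBB"),       # outside all three bands, so not reported
]
print(band_results(pupils))
```

Note that pupils between the bands (like the 60th-percentile one above) simply drop out of the measure – the bands are deliberately narrow so each school is compared on similar intakes.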
BUT, BE WARNED! The DfE measure does not follow this advice. The Statement of Intent (p.4) shows that ‘Low’ will be pupils below Level 4 at KS2 (the average score), ‘Middle’ will be pupils at Level 4, and ‘High’ will be pupils above Level 4.
MEDIA HOUNDS TAKE NOTE: This change to the measure means it cannot be used to compare across schools. ‘Low’ at one school could include a much higher number of students entering on Levels 1/2, whereas another school might only have students on Level 3 in that category. Comparing scores for ‘low scoring’ students when those students have such significantly different entry scores is mathematical gibberish.
EVEN MORE IMPORTANT: The thing no-one has mentioned is that Allen & Burgess found that making a school choice based on these ‘ability pass rates’ didn’t actually help parents make better choices. In fact, in the case of bright kids it led to worse decisions.
Allen & Burgess also note: “.. (T)he best performance information is only slightly more useful in school choice than a school’s composition, measured by the average prior attainment of pupils entering the school”. That is, the best indicator of how well a kid will do when they leave school is how many bright kids they entered with. Allen & Burgess say this is not because of a ‘peer group effect’ but because schools with high scoring kids tend to have the most resources, the most voluntary help, more stable teacher turnover, and more experienced higher-quality teachers.
The measure will therefore not support parents in their decision-making at all. The only reason it is being added is to ensure schools focus on students of all abilities – which is definitely something they should do. But this is a measure being introduced to deal with the perverse effects of another measure. Does anyone else get the feeling this could go on indefinitely?!
Very interesting. I just have a couple of questions.
“making a school choice based on these ‘ability pass rates’ ….., in the case of bright kids it led to worse decisions.”
Could you explain why?
“Allen & Burgess say this is not because of a ‘peer group effect’ but because schools with high scoring kids tend to have the most resources, the most voluntary help, more stable teacher turnover, and more experienced higher-quality teachers.”
Is this because these pupils will tend to be middle-class? Or at least have aspirational parents?
Thank you.
Re: bright students being slightly worse off. I can explain the stats though not precisely why this group rather than another was affected.
The stats Allen & Burgess ran looked at the data a student entering in, say, November 2001 would have had available, and then compared their likely outcome against the *actual* school results in July 2007 (when that student would have left secondary school). School performances fluctuate, and they did so across the schools selected in a way that meant a parent of a bright child who picked the school with the most suitable score in 2001 would actually have found that by 2007 a *different* school was getting better results for its brightest students.
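A toy version of that backtest, with invented school names and scores, shows the problem: the parent can only choose on the results visible at entry, but what matters is the results in the year their child sits the exams.

```python
# Invented data: each school's bright-pupil score visible when
# choosing in 2001, and its actual bright-pupil score in 2007.
bright_pupil_scores = {
    "School A": (68, 58),  # looked best in 2001, slipped by 2007
    "School B": (61, 66),  # looked worse in 2001, improved by 2007
}

# The parent picks on the 2001 figure; hindsight picks on the 2007 one.
chosen = max(bright_pupil_scores, key=lambda s: bright_pupil_scores[s][0])
best_in_2007 = max(bright_pupil_scores, key=lambda s: bright_pupil_scores[s][1])
print("Chosen in 2001:", chosen)        # School A
print("Best by 2007: ", best_in_2007)   # School B
```

The numbers here are made up, but the pattern is the one the paper found: year-to-year fluctuation is large enough that the 2001 ranking is a poor guide to the 2007 one.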
Why?
As for the school with the brighter kids having more voluntary help, stable teacher turnover and higher-quality teachers: this is simply to do with where professionals generally prefer to work – especially for long periods of time. This might be because those students are more willing to learn, so the teacher is able to feel more successful, but many other factors are considered in the teacher retention/recruitment literature too. One thing is known: schools where students have lower abilities and higher levels of deprivation have much higher rates of teacher turnover (which affects quality through various mechanisms).
Thanks Laura.
So, a large bright cohort brings stability and resources to the whole school. But their achievements are not a reliable indicator of how a future bright pupil will progress? If I’ve got that right, it seems a little odd.
Looking at this from a parent’s perspective, league tables are only part of the decision process. Certainly I would want to see results from several successive years and put this in the context of other factors. I would be looking for a reasonable score, rather than just selecting ‘the best’. I doubt many parents would make a judgement based on just comparing one year’s results across schools.
I think the point is that previous pupils’ *progress* is not a good predictor of future pupils’ progress but that composition of the school is (because of what it tells you about school resourcing). Is that right Laura?
I guess my [admittedly obscure] point is the two should be related. A significant intake of bright pupils should (on average) lead to better results for the bright cohort (due to the better resources, teachers…). It seems worrying if not.
I understand the point that making the decision on one data-point is risky (mainly because the sample size is too small). But I doubt many parents do this.
Yes, I agree that more bright pupils should lead to strong results for a bright cohort. The issue is which measure has *more* predictive validity – the score of the bright pupils, or the number of them in a school.
Allen & Burgess suggest the number of them in a school is the better number to use, as it is more stable. You could have a school where a very small number of bright pupils do extremely well, but that number fluctuates more, and those few pupils don’t affect the other factors (teacher turnover, etc), so a school whose few bright pupils do well one year is still vulnerable to declining performance going forward. Whereas if you choose a school with a LOT of bright students, those tend to have a more stable pattern of exam results.
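The stability argument is just the statistics of small samples: an average over five pupils swings around far more from year to year than an average over a hundred. A minimal simulation, with entirely invented grade points and cohort sizes, makes the point:

```python
import random

random.seed(1)

def yearly_average(cohort_size):
    """One exam year's average for the bright cohort: each pupil scores
    a 'true' level of 7 plus individual noise (invented scale)."""
    return sum(7 + random.gauss(0, 1) for _ in range(cohort_size)) / cohort_size

def spread(cohort_size, years=200):
    """How far apart the best and worst year-to-year averages are."""
    averages = [yearly_average(cohort_size) for _ in range(years)]
    return max(averages) - min(averages)

print("5 bright pupils, year-to-year spread:  ", round(spread(5), 2))
print("100 bright pupils, year-to-year spread:", round(spread(100), 2))
```

The small cohort's yearly average ranges much more widely than the large cohort's, which is why the *size* of the bright intake predicts future results better than one year's score for a handful of bright pupils.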
Hi Laura
Great blog again.
One question that has sprung to mind when looking at these: high achievers will have gained a Level 5 at primary school. Currently, those at the top will have averaged 5A, but based on the current progress measures this equates to a B. From my point of view this means that a grammar school could have a cohort that all achieve 5 B grades, massively underachieving for their ability, yet the school will have 100% in this measure.
I was wondering about this too. The measure seems to be what % make ‘expected progress’, so I wonder if it is based on the current progress measure (e.g. 5A to a B for each subject), but there isn’t much of a breakdown on it. In fact, when you look at the list of new measures in full (p.10/11 of the Statement of Intent) there are quite a few where I am scratching my head to know how they will be calculated!