Level Playing Field – the Problem with Levels
As we move to the new primary maths curriculum, the old NC levels no longer apply, but the assessment arrangements that will replace them have not yet been finalised.
I suspect many primary schools will continue to use the levels to track children across the school or key stage. Whilst I agree that it's important to track children's progress in some way over time, over the last few years I have had increasing misgivings about the way the NC levels are used to do this. I feel there are several problems with this approach, and I'd like to outline some of them here.
How we assess
QCA tests are commonly used to assess children at the end of a school year, and often at other points in the year as well; other schools use commercially published assessments. My problem with these is that, in my experience, they are not reliable. As a maths leader, I got to know the year groups where we could expect rapid progress (according to the tests) and those where progress would be much slower, and this often remained the same year on year, regardless of which teachers were in those year groups. As a class teacher, I got to know the tests that were likely to make my children look good and those that weren't. I like to think that I bore this in mind when using my teacher judgement to moderate the results from the tests, but see my later point about performance management.
To be fair, most schools don't rely completely on test results; teacher judgement is used as well. The problem with this is that it takes a lot of experience to really know inside out what children ought to be able to do to achieve a particular level, let alone which sub-level to give. In theory, APP should have helped with this, but most systems are cumbersome and often wrongly used: just because I can find evidence in a child's book that he or she has been adding 3-digit numbers does not mean that the child is secure in this.
Who does the assessment
In most cases, assessment is done by the class or group teacher. In theory, this is great: they are the person best placed to know what a child can really do, and to recognise, for example, that a bad performance on a test is not typical. They are in a position to see which aspects of maths children are secure in and which they need to revisit. However, in many cases the progress of children in a teacher's class or group contributes to their performance management targets, and now, with performance-related pay, the stakes are even higher. Added to this, in the current educational climate, is the ever-present threat of capability procedures for those whose children's progress dips. I do believe that most teachers try to act with integrity, but the high stakes attached to progress put huge pressure on them to report optimistically. Unfortunately, too, there are definitely teachers who knowingly play the system. In one school I worked in, a recently appointed class teacher discovered from his TA that the previous teacher had always gone over the tests with the children just before they took them. The poor TA, herself fairly new, had assumed this was common practice!
The precision to which we track
When levels were first introduced, they were meant to give an overview of what should be expected of average children at certain stages, so the components of level 2 were those which an average 7-year-old would be able to do. (Later these average expectations somehow became minimum expectations, but that's a whole other blog!) However, this made tracking progress across key stages difficult, because children would typically take two years to move up a whole level. So sub-levels and APS points were introduced. Many primary schools now use these APS points to track progress termly, but levels were never intended to track progress at this degree of precision. We simply can't measure progress precisely in the way that we measure, say, length.

There may be some justification for comparing the progress of different cohorts from the end of KS1 to the end of KS2, because here at least we are broadly comparing progress with similar start and end points. But comparing how much progress one set of children have made in a single term of Year 2 against the progress another set have made in the same term of Year 3 is, in my opinion, just not valid. When I was in Year 6, I knew that if children came up to me at the start of the year with a 3A, I had a fighting chance of getting them to level 5 by May: progress of at least 4 sub-levels. It would be very unusual for a Year 3 teacher to move a child from a 1A at the start of the year to level 3 by the end of it, and I would imagine it happens only very rarely. Yet we make judgements about teachers by comparing situations like this. From experience, I feel that level 2 is probably the level that takes the longest to move through: the jump from a 2C to a 3C in terms of conceptual understanding and skills is huge. Is it any wonder, then, that there is typically a dip in progress in Year 3, where the majority of children will be in the process of moving through level 2?
Life after Levels
I'm aware that I'm putting forward lots of problems with tracking progress using levels without really suggesting a solution. I'd suggest that any way of tracking progress term by term, or even year by year, is bound to be fraught with problems. Some have suggested that with the new curriculum we use a system similar to that currently used in the Early Years, where children are judged to be Emerging, Expected or Exceeding against the expected standards for each year group, but many of the problems outlined above would probably still apply. All I would urge is that any system of tracking progress using data is treated with great caution. Removing the high stakes involved might also help teachers make more carefully considered judgements.