Not SATisfactory

How the KS2 reading test fails working-class children


Dr Wayne Tennent is a Senior Lecturer at Brunel University, London, England.

The 2023 analysis of Key Stage 2 SATs scores showed a correlation between levels of affluence and success on the tests. Schools in poorer areas did less well, continuing a long-standing trend.

This year, teachers and parents voiced concerns about the perceived unfairness and difficulty of the KS2 Reading test. These concerns centred on the length of the texts used, but some also pointed out that the content reflected middle-class experiences and language.

There are yearly rumblings of discontent, but the TES reported particularly on the notorious 2016 'Dodo' paper, and on the 2011 paper about visiting a cave system in (what appears to be) Derbyshire. Both papers provide examples where the content might not be seen to reflect the lives of working-class children.

This criticism is not new. Christensen (2000) suggested that large-scale reading tests tend to use texts that reflect middle-class experiences. In addition, test designers are likely to be middle class themselves and will write, perhaps unintentionally, in relation to their own cultural background (Harkins and Singer, 2009).

The Department for Education has deflected any criticism of class bias in the test. In 2016, for example, it offered 'no apologies' for the texts used and stated that the curriculum is there to support all children in developing a wider vocabulary to deal with such texts.

This, of course, stops short of denying that the texts used in the Reading SAT reflect middle-class experiences – and it places the onus on schools to resolve the problem, implying that the fault lies with working-class children (and probably their teachers) rather than with the test. The test, however, is flawed in numerous ways:

Flaw 1: The SAT paper does not measure the entire reading curriculum, only those areas which lend themselves to a pencil-and-paper test. As such, using the results of the test – only a partial measure of ability – to assign a child a definitive categorisation such as 'Below Expected' is unfair and an example of overclaiming.

Flaw 2: Final categorisations, such as 'Expected' level, are based on scores. They do not state what the reader can actually do. There is no research evidence outlining age-related expected levels for reading comprehension, and none supporting the labels used.

Flaw 3: There is little value in putting a score to comprehension (Kintsch & Kintsch, 2005). 

The text comprehension process is complex. It involves a number of cognitive, metacognitive and linguistic processes working interdependently with relevant knowledge bases. Researchers in the field acknowledge that no single test can capture this complexity.

Flaw 4: Readers bring their own knowledge and experiences to the reading of a text. We cannot be certain whether a 'correct' answer has come from reading the text or from what was already in the pupil's head. No test of reading comprehension has ever been able to 'control' for this.

The Reading SAT does not measure what it claims to measure, and perhaps it is time to consider alternative approaches. The most obvious of these would be to scrap the test and develop a moderated, teacher-assessed approach (as with KS2 writing).

If the desire is to continue with an end-of-key-stage reading test, then greater thought needs to be given to the content of the texts used. One option would be to provide Year 6 teachers with a theme at the start of the academic year which will eventually be used in the SAT. This would create a more universal knowledge base, and would also mitigate teaching to the test to some extent.

Alternatively, topics should be chosen which a greater number of children are likely to have knowledge about. In the last year, over a million children have been supported by food banks. Perhaps texts about visiting food banks might reveal more about children's actual reading ability than texts about visiting caves in Derbyshire.


Christensen, L. 2000. Reading, Writing, and Rising up: Teaching About Social Justice and the Power of the Written Word. Milwaukee, WI: Rethinking Schools.

Harkins, M. J., and S. Singer. 2009. “The Conundrum of Large Scale Standardized Testing: Making Sure Every Student Counts.” Journal of Thought 44 (1-2): 77–90.

Kintsch, W. and E. Kintsch. 2005. “Comprehension.” In Current Issues in Reading Comprehension and Assessment, edited by S. G. Paris and S.A. Stahl, 71–92. Mahwah, NJ: Lawrence Erlbaum Associates.
