The New Zealand Initiative recently commented on a pilot of new reading, writing and numeracy assessments for NCEA. Just a third of students participating in the pilot met the standard for writing. Two thirds met each of the reading and numeracy standards.
Commentators from the education sector were surprisingly relaxed about the findings.
The Ministry of Education asserted that the trial was “small-scale and not representative”. In other words, they’re saying that their own study didn’t include enough students to yield reliable results. They’re also saying that the sample of schools involved wasn’t a good snapshot of New Zealand’s schools in general.
Let’s take these claims one at a time.
The numeracy assessment included 1,055 students. That’s about the same number as in a standard political poll. With a sample that size, we can be confident that, if the assessment were run nationally, between 62% and 68% of students would meet the standard.
The other two assessments were completed by fewer students – 590 in reading and 554 in writing. Even so, that’s enough to give us fairly reliable estimates. Based on the 67% success rate in the reading assessment, between 64% and 71% of students nationally would be expected to succeed. For writing, in which just 35% of students passed the assessment, we’d expect between 31% and 39% nationally to succeed.
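The quoted ranges can be sanity-checked with the standard normal approximation for a 95% confidence interval on a proportion, p ± 1.96√(p(1−p)/n). A sketch follows; the pilot’s exact pass counts aren’t given, so the rounded pass rates from the text are used, and the computed intervals match the quoted ones to within a percentage point.

```python
# Rough check of the margins of error quoted above, using the normal
# approximation for a proportion: p +/- 1.96 * sqrt(p * (1 - p) / n).
# Sample sizes and pass rates are the pilot figures given in the text.
from math import sqrt

pilot = {
    "numeracy": (1055, 0.65),  # 1,055 students, ~65% met the standard
    "reading": (590, 0.67),    # 590 students, 67% met the standard
    "writing": (554, 0.35),    # 554 students, 35% met the standard
}

for skill, (n, p) in pilot.items():
    margin = 1.96 * sqrt(p * (1 - p) / n)
    print(f"{skill}: {100 * (p - margin):.0f}%-{100 * (p + margin):.0f}%")
```

With these inputs the margins of error are roughly ±3 points for numeracy and ±4 points for reading and writing, which is how a sample of a few hundred students can still pin the national picture down fairly tightly.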
The argument that there’s nothing to worry about because the pilot was “small scale” doesn’t wash. Even performance at the upper ends of these margins of error would signal dismal results if the standards were implemented nationally.
That’s assuming, however, that the samples were not biased. This brings us to the Ministry’s second objection to the Initiative’s commentary – their claim that the sample of participating schools was not representative.
Participating schools spanned the decile range. The sample included schools in both urban and rural locations. The Ministry’s report does note that more than a third of the numeracy results came from one high decile school. Students at high decile schools generally do better in assessments than students at low decile schools. If anything, then, that over-representation will have inflated the pilot’s numeracy results.
Of participating students, 18% were Māori, 5% Pacific and 67% European. Compared with the Ministry’s national data on student ethnicity, Māori and Pacific students were somewhat under-represented in the pilot study, and European students correspondingly over-represented. It’s no secret that, regrettably, Māori and Pacific students do less well on average in our education system than European students.
We might conclude that the Ministry was right – the participants in the study were not perfectly representative of the national picture. However, the ethnicity profile of the sample suggests that, if the assessments were implemented nationally, results would be even worse than they were in the pilot.
Pip Tinning, Vice President of the Association of Teachers of English, pointed out that most of the pilot participants were in Years 9 and 10. Students don’t usually commence gaining credits towards NCEA until Year 11. This is the strongest argument that the pilot assessments do not signal as great a catastrophe as the Initiative says they do.
As it happens, though, 90% of the participating students were in Year 10. Research commissioned by the Tertiary Education Commission, published in 2014, suggests that students don’t typically make much progress in literacy or numeracy between Years 8 and 11.
Vaughan Couillault, President of the Secondary Principals Association, came up with an imaginative objection to the Initiative’s commentary. He likened the pilot to road testing a car, as opposed to testing the ability of the driver. He meant that the pilot was about making sure the assessments were of good quality rather than actually testing the students’ literacy and numeracy. His implication is that the pilot did not yield valid results for the participating students.
Couillault is correct that the primary purpose of the pilot was to evaluate the assessment process. However, his comparison with road testing is a false analogy. When cars are road tested, experienced drivers are used. Furthermore, cars are not designed to assess the skills of drivers. To the extent that the pilot showed the assessments to be valid and reliable, the results they yielded for the participating students are meaningful too.
Year 11 students may well do a little better than Year 10 students. It would, however, be most unwise to rely on one year of additional schooling to make a decisive difference to the pilot results. It’s also possible that some students didn’t take the assessment seriously, given that it was a pilot and didn’t have personal consequences for them. But even if all of the success rates came up by ten percentage points, we’d still have a quarter of our young people failing in reading and numeracy, and more than half failing in writing.
What is most disappointing is not the pilot results themselves but that influential spokespeople for leading educational organisations seem so determined to explain them away. It’s not as if there’s no corroborating evidence. Results from international tests and reports from independent education researchers point to ongoing decline in New Zealand's literacy and numeracy education.
Rather than flailing about trying to defend the indefensible, we need an urgent review of the way in which literacy and numeracy are taught in primary schools. Until our educational ostriches pull their heads out of the sand, our young people will continue to be sold short.