In my Aug. 26 column, I pointed out that B.C.’s school system ranks first in the country for performance. That claim was based on standardized test scores published by the Organization for Economic Co-operation and Development.
In response, several readers sent me strongly worded emails. Many of them, I suspect, are teachers, given their in-depth knowledge of the subject. And all raised the same issue, namely that standardized tests are a lousy measure of school performance. One reader thought any such exams pointless, because the quality of education is unmeasurable.
All views are welcome, and while I always answer emails, I usually let it go at that. But this is a hugely important issue that deserves a closer look.
The argument against using test results as a measure of school performance is straightforward. It’s well known that socio-economic factors affect the rate and extent of student learning. Kids from rich neighbourhoods, on average, do better than kids from poor neighbourhoods.
Why, then, should school systems be evaluated using such a demonstrably weak assessment tool? That’s one reason most teachers and administrators reject the Fraser Institute’s annual school ranking.
While the institute has in the past incorporated average parental income as one measure of socio-economic status, it ignores many others. These include important features of a school's catchment area such as high levels of unemployment, prevalent drug abuse, numerous single-parent families, large immigrant populations from non-English-speaking countries, and so on.
So yes, raw test results that fail to take account of community well-being are a nearly useless measure of school performance. Many educators — both teachers and administrators — take that to be the end of the discussion.
But it is not the end of the discussion; it is merely the beginning. For standardized exam results can be used as a reliable indicator of school performance, if they are handled properly.
In many respects, the world of K-12 education mirrors the world of hospital care. Here too, socio-economic factors affect outcomes. Thus, if a hospital is located in a low-income neighbourhood, in an area with a larger than usual population of elderly residents, or in a community with high levels of drug abuse, the results it gets will suffer.
Let’s take a specific example. One measure of hospital performance is how many patients admitted with a heart attack survive for 30 days.
If a disproportionate number of those patients have other problems as well, such as diabetes (linked to obesity and poor diet), hepatitis (linked to drug abuse) or depleted immune systems (linked to old age), more of them will die. Is that the fault of the hospital? Clearly not.
But there is a way of dealing with this. The technical term is multivariate analysis, but all it means is that you take these external factors into account when ranking a facility’s performance.
So let’s say your 30-day heart attack survival rate is 70 per cent — a very poor score. If you have an abnormally challenging patient group, however, that score is adjusted upward. Likewise, an apparently brilliant score might be lowered if your patients are all young, fit and healthy.
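For readers who want to see the arithmetic, here is a minimal sketch of how such an adjustment can work. It deliberately simplifies the method: instead of a full multivariate model of the kind agencies actually run, it uses a single expected-survival baseline for each patient group, and every number in it is invented for illustration.

```python
# A simplified illustration of risk-adjusting a hospital's 30-day
# heart-attack survival rate. The idea: compare what a hospital
# achieved with what would be expected given the patients it
# actually treated. All figures below are hypothetical.

# Baseline survival rates by patient risk group (invented numbers).
BASELINE_SURVIVAL = {
    "low_risk": 0.95,       # young, fit and healthy patients
    "diabetic": 0.82,       # complications linked to obesity, poor diet
    "hepatitis": 0.78,      # complications linked to drug abuse
    "elderly_frail": 0.65,  # depleted immune systems, old age
}

def risk_adjusted_rate(patients, overall_average=0.85):
    """Return (observed, expected, adjusted) survival rates.

    `patients` is a list of (risk_group, survived) tuples.
    The adjusted rate scales the system-wide average by the ratio of
    observed to expected survival, so a hospital with a challenging
    caseload is marked up and one with an easy caseload is marked down.
    """
    observed = sum(survived for _, survived in patients) / len(patients)
    expected = sum(BASELINE_SURVIVAL[group] for group, _ in patients) / len(patients)
    adjusted = overall_average * (observed / expected)
    return observed, expected, adjusted

# A hospital with an unusually challenging patient mix (100 patients):
caseload = (
    [("elderly_frail", True)] * 40 + [("elderly_frail", False)] * 20 +
    [("diabetic", True)] * 25 + [("diabetic", False)] * 5 +
    [("low_risk", True)] * 10
)

obs, exp, adj = risk_adjusted_rate(caseload)
print(f"Observed survival: {obs:.0%}")   # raw score: looks poor
print(f"Expected survival: {exp:.0%}")   # what this caseload predicts
print(f"Adjusted survival: {adj:.0%}")   # credit for a hard caseload
```

Run on this made-up caseload, the raw survival rate of 75 per cent is adjusted upward to roughly 87 per cent, because the hospital still beat what its difficult patient mix would predict. The same logic, applied in the other direction, would pull down a flattering raw score earned on an easy caseload.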
The Canadian Institute for Health Information conducts this analysis on nearly every hospital in Canada, across a far wider range of outcomes than anything educators employ. If it works for hospitals, there is no reason why the same approach cannot be used in evaluating schools.
The only obstacle is stubborn resistance within the education community. But while that might be an explanation, it is not an excuse.
No public service should expect to operate without effective oversight, and that includes objective, statistically reliable performance measurement. It is long past time our public school system took this lesson to heart.