Tuesday, April 22, 2014

Fraser Institute: A Look at the Standings from the Numerically Changeable Top

Saying the name of this organization to teachers or administrators of schools in BC will prompt responses: some will be positive, and some will be the kinds of words I talked about in my last post. The school I work at happens to be at the top of the pile of high schools in our city this past year (1st of the 12 high schools and 6th of the 30 elementary schools) and, yes, we are happy (we'll take it). However, what strikes me about our standing is how easily it could have been different. It also calls into question how accurate the ranking is.

Our school is small: the n of students used by the Fraser Institute was 19 for the high school. Our mark was 7.4 (out of 10). Again, this doesn't speak well for the schools in our area as a whole when we, the highest-ranked high school this year, only received the equivalent of 74%. For the Grade 4 (elementary) results, the n was even smaller (13), and we received a 6.9 rating (6th out of the 30 elementary schools in our area, which is good).

Interestingly, of the top 10 ranked high schools, only 3 had more than 100 graduating students. Does this suggest that smaller schools do better? Let's go one step further: do better at what, exactly?

My initial thought on working in a small school under the Fraser Institute ranking system is this: numerically, when you have only 16 students, as we did, a few students who for whatever reason did not do well on their exams will bring the Institute's score down, perhaps by quite a large amount. In fact, the Fraser Institute's own "How To Read The Ratings" help file has this to say: "Indicator results for small schools tend to be more variable than do those for larger schools and caution should be used in interpreting the results for smaller schools." And yet, overall, it is the smaller private schools in BC that seem to consistently do better in this form of rating than the larger public schools.
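To get a feel for what that caution means, here is a rough simulation, not the Institute's method and with made-up numbers: draw every student's exam mark from the same distribution and watch how much more a 16-student school's average bounces around from year to year than a 200-student school's.

```python
import random

random.seed(1)

def simulated_average_marks(n_students, n_years=1000):
    """Simulate the year-to-year spread of a school's average exam mark,
    assuming every student's mark is drawn from the same distribution
    (mean 67, sd 12, clipped to 0-100). Hypothetical numbers only."""
    yearly_averages = []
    for _ in range(n_years):
        marks = [min(100, max(0, random.gauss(67, 12))) for _ in range(n_students)]
        yearly_averages.append(sum(marks) / n_students)
    return yearly_averages

for n in (16, 200):
    avgs = simulated_average_marks(n)
    mean = sum(avgs) / len(avgs)
    sd = (sum((a - mean) ** 2 for a in avgs) / len(avgs)) ** 0.5
    print(f"{n:>3} students: average mark swings by roughly +/- {sd:.1f} points year to year")
```

The point is simply that with identical students, the small school's indicator moves several times further from one cohort to the next, which is exactly the variability the help file warns about.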

The Fraser Institute uses several indicators to come up with a total for a school: "The Overall rating out of 10 takes into account the indicators a) average exam mark, b) percentage of exams failed, c) school vs exam mark difference, d) gender gaps, e) graduation rate and f) delayed advancement rate." According to Sridhar Mutyala, in his post entitled The Real Problem With The Fraser High School Rankings – Part 1, the weighting of each indicator is as follows (a rough sketch of how these weights combine appears after the list):
  • average exam mark—20%
  • percentage of exams failed—20%
  • school vs exam mark—10%
  • English gender gap—5%
  • Math gender gap—5% 
  • courses taken per student—20% (this indicator seems to have been dropped this year, so the rest of the weighting has shifted a bit for 2013)
  • diploma completion rate—10%
  • delayed advancement rate—10%.
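For what it's worth, here is a back-of-the-envelope sketch of how such a weighted overall rating gets assembled. The weights are the ones Mr. Mutyala lists; how each raw indicator is converted to a 0-10 sub-score is my own assumption rather than the Fraser Institute's published method, and the example numbers are invented.

```python
# A rough sketch of combining the indicators listed above into one rating.
# Weights are from Mr. Mutyala's post; the conversion of each raw indicator
# to a 0-10 sub-score is assumed, not the Fraser Institute's actual method.

WEIGHTS = {
    "average_exam_mark":        0.20,
    "pct_exams_failed":         0.20,
    "school_vs_exam_gap":       0.10,
    "english_gender_gap":       0.05,
    "math_gender_gap":          0.05,
    "courses_per_student":      0.20,   # reportedly dropped for 2013
    "diploma_completion_rate":  0.10,
    "delayed_advancement_rate": 0.10,
}

def overall_rating(sub_scores):
    """Combine per-indicator sub-scores (each already on a 0-10 scale,
    higher = better) into one weighted rating out of 10."""
    return sum(WEIGHTS[name] * score for name, score in sub_scores.items())

# Hypothetical small school: strong marks overall, middling everything else.
example = {
    "average_exam_mark": 7.5, "pct_exams_failed": 9.0,
    "school_vs_exam_gap": 7.0, "english_gender_gap": 8.0,
    "math_gender_gap": 8.0, "courses_per_student": 7.0,
    "diploma_completion_rate": 9.4, "delayed_advancement_rate": 8.0,
}
print(f"Overall rating: {overall_rating(example):.1f} / 10")
```

Even in this toy version, you can see that a single indicator swinging by a point or two moves the overall rating noticeably, which matters when the indicators themselves are computed from a handful of students.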

I would encourage anyone to read Mr. Mutyala's post, as he does a great job of examining the statistical issues with the ranking and the indicators used by the Fraser Institute. However, to speed things up a bit, here is a synopsis: the issue Mr. Mutyala has with some of these indicators, and I tend to agree with him, is that many of them have very little to do with how well a school educates its students (and here I will not even go into whether these indicators have anything to do with how a student is "educated"). So why were these indicators chosen? According to the Fraser Institute, they cover the following areas: three indicators deal with effective teaching, some show consistency in marking (and gender gaps), and others show practical, well-informed counselling.

However, I come back to the notion that small changes in one or more indicators can make large differences in the overall mark. To use our school as an example, when one student out of 16 fails an exam, the school's overall mark drops dramatically. Likewise, if a Grade 12 student drops out of school (and there can be various reasons for this that are not controllable in any way by the school itself), it can heavily affect a school's overall rating.
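The arithmetic behind that is simple: one failed exam or one dropout is a far bigger slice of a 16-student cohort than of a larger one (the 200 below is just a number I picked for contrast).

```python
# One student's bad day, as a share of the "percentage of exams failed"
# and graduation-rate indicators. Purely illustrative arithmetic,
# not the Institute's calculation.
for cohort in (16, 200):
    print(f"cohort of {cohort:>3}: one failed exam = {1 / cohort:6.2%} failure rate, "
          f"one dropout leaves a graduation rate of {(cohort - 1) / cohort:6.2%}")
```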

Notice, however, that small schools seem to do better on the Fraser Institute report. In fact, if all of a small school's students graduate and do well on their exams, it will rank high on the report, even higher than a larger school in the same scenario. According to Mr. Mutyala, the mix of variables and the "ad hoc weighting" of the indicators by the Fraser Institute (how did they come up with those percentages?) actually favours small schools over larger schools. This trend is easy to see when you compare the Fraser Institute's rankings to raw test scores.

The question this raises is: are exam scores an acceptable way of ranking a school? I suggest not, and arguments could be made here about the pros and cons of standardized testing as the sole indicator of education, but that topic is much too large to broach here.

With all of this in mind, I conclude the following: the Fraser Institute ranking should not be the sole factor on which families base their choice of school for their child. Instead, parents need to look at many factors, including school demographics, ethics, and beliefs (what is important, what is not), and begin there: how do you want your child to be educated (what is an "educated" student in your eyes)? Granted, parents should keep an eye on academics, and yes, this is important. The schools themselves (teachers and administrators) should always look at their students' achievement scores and respond accordingly, but there are many factors that will need to be addressed on a per-student basis to help each student reach their academic potential.

Let me also add this: perhaps students themselves should become more involved in their own education, and schools should instead be only the place where they go to work through their educational goals, not a place that is responsible for how they did.
