The Governor’s statement includes some actual statistics from the report. However, nowhere in the DESE’s press release does the College Board’s disclaimer appear:
Media and others often rank states, districts and schools on the basis of SAT® scores despite repeated warnings that such rankings are invalid. The SAT is a strong indicator of trends in the college-bound population, but it should never be used alone for such comparisons because demographics and other nonschool factors can have a strong effect on scores.
In other words, rankings of schools and states do not mean anything. Why? For one thing, kids who take the SAT are self-selected. For another, we know the results of kids who took the SAT, but we have no way of knowing how representative they are of their state or school.
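To see how much self-selection alone can distort a comparison, here’s a rough back-of-the-envelope simulation. Everything in it is made up for illustration: two hypothetical states, invented ability distributions, and the simplifying assumption that only the strongest students sit for the test when participation is low.

```python
import random

random.seed(1)

def simulate_state(mean_ability, participation):
    """Mean SAT score for a state where only a fraction of students
    (assumed here to be the most college-bound) take the test."""
    students = sorted(random.gauss(mean_ability, 100) for _ in range(10_000))
    takers = students[-int(participation * len(students)):]  # top slice self-selects
    return sum(takers) / len(takers)

# Hypothetical State A: broad participation. Hypothetical State B:
# weaker students overall, but only its top students take the SAT.
state_a = simulate_state(mean_ability=500, participation=0.80)
state_b = simulate_state(mean_ability=480, participation=0.10)

# State B posts the higher average despite weaker students overall.
print(round(state_a), round(state_b))
```

The point isn’t the particular numbers; it’s that a ranking built on self-selected test-takers can invert the underlying reality.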
Perhaps more interesting and NEVER considered in popular accounts of test results is the reliability of the test itself. The SAT is highly reliable for a test, but like the polls we like to analyze here at BMG, it comes with a margin of error. Harvard Professor Daniel Koretz states the issue neatly in his book Measuring Up:
the margin of error on the SAT is not negligible–to have even a 5 percent chance of scoring more than 66 points above or below the true score is not trivial.
In other words, the 600 your kid scored on her SAT math could really reflect a true score anywhere from 534 to 666. How do you think that margin of error relates to aggregate scores for groups? Not at all?
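Koretz’s figure can be sanity-checked with a quick simulation. A 5 percent chance of being more than 66 points off implies a standard error of measurement of roughly 66 / 1.96 ≈ 34 points, assuming (as test-makers typically do) roughly normal measurement error; the score and sample size below are otherwise arbitrary.

```python
import random

random.seed(0)

TRUE_SCORE = 600
SEM = 66 / 1.96  # standard error implied by Koretz's 66-point, 5% figure

# Simulate many observed scores for one student with the same true score.
observed = [random.gauss(TRUE_SCORE, SEM) for _ in range(100_000)]
outside = sum(abs(s - TRUE_SCORE) > 66 for s in observed) / len(observed)
print(f"Share of observed scores more than 66 points off: {outside:.1%}")

# For a group average, purely random error shrinks by the square root
# of the group size: a 1,000-student mean is precise to a couple of
# points. Systematic factors, like who takes the test, do not average
# out this way.
group_moe = 1.96 * SEM / 1000 ** 0.5
print(f"Margin of error for a 1,000-student average: ±{group_moe:.1f} points")
```

So random measurement error matters a great deal for any one kid’s score, while for group averages the bigger worry is the systematic stuff the College Board’s disclaimer warns about.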
If there is a moral to be taken from the College Board’s yearly release of SAT scores, it’s that as a country, we have yet to start talking seriously about what test scores mean and don’t mean.