The Editors
I like to think that the biases of reporters don’t directly influence their reporting. I may be wrong about James Vaznis, but I think he’s trying conscientiously to do his job. In the case of his editors, however, I have serious doubts.
I’ve heard that top editors don’t tell their reporters what to write. I believe it. But I also believe that editorial biases trickle down. Editors assign stories, provide sources, suggest story angles. And reporters may read the editorial page of their own papers. I don’t know exactly how a newsroom works, but I doubt reporters operate in a vacuum. It’s only my hypothesis, but I’m guessing that the editors’ position on educational policy is affecting news coverage for the worse.
Whatever Vaznis’s journalistic sins may be, I think those of his editors are greater. Not only do they double down on Vaznis’s inaccuracies, the editors of the Globe paint a black-and-white world of education policy: agree with them and you’re “exemplary”; disagree, or mention the color gray, and well, you’re a failure. In their editorial Sunday, they cast the question of teacher evaluation in Manichean terms:
The public will get a chance to evaluate teachers and their union leadership by how they respond to this proposal. Embracing it would be exemplary. Rejecting it or trying to water it down in collective bargaining would rate a resounding F.
Darn that collective bargaining! Who needs it when the editors of the Globe know what needs to be done?! Scott Walker anyone?
Ironically, the proposal put forth by Chester emerged from the recommendations of a task force that benefited from the input of educators and teacher unions. As MTA president Paul Toner has said,
“We have said from the start that the observation of educators at work and human judgment must still be the central components of an educator evaluation system, as they are for virtually all professionals… We have also said from the start that student learning outcomes at the classroom, district and state levels should also be reviewed and considered in the evaluation process because, at the end of the day, our main job as teachers and administrators is to improve student learning. However, we and 90 percent of the other task force members are also clear that there is no single measure, including MCAS, that fully, fairly and accurately identifies the effectiveness of any individual teacher. These measures are all prone to error. Therefore, while they should be considered, they must not supersede evaluator judgment.”
Ninety percent of the task force, including the MTA representatives. It wasn’t that long ago that the Globe wrote about Paul Toner and the MTA’s support of teacher evaluation. And now they’re going to stand in the way?
Editorials are not formally held to the same journalistic standards as articles, but wouldn’t it be nice if they were? Wouldn’t it be nice if they didn’t impart research findings as if they were as unassailable as gravity? Here are the editors with truth from nowhere on teacher quality:
Effective teachers routinely impart a year-and-a-half-gain in student achievement over the course of a single academic year. Three or four consecutive years of exposure to that level of instruction can eradicate the achievement gap between low-income and high-income students. Bad teachers routinely secure just a half-year of student progress over the same period. A few years of that kind of instruction can lead to academic ruin.
These are research findings and should, therefore, be sourced, regardless of how well they support the editors’ position. By its very nature, research is not the final word on anything. It always has limitations.
Whether or not the editors know it, their source is Eric Hanushek, the foremost economist of education and Hoover Institution fellow, who has testified before the California State Assembly that money is not really a factor in educational achievement. (And therefore, there’s no need to increase the amount going to school systems serving the underprivileged.)
When you stop to think that readers often don’t remember where they got information, this kind of stuff is troubling.
Gray Areas
The Globe editors would have us think that there are no gray areas in teacher evaluations or in tying teacher evaluations to test scores. While they are right in suggesting that teacher evaluation holds promise for improving teaching and increasing student achievement, there are genuine complications. Here are a few:
1) Cost. Many school systems fail to evaluate their teachers regularly. This isn’t because of teachers. The fact is, administrators don’t have the time. Evaluation may be part of their job description, but they aren’t shirking their responsibilities; they are too busy to get to them. Requiring more extensive evaluations will require more administrators, or at least more evaluators, and will thus cost more money. Another dirty little secret of the existing evaluation system is that many administrators lack the training to conduct effective evaluations. Some administrators have never taught in an academic classroom; before they became administrators, they were guidance counselors and physical education teachers and band directors. They lack the actual classroom experience that would make evaluating instruction straightforward. Training these folks will cost money too. These obstacles are not insurmountable, but if regulations are either watered down or mandated without funds, there are going to be implementation problems.
2) MCAS Score Factors. MCAS scores are problematic for evaluating teachers. One reason is that so many factors contribute to a test score; these range from student investment in the test to what students learned in the years before taking it. MCAS can be useful as a rough measure of progress, but the single number it produces is next to impossible to attribute to a single teacher.
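The signal-to-noise point can be sketched with a toy simulation (every effect size here is invented for illustration, not drawn from MCAS data): if teacher effects are small relative to student-level variation, a class’s average score barely tracks teacher quality, and the same teachers get very different numbers from one year to the next.

```python
import random
import statistics

random.seed(0)

# Assumed magnitudes, made up for illustration only.
N_TEACHERS, CLASS_SIZE = 20, 25
TEACHER_SD = 0.10   # teacher effects: small
STUDENT_SD = 1.00   # student-level variation: large
NOISE_SD = 0.50     # test-day noise

def class_mean(teacher_effect):
    """Observed class-average score for one year with one teacher."""
    return statistics.mean(
        teacher_effect + random.gauss(0, STUDENT_SD) + random.gauss(0, NOISE_SD)
        for _ in range(CLASS_SIZE)
    )

true_quality = [random.gauss(0, TEACHER_SD) for _ in range(N_TEACHERS)]
year1 = [class_mean(t) for t in true_quality]  # same teachers,
year2 = [class_mean(t) for t in true_quality]  # two different years

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# If class means pinned down teacher quality, the year-to-year
# correlation would be near 1.0; noise drags it far lower.
r = pearson(year1, year2)
print(round(r, 2))
```

Under these (stipulated) magnitudes, the same teacher’s number bounces around so much that one year of scores says little about who the good teachers are.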
3) Who Teaches to the Test? The second problem with MCAS scores is that not all teachers have them attached to their subjects. In high school, English and math are tested in sophomore year. Teachers who teach these subjects after 10th grade can’t be held accountable by MCAS scores. In my school 9th graders take MCAS biology. That means all the other science teachers are off the MCAS hook. And teachers of physical education, health, business, foreign languages, and electives? They don’t even have MCAS tests to teach to. Do we just ignore them when evaluation time comes?
The teacher evaluation proposal now on the table is the direct result of the participation of educators and teacher unions. We want to see it done and done right. We can only hope that whatever system is implemented rises above the journalistic standards the Boston Globe has set for itself.
nopolitician says
Why can’t teachers be evaluated by their superiors, the way it is in virtually every other employment situation in the country?
There seem to be only two other alternatives: no evaluations, or evaluation by a set of machine-based rigid deterministic measures. The former seems silly, and the latter seems sillier — I know of no other employment situation that hires or fires people based solely on the measurement of outputs without regard to inputs. When a salesman comes back with a bad year, his manager would never say “look, I know you’re a great salesman, and I realize that your territory was unusually hard-hit by the recession, but the computer determination of your performance says that you failed, so we’ll have to let you go”.
mark-bail says
with multiple measures. I don’t think it’s as stupid as the Boston Globe thinks it is.
The Boston Globe has its heart set on using test scores to evaluate teachers. Instead of taking scores for what they are, i.e. a blurry snapshot of where a student is at a particular point in time, they take them as proof positive of (in)effective teaching.
A lot of this test score fundamentalism goes back to the early 20th century and IQ tests. Americans have always believed in the ability of standardized tests to render accurate portrayals of intelligence and achievement.
Like a lot of ed reformers over the last 20 years, the Globe believes teachers and students are, by nature, lazy and need incentives to teach and learn. There must be sanctions for underperformance. Unions are an impediment to the appropriate incentives.
I believe that our current model of education reform–MCAS, etc.–has more to do with proving a particular ideology correct than improving education. What’s that ideology? I’m not sure. It might be the neo-liberal belief that education should operate like a market, and that teachers and students just need the right incentives to improve. I don’t have any evidence for this hypothesis.
Bottom-line: you’re absolutely right.
Employees in business don’t see their evaluation solely in terms of a number. The subjective element–human judgment–usually enters into it.
conseph says
As we only have one chance to educate our children.
However, I agree that the use of MCAS or any other test as the be-all and end-all of an evaluation is, frankly, dumb. The evaluation has to take into account a variety of factors, of which MCAS is but one.
The question becomes: what elements does the evaluation comprise, and what are the consequences of a poor evaluation (and the benefits of a good one)?
I would like to see the unions take the lead on proposing just how an evaluation would look, what it would be used for, etc. I think this would be a strong avenue for the unions and teachers to show that they are not opposed to evaluations (the teachers I know favor them), just to the weighting of test scores therein.
Then, take a big step and remove seniority as the single largest factor in determining who is laid off, and I think there would be a shift in how people view the unions: from worried about themselves and their senior members to worried about the children and the teachers best positioned to teach them.
mark-bail says
The MTA has taken the lead with a very detailed proposal. Here it is.
The seniority issue, in my opinion, is largely a canard. It sounds bad, keeping the incompetent while letting the promising young teachers go. But when the supply of promising teachers is limited, eliminating seniority doesn’t have much of an effect:
1. The supply problem. Some school systems struggle to fill teaching positions. They have no choice but to fill them with less promising teachers. You can lay off your worst teachers, but you’re unlikely to replace them with better qualified candidates.
An underperforming school in Springfield that I’m familiar with was recently able to fire a lot of poorly performing teachers in spite of seniority. Some of these folks would never have made it as teachers. Others might have made it with the right support. They were not effective teachers.
They’ll be replaced with either new, promising teachers who can’t find a better place to work or the bottom of the barrel. The promising teachers will move on to a better school in or out of the district. The bottom of the barrel will either accumulate or leave.
2. In a well-functioning school system, most incompetent teachers are gone before they reach professional status.
3. When you talk about choosing between mediocre senior teachers and promising younger teachers, the decision seems simple. But when you look at actual teachers, it’s harder to distinguish between slightly-below-average and average. One teacher may be good with higher-achieving students, but do poorly with lower-achieving students. Another teacher may be average with both groups. Who’s better?
Then there are qualitative differences. One teacher is adequate and beloved by the students; the other is more than adequate, but intensely disliked.
lisag says
…to the 4/19 Globe editorial:
Expecting good teachers to “routinely impart a year-and-a-half-gain in student achievement” in one year is like expecting the housing bubble to inflate indefinitely. This proved impossible for the housing market, and it’s impossible for human beings.
Children do not develop and learn on a steady upward curve, no matter how stupendous a teacher they have. My kids have had some extraordinary (award-winning) teachers and have not even made a year’s worth of gain on their watch, based on their developmental timetable and readiness to learn. In real life, an experienced teacher may lay the foundation for a big leap a year or two later. This may happen on the watch of a lesser teacher, who happened to be around to reap the benefits. Who should get the credit?
It’s amazing that anyone would listen to the advice of economists on education policy, after their poor track record in their actual area of expertise, but that’s what’s happening in education policy circles these days. Economists develop models that claim to quantify the “value added” by good or bad teachers. They claim to be able to account for human and socioeconomic variation but they can’t. Humans don’t work that way.
mark-bail says
over a year now so I can take on Hanushek’s arguments. I think there are some identifiable statistical fallacies, but I need to check my work with someone who knows more than I do.
lisag says
Rothstein has an 8-page section on William Sanders’ value-added system in Class and Schools (starts on page 63), including a bit on Hanushek. And Diane, in her book, The Death and Life of the Great American School System, quotes Rothstein and does her own VAM critique. Valerie Strauss posted a very lengthy excerpt from Ravitch’s book in her Answer Sheet blog here. Here’s just a piece of what Valerie posted:
I’ve also just been pointed to several critiques by Matthew Di Carlo of the Albert Shanker Institute, but the Shanker Institute blog is temporarily down. Di Carlo says the 1.5 years of learning in a year idea is really a summary of the effect sizes in the earlier value-added literature.
mark-bail says
commit a regression fallacy, i.e. they extrapolate beyond their data set and assume the relationship between achievement and teacher effectiveness is perfectly linear, when, in fact, there’s no basis for that assumption.
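That extrapolation fallacy is easy to demonstrate with made-up numbers (a toy sketch, not anything from Hanushek’s data): fit a straight line to a relationship that is actually curved over a narrow observed range, and the line predicts well inside that range but goes badly wrong far outside it.

```python
import math

# Observed data over a narrow range; the true relationship
# (y = sqrt(x)) is curved, not linear.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [math.sqrt(x) for x in xs]

# Ordinary least-squares fit y = a + b*x, computed by hand.
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

inside = a + b * 3.5    # interpolation: close to sqrt(3.5), about 1.87
outside = a + b * 16.0  # extrapolation: true value is sqrt(16) = 4.0
print(round(inside, 2), round(outside, 2))
```

Inside the observed range the linear fit is nearly exact; at x = 16 it predicts roughly 6, about fifty percent too high. The data are invented, but the pattern is the general hazard of assuming linearity beyond the data.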
I have great respect for Rothstein. I read Ravitch and enjoyed the book. I’m in the midst of Darling-Hammond’s Flat World. She pulls her punches on Hanushek, I think.
Many thanks for the sources. I will check them out.
lisag says
On the Albert Shanker Institute blog, Di Carlo points out that Hanushek and Rivkin (and the other economists pushing value added) do exactly what you accuse them of:
His whole critique is worth reading as well.
Happy hunting!