According to the Globe's own analysis, 80% of Massachusetts middle schools aren't making adequate yearly progress (AYP). What the Globe doesn't tell you is that this is pretty much the case across the country.
According to MESPA, “seventy-four percent (74%) of Massachusetts schools will be named underperforming by the year 2014 under the accountability system created by the state for the federal No Child Left Behind act.”
At the moment, the Globe estimates that two-thirds of our middle schools are not meeting AYP. That's pretty close to MESPA's prediction for high schools.
Fairtest.org predicts that by 2014, 99% of California schools will fail under No Child Left Behind. Ninety-six percent of Illinois schools will fail. Between 88% and 93% of Connecticut schools will fail to make adequate yearly progress. Even with its relatively stringent testing program, Massachusetts comes in with more schools passing.
The real question is why so many schools will fail to make adequate yearly progress. It is a question the Globe fails to honestly consider. The answer is that the law is set up to make schools fail. How? First, AYP measures the progress of subgroups. Though it's good to consider the progress of ethnic groups, non-native speakers, and special education students, an entire school can fail based on a single group of students, and if that group is small enough, the failure of one or two students can bring the entire school down. One school in my system almost had this experience.
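To make the arithmetic concrete, here is a rough sketch of that mechanism in code. The subgroup minimum and proficiency target below are invented for illustration; they are not the actual state figures, which have changed over the years.

```python
# Illustrative sketch of the AYP subgroup rule described above.
# The reporting minimum and proficiency target are invented numbers,
# not the actual Massachusetts figures.

MIN_SUBGROUP_SIZE = 40   # assumed minimum size for a subgroup to count
TARGET = 0.75            # assumed share of students who must be proficient

def school_makes_ayp(subgroups):
    """subgroups maps a label to (num_proficient, num_tested)."""
    for proficient, tested in subgroups.values():
        if tested < MIN_SUBGROUP_SIZE:
            continue                  # too small to be reported separately
        if proficient / tested < TARGET:
            return False              # one subgroup sinks the whole school
    return True

# A school of 500 at 80% proficiency overall, where one 40-student
# subgroup falls two students short of the target:
school = {
    "all students": (400, 500),   # 80% proficient
    "special ed":   (28, 40),     # 70% proficient
}
print(school_makes_ayp(school))   # False
```

The point is structural: the smaller the subgroup, the fewer students it takes to flip the whole school to "failing."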
No Child Left Behind is highly controversial, yet the Globe completely ignores the controversy. There is no mention of the fact that Congress was unable to garner enough support to renew the law. The paper's objective is to advance its standards-based agenda at the expense of the journalistic ethics of balance and fairness.
The Globe's intellectual slovenliness also manifests itself in its failure to reveal the methodology of its analysis or to consider the possibility that there are other studies out there. The paper also goes so far as to misleadingly characterize the Thomas B. Fordham Institute as a "nonprofit that works on education policy." It's a conservative education think tank, fellas. There's certainly no reason not to quote the institute's vice president, but it is sloppy, if not dishonest, not to reveal the organization's agenda.
Based on the Globe's education editorials, it's my conclusion that James Vaznis, whose name graces the byline, is carrying the reportorial water of the editors. After all, it's the Globe's analysis of middle schools, not Vaznis's, that prompted the story. I may be mistaken, but Vaznis doesn't usually report on educational news.
Do middle schools need improvement? All schools, indeed all organizations, need constant improvement. Unfortunately, the Globe doesn’t even make a prima facie case for it.
eb3-fka-ernie-boch-iii says
with a little George Kimball thrown in.
yellow-dog says
but I have no idea what you are talking about.
Mark
eb3-fka-ernie-boch-iii says
Ron Borges got royally screwed, BTW.
dweir says
Who said this:
I want to change this bill [ESEA and Title I] because it doesn't have any way of measuring those damned educators…. We really ought to have some evaluation in there, and some measurement as to whether any good is happening.
BRAVO to the Globe for surfacing this information! In tony Westford, between 20% and 30% of 8th graders are not passing the math MCAS. There is no escaping this by blaming the home environment, or low SES, or non-native English speakers.
But for that matter, Mark, I am surprised at your disparaging comments about the breakdown of results by subgroup. If I go back through everything you or fairtest wrote, I'm sure I can find something about culturally biased questions. A school in which certain segments of its population are allowed to fail is the epitome of cultural bias! I would have thought you would be in favor of this sort of accountability.
A few years back, one of our middle schools was cited for lack of improvement in one subgroup. That is how it was reported, that is how it was addressed, and that is how we improved.
Yes, the news is pretty bleak. Better face it and get to work.
At this point, the sensible approach would be to focus on elementary schools and make sure they are meeting targets. If they aren't, there is no way the middle school targets can be met. If targets are being met, we need to reexamine where we've placed the goal posts. I suggest that the folly of inquiry-based instruction doesn't begin to show all its weaknesses until you get to problems that cannot reasonably be solved by drawing pictures.
I suggest implementing Direct Instruction in all elementary schools — the most well-researched, scientifically based method of instruction out there — and giving students the skills and knowledge they need in order to tackle more challenging learning.
yellow-dog says
much to respond to here. We've gone over this before, and if you read our previous conversations, you'll see that when we get down to stating core beliefs, you tend to bow out of the conversation.
If you read my post, you shouldn't be surprised by my comments about subgroups, because I make it clear that the attention paid to them is important. It's the measurement that's flawed.
Mark
dweir says
Mark, I don't know what you mean by "core beliefs." I know that I have repeatedly answered your assertions that educational research is in its infancy by directing you to the existence of PFT (Project Follow Through). My recollection has been that you dismiss this study. Maybe now that information is available on ERIC, you might take a look?
I don't keep bringing up PFT for jollies. I bring it up because the foundation of our educational system — the elementary schools — is failing. If students don't master an elementary education, it is folly to think they will fare better in middle school.
As for your assertion that I "bow out of the conversation," I'll direct you to our latest exchange. This is where I posed several questions in response to your post on Measured Progress. You did not answer those questions (see your reply titled "I can't speak for now"). I thought the ball had been left in your court.
In that reply, you stated that the questions had been field tested for cultural bias. When you say that the measurement is flawed, are you saying that the field tests were insufficient, or are you referring to your broader complaints about MCAS or standardized testing?
Your original assertion was that the Globe story was misleading, yet you provide no evidence to the contrary. I haven't seen the AYP data for this year. Have you seen it? The lack of a cited reference did strike me as odd. It seems that quite a bit of data that I used to refer to on the MA DESE site is gone, or perhaps only moved, specifically the processing done to raw test scores.
If by "core beliefs" you mean this:
"an entire school can fail based on a single group of students, and if that group is small enough, the failure of one or two students can bring the entire school down"
Then yes, you and I disagree. Corrective action is reported by subgroup. I don't know what you mean when you say they can "bring an entire school down."
Maybe you and I can do an analysis of the AYP tech manual sometime and sort this out.
yellow-dog says
and your comments aren’t making adequate progress. You’ve said two things that are factually incorrect and your website cites Charles Murray favorably. There is no reason to take you seriously here.
Fact Check #1: I said educational research was in its infancy in the 1960s, not now. To give it a human age, I would say it's now in its late 20s.
Fact Check #2: As for sabutai's post on Measured Progress, I didn't know and still don't believe you were responding to me. I contributed my actual experience in the comments, but I didn't notice you quoting my words.
Bad Judgement: I know you refer to DISTAR all the time. Yes, it was one of the few major studies conducted over time. It has several pages in the Handbook of Research on Teaching from the 1980s. It was taken seriously. You seem to be citing Charles Murray, the author of The Bell Curve, on the subject. And you're not kidding, is that right?
Post-Hijacking: I have no interest in scripted learning. My professional and research interests are in high schools, not elementary schools. I don't know of any scripted learning programs in high school. Forgive me, but I'm not about to read up on it just to discuss it with you, particularly when I posted on a Globe article criticizing middle schools.
You are, of course, free to comment anywhere, including here, but you consistently bring up too many points that are off topic. In this case, two points are factually incorrect, and one is supported with an intellectually laughable source. You want me to be versed in your particular hobby-horse so I can refute you, when I don't want to expend the time or effort.
What's left to say?
Mark
dweir says
You want to slam the message coming from the data.
What I'm saying is that the data are not surprising given the feeder systems middle schools are working with. This is not hijacking the post, although it's apparently not a message you want to hear.
Fact check #1: PFT occurred during the time that you dismissed as the "infancy" of research. Perhaps that's a red herring. You're simply not interested. That's a shame, because there isn't going to be much improvement in HS without corresponding improvement in MS, which… well, you can follow the dots.
(And just a friendly observation: it is not uncommon for comment threads to spin off in their own directions. You should try not taking this as a personal attack. Just ignore it if you are not interested.)
Fact check #2: As I said, it was our last exchange. I posted a reply to sabutai's post, and you responded. Gee, Mark, you are the one who accused me of bowing out of conversations. I was just trying to provide evidence to the contrary. But even if our "conversation" did end, who cares? I don't have any expectations. It comes with the territory.
Bad Judgement: I am not citing Charles Murray, as if that should matter. People get so threatened by performance data, except sports statistics. Pablo's comment downthread acknowledges limitations; why don't you jump down his throat?
No, the page I linked to points out Murray's apparent ignorance of PFT. Yeah, I reference DISTAR, because it came out of PFT. DISTAR is NOT a study. Project Follow Through was the study. And it showed that DISTAR was the only successful method for reaching these students.
I'm not saying this for your benefit, because as you've already stated, you don't care. I'm saying it in hopes that someone who does care, and who wants an answer to the question "what can we do for our underperforming students?", will be interested enough to take a closer look at this study, which is not, by and large, unknown to educators.
As for your assertion that I'm hijacking your post: I'm posting something that is relevant to the problem of low-performing middle schools. If you don't agree, fine. Don't respond.
gary says
"All schools, indeed all organizations, need constant improvement." Yet, according to the Globe, many suck. The confusion is understandable.
yellow-dog says
you don't quote my comments; you quote someone else's in the Measured Progress post. You certainly don't address me. Your comment isn't juxtaposed with mine. I'll take your word that you were addressing me, but I don't know how I was supposed to know that. Check our conversation on Standard Issue, if you care to see what I'm talking about. It may be that you just ran out of time or moved on.
I don't take your comment as a personal attack; if anything, it was I who fired a shot across your bow. I'm just frustrated by (y)our pattern of discourse, which usually starts with my post and your negative shotgun (launching several "attacks" on one of my posts), with me responding and then trying to narrow the conversation to one or two salient points. I'm loath to repeat the pattern.
It's also not that I don't "care" about PFT or DISTAR. It's a question of how much someone should be expected to know in an argument. PFT is esoteric, whether or not you think it should be. That it came about during the infancy of educational research does not invalidate it. The infancy of educational research was a time when experimental research was all there was. Since then, the field has grown. That doesn't mean experimental research is now invalid, just that other valid methods now exist.
If you like, I'll make a deal with you on PFT. You do a post on it, and I'll research it and respond. Chances are we still won't agree, but at least you could educate everyone on what it is you're promoting, and I can talk on the subject. If not, that's okay.
Apparently, you disagree with the gist of my post, which is that NCLB, particularly AYP, is controversial and the Globe shouldn't take it at face value.
Mark
christopher says
The key is how to interpret the results. For me, the main reason for a test like this is to determine whether an individual student passes or not. There are certain things you should just know before you move up a grade, or ultimately graduate. If aggregate data are used to evaluate teachers, schools, or districts, then questions must be asked about other variables before taking the results at face value.
pablo says
This is a complicated statistical problem, where the accountability measures do not necessarily match reality.
If the question is, as it should be, whether children are learning in a given school, the present accountability system does nothing to answer that question.
The present accountability system is more likely to catch schools with larger enrollments, with more children being tested, and with challenging populations.
The accountability system is designed to measure how many children are below 240 (Proficient) in a school or school district. It does not measure growth. If your school receives a bunch of children from another district (or from abroad) who are significantly below grade level, and you raise them through two years' growth in a single year, you will be sanctioned because those children did not meet the performance standard at their grade level.
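To put invented numbers on that, compare what a status measure and a growth measure see in the same two students:

```python
# Toy contrast between a status measure and a growth measure, using
# invented scores on the MCAS-style scale where 240 = Proficient.

PROFICIENT = 240

students = [
    # (last_year, this_year)
    (200, 236),   # two years of growth in one, still below 240
    (238, 242),   # tiny gain, but it crosses the line
]

status = sum(this >= PROFICIENT for _, this in students) / len(students)
growth = sum(this - last for last, this in students) / len(students)

print(f"status: {status:.0%} proficient")     # 50% -- only the line matters
print(f"growth: {growth:+.1f} points/year")   # +20.0 -- the gains are visible
```

The status measure credits the student who inched across the line and ignores the one who gained two years; a growth measure sees the opposite.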
The accountability system divides children into subgroups. If you have fewer than 80 children in a subgroup, you can't get sanctioned. Thus, bigger schools with more kids (middle schools) are more likely to be in trouble.
The goal we need to measure is achievement of graduation standards, and whether a child is making adequate progress toward the 12th grade goal. Achievement, measured against a fifth grade test, is an interesting statistic, but it doesn't measure what we need to know to determine success.
dweir says
I wonder if the challenge posed by students entering the system could be addressed with another subgroup — e.g., students with fewer than two years in the system.
I'm a bit perplexed by your last paragraph. If a student's schooling is a progression to graduation, and if a fifth grade test measures where a student is at a given point in time, couldn't it be said that we would know whether that student was on target, ahead, or behind?
Unless you're asserting that the MA curriculum is not sequential, that what is learned in 5th grade — and how well it is learned — has no bearing on subsequent education, I don't understand how a 5th grade test could be no more than an interesting statistic.
pablo says
We need another metric.
We need a series of tests that are scaled across grade levels. We need to set a scale of, say, 0 for a kid who is entering first grade and 1300 for a kid who has met graduation standards, and we need to measure progress on that scale.
We need to measure how well a child moves up the ladder, not take a snapshot of where a child is on a single day in March or May.
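As a toy version of what I mean (the straight-line pacing is an assumption made purely for illustration, not a claim about how learning actually progresses):

```python
# Toy version of the growth scale sketched above: 0 at entry to first
# grade, 1300 at the graduation standard. Even, straight-line pacing
# is assumed purely for illustration.

SCALE_TOP = 1300
GRADES = 12
POINTS_PER_YEAR = SCALE_TOP / GRADES   # ~108 points per year if pacing is even

def on_track(scale_score, grades_completed):
    """Is the student at or above the assumed straight-line trajectory?"""
    return scale_score >= grades_completed * POINTS_PER_YEAR

print(on_track(560, 5))   # True: ahead of the ~542 expected after grade 5
print(on_track(500, 5))   # False: behind, whatever a one-day snapshot says
```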
A literacy scale already exists (Lexiles) and is widely used across the country. Books are rated on readability based on Lexiles. You can sit a child at a computer and get a Lexile score in a few minutes. This makes more sense than shipping boxes of data to New Hampshire every year and waiting six months for a score.
dweir says
Getting data more quickly is imperative. Within a few minutes would be excellent.
Didn't the USDoED start a pilot investigation of bringing growth models into the NCLB re-authorization?
What would a growth-based accountability system look like? Would a school not meet AYP if its students both didn't meet grade-level expectations and had a growth rate below that of their cohorts?
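Here is one way that rule might be written down; this is my own construction with invented numbers, not an actual NCLB growth model:

```python
# One possible growth-based AYP rule, as floated above: a school misses
# AYP only if its students are below grade-level expectations AND growing
# more slowly than their cohort. All numbers here are invented.

def misses_ayp(pct_proficient, school_growth, cohort_growth,
               proficiency_target=0.75):
    below_target = pct_proficient < proficiency_target
    slow_growth = school_growth < cohort_growth
    return below_target and slow_growth

# A low-scoring school with above-cohort growth would be safe...
print(misses_ayp(0.55, school_growth=24.0, cohort_growth=18.0))   # False
# ...while a low-scoring school that is also growing slowly would not.
print(misses_ayp(0.55, school_growth=12.0, cohort_growth=18.0))   # True
```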
yellow-dog says
usefulness in diagnosis, but they are merely a readability scale based on vocabulary and sentence structure. The rationale for Lexiles, if I'm not mistaken, is to match students with reading material at the appropriate level. They have little, if any, diagnostic value, though there are Lexile computer programs that are marketed as a way to produce reports to please the powers that be. They can tell you what Lexile a kid is reading at.
The problem with AYP, I agree, is largely metrics. The production of those metrics, however, is a larger problem. Right now, the state tests are required to pull triple duty: 1) as accountability/disciplinary measures, 2) as measures of individual progress, and 3) as instructional feedback. To some degree, the tests work for accountability purposes. However, they aren't deep enough and don't include enough items in the same area to reflect what students have and haven't learned. In the case of ELA, open-response questions conflate reading and writing. A student could carry out the analytical task and do poorly expressing it, or vice versa. And the scale of 1-4 is sort of like having a ruler marked at six inches and one foot and nothing in between.
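To illustrate the ruler problem with invented numbers (the cut points below are mine, not the actual rubric's):

```python
# Toy illustration of the coarse-scale problem: a four-point rubric hands
# the same score to students whose underlying performance differs widely.
# The cut points are invented for illustration.

def rubric_score(percent_mastered):
    if percent_mastered >= 90: return 4
    if percent_mastered >= 60: return 3
    if percent_mastered >= 30: return 2
    return 1

for mastery in (30, 45, 59):                     # very different students...
    print(mastery, "->", rubric_score(mastery))  # ...all land on a 2
```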
Mark
pablo says
I am using Lexiles as an example of a scale that can be used to show growth, and as a quick way to measure progress. The Scholastic Reading Inventory is a quick and easy test to measure Lexiles, and you can get a very high correlation to MCAS scores at a very low cost. This should be a baseline from which to develop a meaningful, standards-based, growth-oriented assessment system.
Note that I think the open-response questions of the ELA and mathematics MCAS are the heart and soul of these tests. If we could just do these questions in some sort of online, rapid-scoring test, it would be a great tool for accountability and for improving instruction.
yellow-dog says
don't provide good instructional feedback. The rubric and the curriculum framework are too broad and encompass too many variables to provide useful feedback.
Most of the school population clusters between 2 and 3, but that doesn't offer much instructional guidance. A wider scale would be more instructionally sensitive, but less effective for accountability's sake. We've yet to have a student not pass the MCAS by senior year. We had a junior take the English portion three times. He had loads of remediation. He barely passed, and we have no idea why he passed one time and failed the other times.
I've been teaching open-response questions since the Ed Reform bill was passed. The kinds of things they ask are, indeed, the kinds of things I think English teachers should be teaching. I've developed a test-taking strategy protocol that most of my department uses for approaching the questions, but aside from practicing these questions over and over again, I've yet to find an effective strategy for genuinely improving learning.
Mark