Blue Mass Group

Reality-based commentary on politics.

Exclusive BMG/Research 2000 poll: Coakley leads 49-41

January 14, 2010 By David

The results are in from BMG’s exclusive statewide poll in next week’s special Senate election.  Research 2000 interviewed 500 likely voters on Tuesday and Wednesday (and we do mean “interviewed” — Research 2000 does live interviews, unlike robo-pollsters Rasmussen and PPP).  That means that our poll is the first (and so far only) one taken entirely after Monday’s final televised debate.  Here’s what they came up with (margin of error is +/- 4%).

QUESTION: If the 2010 special election for U.S. Senate were held today, would you vote for Martha Coakley, the Democrat, Scott Brown, the Republican, or Joseph Kennedy, the Libertarian candidate?

                 ALL   DEM   REP   IND
Martha Coakley   49%   82%    7%   36%
Scott Brown      41%   12%   85%   49%
Joseph Kennedy    5%    1%    2%   11%
Undecided         5%    5%    6%    4%

Particularly interesting in these numbers is the breakdown of unenrolled (independent) voters.  Brown is ahead in that group 49-36; significant, to be sure, but not the overwhelming advantage suggested in the Rasmussen (71-23) and PPP (63-31) polls that came out recently.

The full set of crosstabs is here.

Some details for the wonkish:

The Research 2000 Massachusetts Poll was conducted from January 12 through January 13, 2010. A total of 500 likely voters who vote regularly in state elections were interviewed statewide by telephone.

Those interviewed were selected by the random variation of the last four digits of telephone numbers. A cross-section of exchanges was utilized in order to ensure an accurate reflection of the state. Quotas were assigned to reflect the voter registration distribution by county.

The margin of error, according to standards customarily used by statisticians, is no more than plus or minus 4 percentage points. This means that there is a 95 percent probability that the “true” figure would fall within that range if the entire population were sampled. The margin of error is higher for any subgroup, such as for gender or party affiliation.
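
If you want to check the ±4 figure yourself, here is a minimal back-of-the-envelope sketch in Python. It assumes a simple random sample at p = 0.5, which only approximates the quota design described above (the exact value comes out closer to ±4.4 points), and the 180-person subgroup in the second line is just a hypothetical example of why subgroup margins are wider.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a single proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{100 * margin_of_error(500):.1f}")  # full sample of 500 -> about 4.4 points
print(f"{100 * margin_of_error(180):.1f}")  # a hypothetical 180-person subgroup -> about 7.3 points
```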

Please share widely!

Filed Under: User | Tagged With: bmg, ma-sen

Comments

  1. eb3-fka-ernie-boch-iii says

    January 14, 2010 at 11:55 am

    I assume the PAC paid for the poll?

    hey, whose idea was it anyway for you guys to start a PAC?

    Fantastic way to spend some money. Can build credibility over time and give you guys some juice.

    Time will tell.

    • bob-neer says

      January 14, 2010 at 12:00 pm

      The PAC idea was from divine inspiration.  

      • eb3-fka-ernie-boch-iii says

        January 14, 2010 at 12:11 pm

        I’ve never been called that before.

        • david says

          January 14, 2010 at 12:13 pm

          • eb3-fka-ernie-boch-iii says

            January 14, 2010 at 12:14 pm

        • bob-neer says

          January 14, 2010 at 12:14 pm

          The dark angel of the blogosphere. Hey, that has kind of a nice ring to it.

          • eb3-fka-ernie-boch-iii says

            January 14, 2010 at 12:21 pm

            Is there anything else available though?

            • david says

              January 14, 2010 at 12:22 pm

              • paulsimmons says

                January 14, 2010 at 1:18 pm

                Ernie knows.

                Have the rights to “The Shadow” lapsed?

    • david says

      January 14, 2010 at 12:09 pm

      we paid for the poll out of our extravagant advertising revenues.  

      Nonetheless, the PAC is still an awesome idea!  🙂

      • eb3-fka-ernie-boch-iii says

        January 14, 2010 at 12:13 pm

        Great move on the business side of things.
        Let’s hope over time it will gain credibility, regardless of the BMG label.

        So don’t let Charley stop you from posting polls that hurt your candidates. I know he’ll want to.

  2. johnk says

    January 14, 2010 at 11:59 am

    Rasmussen has used “very likely voters” instead of “likely voters” to make the race seem closer.  But then again, who votes in special elections?  I don’t think pollsters have a good grasp on what group is voting, but “likely voters” polls in general had Coakley up by 10 – 15.  We don’t know the impact of identifying Scott Brown’s actual record, which has started this week.

    Plus, we don’t yet know the full impact of the Tea Party and other national Republican groups, who have changed this race from outsider underdog Scott Brown to a full-on nutty national Republican candidate.

    • hrs-kevin says

      January 14, 2010 at 2:24 pm

      There is no way to tell what “likely voters” means to any particular pollster unless they tell you in detail. It doesn’t have a standardized meaning.

      • johnk says

        January 14, 2010 at 9:50 pm

        but Rasmussen seems to be splitting groups into different categories.  Not sure what questions he is asking to differentiate.  But it seems odd that he has a “very likely” category.

    • karenc says

      January 14, 2010 at 9:15 pm

      Nutty national Republican candidate with at least two brain-dead moves – demanding that Obama stay out of Massachusetts (didn’t Jesse Helms demand Clinton stay out of NC?) and sending his daughters out to complain about Coakley’s accurate ad. Maybe the girls didn’t know their dad actually did sponsor an amendment that did just what he was accused of.

      • mr-lynne says

        January 14, 2010 at 9:36 pm

        … running a straight-up orthodox GOP conservative campaign will continue to attract enough extra-state support from the true believers to overcome the obvious deficiency that this isn’t how GOPers have won statewide in the past.

        Mitt Romney won in 2002. Paul Cellucci won in 1998. And William Weld won in 1990 and 1994. What’s more, Weld almost beat John Kerry in 1996. … But the formula for winning as a Republican in Massachusetts is pretty clear-you want to be independent from the machine, and generally for lower taxes and less regulation than your Democratic opponent, but also decidedly not as right-wing as the kind of guys the GOP runs for Senate in Alabama.

        And Brown’s just not doing that. … But instead he seems to have thought of… nothing at all besides putting a slightly moderate spin on orthodox conservative views.

        • mr-lynne says

          January 14, 2010 at 9:37 pm

          … this is the correct link.

  3. jamesdowd says

    January 14, 2010 at 12:06 pm

    We don’t need a Nader here.  

    • marcus-graly says

      January 14, 2010 at 12:24 pm

      I honestly have no idea.  Joe is more right-leaning, but I think some left-leaning folks who would otherwise hold their nose and vote Coakley are choosing him as a protest vote.

  4. stomv says

    January 14, 2010 at 12:48 pm

    N=500, MOE=4%, Coakley-Brown=8%

    which means, within the 95% confidence interval, they might be tied.

    A bigger N would have shrunk the MOE.  An MOE of 3.5%, with the same percentages, would mean that there is a 95% probability that Coakley beats Brown (not a 95% probability that Coakley beats or ties).

    Oh well.  More N is more money.  I understand that you guys are trying to get polls, and have limited resources.  Still, N=500 is definitely on the low end of good polling… I wonder if you could make a deal with DailyKos, 538, etc. to see if they would spring for an extra 100 each.  That sort of thing.
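
    A rough way to put stomv’s “they might be tied” point in numbers (my own sketch, using the standard normal approximation for the gap between two shares from a single poll; the actual R2K methodology may differ): an 8-point gap on N=500 is not statistically airtight, but it still implies the lead is probably real.

```python
import math
from statistics import NormalDist

def lead_probability(p_a, p_b, n):
    """Approximate probability that A truly leads B, given one poll of size n,
    using the normal approximation for the difference of two poll shares."""
    d = p_a - p_b
    se = math.sqrt((p_a + p_b - d ** 2) / n)
    return NormalDist().cdf(d / se)

# BMG/R2K topline: Coakley 49, Brown 41, N=500
print(f"{lead_probability(0.49, 0.41, 500):.3f}")   # about 0.97
print(f"{lead_probability(0.49, 0.41, 1000):.3f}")  # about 0.996 with a (hypothetical) larger sample
```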

    • sco says

      January 14, 2010 at 1:08 pm

      500 is right in the sweet spot of good statewide polling.  You’ll typically get 400-600 and going up to 1000 in a statewide poll is much less typical.

      Yes, within the 95% confidence interval they might be tied, but each outcome in the 95% confidence interval is not equally likely.

      • stomv says

        January 14, 2010 at 1:36 pm

        Per pollster, at the time of this post:

        Rasmussen 1000 LV
        Mellman Group (D-DSCC) 800 LV
        PPP (D) 774 LV
        Boston Globe/UNH 554 LV
        Rasmussen 500 LV
        Suffolk 600 RV
        WNEC 342 LV
        Suffolk 500 RV

        The mean: 633.75.  Only one poll of the eight has fewer than 500, which seems to reinforce my claim that N=500 is definitely on the low end of good polling.  Another two hundred (as I suggest) would put the number polled 4th out of 9, in the middle third.*

        I didn’t claim the poll was invalid — just that polling a few hundred more would lead to more robust results (and a smaller MOE, which helps us better understand reality).  I think the other polls in MA reinforce my claim, and implicitly that you are lowballing a smidge.

        * Of course, so would another one polled (501 would be outright 6th).

        • sco says

          January 14, 2010 at 4:00 pm

          Another few hundred respondents = another few hundred × (1 + refusal rate) calls.

          All for about a half point of MoE.

          It really confuses me why people are so obsessed with the MoE.  All it is is two standard deviations away from the mean number in the survey (the one that gets all the attention).  You don’t expect something to be two standard deviations away with the same probability as something one standard deviation away or something close to the mean.

          You could just as easily say that 2/3 of the time, the election will be plus or minus 2, rather than 95% of the time it will be plus or minus 4.  And if you really want to worry yourself, 97% of the time it will be plus or minus 6!  There’s your dead heat!

          • sco says

            January 14, 2010 at 4:03 pm

            I meant 99.7%, obviously.

        • hoyapaul says

          January 14, 2010 at 4:13 pm

          As sco points out, 500 respondents is fine for state polling. I’d add that the Research 2000 poll BMG commissioned was done by live interviewers, and was not a robo-call like Rasmussen (where it is easier and cheaper to get a higher N).

          Of course, if some of the criticism about robo-calls is correct (especially the problem of selection bias), then getting the higher N involves a trade-off by sacrificing accuracy of another sort. In that way, it’s not necessarily the smaller MoE that “helps better understand reality” — it’s the techniques and voter screens you use.

      • stomv says

        January 14, 2010 at 2:00 pm

        Pollster Franken v. Coleman

        Number: 55
        Mean: 688
        Median: 625
        StDev: 263
        Range: 300-1572
        Number <500: 5
        Number =500: 14
        Number >500: 36
        Number 400-600: 22
        Number <400: 2
        Number >600: 31

        Based on Minnesota’s 55 polls for the US Senate, in a state which seems to be somewhat similar to MA in population, 400-600 doesn’t seem to be normal.  500-700 does, however, and includes the mean as a not-too-important bonus.

        Number <500: 5
        Number 500-700: 35
        Number >700: 15

        I didn’t cherrypick this particular sample — it was just the first Senate race of 2008 I could think of which was competitive enough to get lots of polls.  Of course, the expectation of close polling may encourage larger samples, precisely to reduce the chance that there is a “tie” within the MOE.

        Think it’s fair to say that 400-600 is too low; that 500-700 is probably a more appropriate range for describing what is typical?

        • sco says

          January 14, 2010 at 2:14 pm

          “I think one of the molars on this pony might have a cavity.  Thanks anyway,  mister.”

          • stomv says

            January 14, 2010 at 2:25 pm

            My interest is in:
            1.  Getting good polling, and
            2.  Being creative on how to get it.

            To address 1, I point out that N=500 is low.  You disagreed.  I came back with some data that reinforces my claim.

            To address 2, I provided what I think is a creative way of getting ponies with good dental hygiene.

            Instead of an academic discussion about a good sample size — which is what I expected from you — you sure come back and employ a lot of words to say I’m ungrateful, completely ignoring my critique of your 400-600 claim.

            Bah.

            • sco says

              January 14, 2010 at 3:38 pm

              I had a dentist appointment.

              Yes, a bigger N is better, but you get really diminishing returns as N increases.  The margin of error going from 500 to 1000 moves from roughly 4 to roughly 3.  If you’re going to use MoE as your standard of what makes a poll good, you’re not really making that much of a difference.

              They used to do statewide polls at N=400 almost exclusively, as it’s on the low end of numbers you can get useful results from, but now that these things have gotten cheaper, I concede that is no longer the case — particularly as robopolling has become more accepted by the mainstream.  Shame on me for becoming an old man while I wasn’t looking.

              I would argue that there’s nothing mathematically wrong with a sample of 500 to get top-line numbers.  If you want low MoEs in your crosstabs, however, you’ll want to go a little higher.  That’s where the larger sample size starts to pay off, because suddenly you have statistically significant subsamples to work off of.

              • stomv says

                January 14, 2010 at 4:00 pm

                I had my teeth cleaned today at 10am.  Lots of scraping.  Strange coincidence.

                It’s true that there are diminishing returns.  I didn’t want to nit-pick, but the actual MOE of this poll is more like +/-4.3%, and to get it down to +/-4.0% you’d need N=600.

                The margin of error going from 500 to 1000 moves from roughly 4 to roughly 3.

                Indeed — for p = .5 it goes from 4.38% to 3.10%.

                Depending on which cross-tabs are interesting, N=800 gets you gender and unenrolled, N=1100 (roughly) gets you D, and N=3000ish gets you GOP (only 11.6% of the population; the arithmetic is sketched below).

                Final question for The Editors: how does pricing work?  What would it have cost for an additional N=100?  Is it like a base fee plus an additional cost per N+1?
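
                A quick sketch of the arithmetic stomv is using here (my own illustration in Python, assuming the textbook simple-random-sample formula at p = 0.5 and a roughly ±5% target for each crosstab; neither assumption is stated explicitly in the thread):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error (as a fraction) for a proportion from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def total_n_for_subgroup(target_moe, subgroup_share, p=0.5, z=1.96):
    """Total sample size needed so that a subgroup making up `subgroup_share`
    of respondents gets roughly `target_moe` on its own crosstab."""
    subgroup_n = (z / target_moe) ** 2 * p * (1 - p)
    return math.ceil(subgroup_n / subgroup_share)

# Topline MoE at a few sample sizes (stomv's 4.38 / 4.0 / 3.10 figures)
for n in (500, 600, 1000):
    print(n, f"{100 * moe(n):.2f}%")

# Rough total N for a GOP crosstab, if Republicans are ~11.6% of the sample
# and you want about +/-5% on that subgroup (the 5% target is an assumption)
print(total_n_for_subgroup(0.05, 0.116))  # -> ~3300, close to stomv's "N=3000ish"
```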

                • david says

                  January 14, 2010 at 4:03 pm

                  Not going to get into that here.  Email me if you want.

                • sco says

                  January 14, 2010 at 4:14 pm

                  And you’ve doubled your work for what?  A whole point and a quarter of MoE?

                  Question wording and order have more of an effect than that.  And once you start talking about the effect and design of likely voter screens, it’s basically noise.

                • sco says

                  January 14, 2010 at 4:46 pm

                  All the MoE gains in the world pale in comparison to the fact that we really have no idea who’s going to drag themselves out to the polls the day after a long weekend in the middle of January.

                  The population we’re trying to get a representative sample of does not even exist until election day.  This is a problem with every political poll, but it is more acute in this election than it would be in November.

                • stomv says

                  January 14, 2010 at 5:35 pm

                  because if you don’t believe the crosstabs, you can reweight the poll to your preference.  A larger N means a larger N for the crosstabs, which means your reweighting will be more robust.

                  This particular poll has N=92 GOP (18%).  If you think that the GOP will make up 30% of the turnout, you can reweight… but the problem is that you’re magnifying a proportion which is based on a sample of only 92… you’re amplifying the error.

                  As you know, a larger N means a larger N for the crosstabs, which means the reallocation is more robust.

                  This is great because… if you don’t like the turnout sample that R2K is using, you can use their polling responses and reweight to what you think it ought to be (a sketch of that reweighting is below).

                  I really think that Internet + blogging + open source calls out for a larger N because it allows for reweighting.  It frees individuals from being restricted to just the top line of the poll… the “source” is open for us to tweak.  A larger N makes those tweaks far more robust.
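
                  A minimal sketch of the reweighting stomv describes, using the party columns from the table at the top of the post. Only the 18% GOP share above comes from the thread; the DEM/IND split and the GOP-heavy turnout scenario in the code are illustrative assumptions.

```python
# Party-breakdown columns from the table in the post (support by party).
crosstabs = {
    "Coakley":   {"DEM": 0.82, "REP": 0.07, "IND": 0.36},
    "Brown":     {"DEM": 0.12, "REP": 0.85, "IND": 0.49},
    "Kennedy":   {"DEM": 0.01, "REP": 0.02, "IND": 0.11},
    "Undecided": {"DEM": 0.05, "REP": 0.06, "IND": 0.04},
}

def reweight(turnout_mix):
    """Topline support implied by the party crosstabs under a given turnout mix."""
    return {cand: sum(by_party[p] * turnout_mix[p] for p in turnout_mix)
            for cand, by_party in crosstabs.items()}

# Only the 18% REP share is mentioned in the thread; the DEM/IND split and the
# GOP-heavy scenario are illustrative assumptions.
poll_mix  = {"DEM": 0.40, "REP": 0.18, "IND": 0.42}
gop_heavy = {"DEM": 0.34, "REP": 0.30, "IND": 0.36}

for label, mix in (("roughly as polled", poll_mix), ("GOP-heavy turnout", gop_heavy)):
    print(label, {c: f"{100 * v:.0f}%" for c, v in reweight(mix).items()})
```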

                • sco says

                  January 14, 2010 at 8:21 pm

        • hoyapaul says

          January 14, 2010 at 4:18 pm

          it’s important not to compare apples and oranges here with the sample sizes. The Pollster list includes some surveys done with live interviewers as well as those done with robo-calls (Rasmussen, SurveyUSA). It would make sense to compare the N for all of the live surveys only. If you did, I’m certain you would find a lower mean, probably in the 500s.

    • david says

      January 14, 2010 at 1:10 pm

      lots of n=400 polls.  The 400-500 range seems pretty normal to me.  Rasmussen and PPP can do much bigger numbers because they’re robo-pollers.  You pays your money and you takes your chances.

      And yes, the result of a tie is barely within the margin of error.  So what.  The numbers are the numbers.  If they had polled 800, maybe it would’ve come out 49-42, in which case, same result.

      • stomv says

        January 14, 2010 at 1:42 pm

        and I don’t mean to sound ungrateful.  But:

        lots of n=400 polls.  The 400-500 range seems pretty normal to me.

        See above.  I’m not sure what “pretty normal” to an expert in law and opera means with respect to sample size in state polling.  The sample of polls in MA for this race shows that N=500 is on the low end of things.

        Rasmussen and PPP can do much bigger numbers because they’re robo-pollers.  You pays your money and you takes your chances.

        Indeed.  There’s little evidence (per Nate Silver and others) that robopolling has less reliable results.  In any case, if paying more money improves one’s chances, perhaps it’s worth it.

        And yes, the result of a tie is barely within the margin of error.  So what.  The numbers are the numbers.  If they had polled 800, maybe it would’ve come out 49-42, in which case, same result.

        No, not the same result.  We’d have a higher confidence that the race was tighter, instead of what we have now (a lower confidence that the race is less tight).  Not at all the same result.

        I must say, I’m a bit bummed you didn’t bite on my suggestion.  Since you’re using the same polling outfit as kos, and since he’s really big on polling, I think he might go for the idea of adding to the “N” of state blog polls.  Sort of a partnership thing.  To my knowledge 538 doesn’t do this sort of thing (commission their own polls), but they’ve gotten big enough that Silver et al might be willing to subsidize some polls if they feel there is a lack of good polls for that particular race.  Getting multiple blogs with different audiences to “chip in” and help raise the N for polls (or the frequency of polls) seems like a pretty reality-based partnership idea to me…

    • marcus-graly says

      January 14, 2010 at 4:44 pm

      Most errors in polling are not due to random statistical error, but to bias in how the sample is drawn and weighted.  That kind of error is also much harder to measure.

      To clarify, the 95% confidence interval assumes a uniformly random sample of voters.  However, that’s almost impossible to get, so pollsters use various weights and so forth to simulate a random sample.  Mistakes in this process are what cause polls to be far more inaccurate than any error caused by statistical variance.

  5. historian says

    January 14, 2010 at 12:55 pm

    Interesting – key is still, of course, getting out the vote.

  6. stomv says

    January 14, 2010 at 4:08 pm

    And Daily Kos is following this Blue Mass Group poll up with one of our own.

    This is kind of what I was getting at.  Kos is even going to use the same firm (contract with R2K), so I’m not sure why he’d expect different results.

    My question for cos, others: is it better to have two N=500 polls within a few days of each other at this point in the race, or better to have one N=1000 poll?  This isn’t a hypothetical — methinks that if the case is made that a single N=1000 poll is better (for cross tabs, lower MOE, whatever reason), then there is a real opportunity for state and national blogs to combine resources to get a better final product.

    Thoughts?

    • sco says

      January 14, 2010 at 4:17 pm

      One out of every twenty polls (on average) is going to be outside the ‘true’ 95% confidence interval anyway.  Better to have two polls that either reinforce each other or are wildly different than a single poll with a large N.

      • stomv says

        January 14, 2010 at 4:23 pm

        I don’t know that the kos poll will ask the same questions, but it will almost certainly be done with the same technique, since it’s the same polling firm.

        Maybe I’ll ask the question this way: if you have a poll with N=1000 and you split it (randomly) into two N=500 polls, are you better off?  Isn’t that the same question, given that we’re controlling for polling method (same firm)?

        To control for everything — question, method, firm, etc.: better to have two N=500 polls, one N=1000 poll, or (c) does it not matter, because the N=1000 poll is just two N=500 polls?!?!

        • sco says

          January 14, 2010 at 4:38 pm

          If I reach into a bag of different colored marbles and grab a handful, I’m not always going to get the same handful every time, even though I’m using the same technique (stick hand in bag, grab as many as will fit).

          To answer your question about which is better, two N=500 or one N=1000, with the assumption that they’re both taken by the same firm on the same day: I would say it depends on what you’re after.  If you want to break down gender/age/region/etc., then the larger one is a better bet, since your subset N’s will be bigger.  If you don’t care about that, then having two polls could either self-reinforce or show you that there might be something wrong (that is, non-representative) with your sample.

          But that’s just my opinion.  An old man like me has been conditioned to think of these things as prohibitively expensive, so large sample sizes strike me as extravagant.

    • shillelaghlaw says

      January 14, 2010 at 4:31 pm

      At least in terms of tracking. We can see what effect Coakley’s going on the offense or Ayla’s whining has had.

    • hoyapaul says

      January 14, 2010 at 4:32 pm

      My question for cos, others: is it better to have two N=500 polls within a few days of each other at this point in the race, or better to have one N=1000 poll?

      It’s probably quite a bit better to get two N=500 polls than one N=1000 poll, especially at the late stages of a race like this one. Theoretically, if you took two identically worded and screened polls at the same time, both with a sample of 500 respondents, 95% of the time each candidate’s numbers would be within 4% of the other poll — but much more likely within 2% (or closer). Essentially, this is what you’re doing when you do an N=1000 poll.

      Having two polls — even at N=500 — a few days apart is more valuable, however, because this is the stage of the race when preferences change quickly (as voters get tuned in). I’d rather have two smaller polls at different times than a larger poll capturing one moment in time.
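
      A toy simulation of the trade-off being discussed here (my own sketch, assuming identical methodology and a fixed “true” Coakley share of 49%; it captures only sampling noise, not the question-wording, screen, and timing effects the commenters stress): pooling two N=500 samples behaves like one N=1000 poll, while keeping them separate also shows how far apart two honest polls can land.

```python
import random
random.seed(1)

TRUE_COAKLEY = 0.49   # assumed "true" support, for illustration only
TRIALS = 5_000

def poll(n):
    """Coakley share among n simulated respondents."""
    return sum(random.random() < TRUE_COAKLEY for _ in range(n)) / n

gap = err_pooled = err_big = 0.0
for _ in range(TRIALS):
    a, b = poll(500), poll(500)
    gap += abs(a - b)                              # disagreement between the two small polls
    err_pooled += abs((a + b) / 2 - TRUE_COAKLEY)  # error if you pool them into one N=1000 sample
    err_big += abs(poll(1000) - TRUE_COAKLEY)      # error of a single N=1000 poll

print(f"avg gap between two N=500 polls:     {100 * gap / TRIALS:.1f} pts")
print(f"avg error of the pooled N=500 pair:  {100 * err_pooled / TRIALS:.1f} pts")
print(f"avg error of one N=1000 poll:        {100 * err_big / TRIALS:.1f} pts")
```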

  7. david says

    January 14, 2010 at 4:54 pm

    Our friends at RMG are working overtime these days.  Within an hour of this post going up, they had one by Mike Rossettie “debunking” Research 2000 as a respectable polling organization, claiming that R2000 routinely shows Dems doing better than they should.

    Of course, the post is bullshit — a brilliant exercise in cherry-picking results to fit your theory, and ignoring others.  Responding in detail is frankly not worth my time.  But here are a couple of things to consider.

    First, of the 14 polls that Rossettie chose for his post, R2000’s result in 10 of them was within the margin of error of what actually happened in the election.  Guess what, Mike — they call them “margins of error” for a reason.  If a poll result is within the margin of error of the actual result, the poll was right.  So you can toss those 10 (in several of which, by the way, R2000 had the R overperforming) right out the window.

    Second, of the remaining four, in one of them (NM-Sen) R2000 showed the Dem doing worse than he actually did.  And of the other three, sure, R2000’s result was outside the MoE of the actual result.  That doesn’t mean they were “wrong,” since things often change quickly before an election.  And even if they were wrong, so what?  It happens to all pollsters — again, they call it a 95% level of confidence (rather than a 100% level) for a reason.  And by the way, Rossettie of course doesn’t mention that R2000 was one of the very few pollsters in the country to have accurately predicted what was going to happen in another highly contested, teabagger-infested special election, namely NY-23.  (Amusingly, in that race, PPP had an enormous 1,754 likely voter poll that showed Hoffman up 17 points.  Oops.)

    Methinks Mr. Rossettie doesn’t actually know very much about how polling works.  I recommend to him our excellent summary, from which I suspect he will learn something if he bothers to read it.

  8. lightiris says

    January 14, 2010 at 7:02 pm

    I’ve heard a lot of buzz from people who were briefly enamored of Mr. Brown but have since come to their senses.  Given the media scrutiny  of his inconsistencies and, shall we dare say it, hypocrisy, I suspect the flirtation has reached its zenith.  

    • shillelaghlaw says

      January 14, 2010 at 7:07 pm

      Yeah, she’s ahead by 8 points, but this is the first poll that puts her under 50%.

      • stomv says

        January 14, 2010 at 7:11 pm

        but shows that nobody should feel good about winning this race.

        I still think Coakley wins by a 15-point margin, but that’s only if the Dems work hard through Tuesday and there isn’t a major gaffe or exposé.  If either of those assumptions fails, it’ll be a squeaker.

  9. ryepower12 says

    January 14, 2010 at 9:13 pm

    but this is a great use of PAC resources 😉

    I loves me some polls.  Unfortunately they don’t come cheap, but they’re well worth it (not to mention both fun and newsworthy for BMG).

    • charley-on-the-mta says

      January 14, 2010 at 9:24 pm

      It’s to support candidates. News on that soon … perhaps after Tuesday.

      • bob-neer says

        January 14, 2010 at 10:34 pm

        Just incidentally. 🙂

  10. the-caped-composer says

    January 14, 2010 at 11:15 pm

    . . . but, a new Suffolk poll has Brown up by 4.

    http://www.bostonherald.com/ne…

    Gulp.

    We got more work to do . . . and I don’t have a good feeling about this . . .
