What’s Your Opinion of Opinion Polling?
The founder of our Stones Cry Out group blog, Rick Brady, was much more of a student of polling than I am, but I wanted to offer a few observations of my own, such as they are.
The science of polling the general public has had its good times and its bad, and it appears to be going through one of the rough patches at the moment. Mark Olson, who also blogs here, refers to polls as “cricket races”: basically a snapshot of where things stand in a particular race, with about as much bearing on our lives as a race amongst crickets. If it’s a slow news day, release the results of a poll and call it news.
Some might put the word “science” in the phrase “science of polling” in scare quotes, not convinced that it’s much of a science at all. I do have some respect for people who work in statistics for a living, though. It can seem like a black art, but one pharmaceutical client I worked for years ago had a Quality Assurance group that tested products coming into the warehouse before they could be shipped out, and they explained quite a bit of it to me. I couldn’t repeat what they said now – I really can’t remember it all – but the gist was that, given a good random sample, they could give you a reliable reading on whether or not the batch that just came in was good enough to ship. Sure, the only way to be totally certain was to test every unit, but there was a lot of science behind getting close enough to certainty without going overboard.
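To give a rough sense of how that kind of sampling works – this is a minimal sketch of the general idea, not that client’s actual procedure, and the sample size, defect limit, and names are all invented for illustration – you can test a random slice of a batch, estimate the defect rate, and only ship if even a cautious upper bound on that estimate stays under your limit:

```python
# Minimal sketch of acceptance-style sampling (illustrative numbers only):
# test a random sample from an incoming batch, estimate the defect rate,
# and ship only if a conservative upper bound stays below the limit.
import random
import math

def inspect_batch(batch, sample_size=200, max_defect_rate=0.01, z=1.96):
    """Sample `sample_size` units at random and estimate the defect proportion.
    Ship only if the upper end of a rough 95% confidence interval is under the limit."""
    sample = random.sample(batch, sample_size)
    defects = sum(1 for unit in sample if unit == "defective")
    p_hat = defects / sample_size
    # Normal-approximation confidence interval on the defect proportion.
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    upper_bound = p_hat + margin
    return upper_bound <= max_defect_rate, p_hat, upper_bound

# Simulated batch of 10,000 units with a true 0.5% defect rate.
batch = ["defective" if random.random() < 0.005 else "good" for _ in range(10_000)]
ship, estimate, upper = inspect_batch(batch)
print(f"estimated defect rate {estimate:.3%}, upper bound {upper:.3%}, ship: {ship}")
```

The point is simply that a few hundred randomly chosen units can tell you a lot about ten thousand, as long as the sample really is random.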
Sampling people, on the other hand, is nowhere near as straightforward as sampling pharmaceuticals. People can say one thing and then do another, which apparently happened in a big way over in the UK recently, when the conservative Tories trounced the liberal Labour Party in the general election, gaining their first outright majority since 1992. This even though Nate Silver, the US polling expert, had looked at all the UK polls and proclaimed that the chance of the Tories winning a majority of seats in Parliament was “vanishingly small when the polls closed – around 1 in 500.”
So much for that prediction. But the predictive value of polls is lessened when the pollsters themselves hide some of their results, which happened in the UK and apparently happens quite a bit. No pollster wants to publish results that wind up being way out of line with those of other polls; no one wants to be the outlier. But that’s exactly what happened in the UK: a last-minute poll by one firm got the percentages virtually dead on the actual voting results, but the firm didn’t publish it, “chickening out”, as its CEO explained. It’s the same herd mentality we see in news coverage as well.
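Here is a rough simulation of why an aggregate of many polls can still be confidently wrong – the numbers are mine, invented for illustration, and this is not Silver’s actual model. Averaging more polls shrinks each poll’s random sampling noise, but if every pollster shares the same systematic error (people saying one thing and voting another, or pollsters herding toward each other), that shared error never averages away:

```python
# Illustrative simulation (invented numbers, not Nate Silver's model):
# averaging many polls reduces random noise, but a bias shared by every
# pollster remains no matter how many polls you average.
import random
import statistics

TRUE_LEAD = 6.5      # hypothetical actual margin, in points
SHARED_BIAS = -6.0   # hypothetical error every poll shares (e.g., "shy" voters, herding)
POLL_NOISE = 2.0     # each poll's own random sampling error, in points

def run_polls(n_polls):
    """Each poll sees the true lead plus the shared bias plus its own random noise."""
    return [TRUE_LEAD + SHARED_BIAS + random.gauss(0, POLL_NOISE)
            for _ in range(n_polls)]

for n in (5, 20, 100):
    polls = run_polls(n)
    avg = statistics.mean(polls)
    spread = statistics.stdev(polls)
    print(f"{n:3d} polls: average lead {avg:+.1f} (spread {spread:.1f}), "
          f"true lead {TRUE_LEAD:+.1f}")
# The polls agree closely with one another, which looks like certainty,
# but they all agree on the wrong answer.
```

That, in a nutshell, is how a forecast can call an outcome a 1-in-500 long shot and still be badly wrong: the math rewards polls for agreeing with each other, not for agreeing with the voters.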