March 20, 2005
Critical Review of The Unexplained Exit Poll Discrepancy
Finally finished my "Critical Review of The Unexplained Exit Poll Discrepancy." Thanks to everyone who reviewed earlier drafts of this paper and provided comments. The PDF of the paper is available here, but please do not hot-link to it or re-host it on another server. If you want to refer readers to the paper, please send them to this post. Thanks.
ABSTRACT: Dr. Steven F. Freeman, a visiting University of Pennsylvania (UPenn) professor, is not an “expert” on exit polls or the 2004 Presidential exit poll discrepancies, as suggested by this UPenn press release. In fact, his paper, The Unexplained Exit Poll Discrepancy, is highly flawed. His argument that “in general, exit poll data are sound” fails because it suppresses evidence, and his conclusion that “it is impossible that the discrepancies between predicted and actual vote counts in” Ohio, Florida, and Pennsylvania is not statistically substantiated. Nevertheless, Dr. Freeman is right in concluding that explanations of the discrepancy to date are inadequate, and Edison/Mitofsky should address the concerns of US Count Votes in subsequent analyses of their data.
Dr. Freeman wrote a book based on his research that is due out in a couple of months and has a couple of working papers in progress. If The Unexplained Exit Poll Discrepancy is any indicator of the quality of research included in these forthcoming works, I suggest that his publishers take a closer look at the manuscripts.
Posted by Rick at March 20, 2005 04:20 AM
Trackback Pings
Listed below are links to weblogs that reference Critical Review of The Unexplained Exit Poll Discrepancy:
» Steven Freeman's Flawed Exit Poll Report from Myopic Zeal
Rick Brady over at Stones Cry Out has an exposé debunking University of Pennsylvania Professor Steven Freeman's paper, "The Unexplained Exit Poll Discrepancy."
Check it out. (HT: Tapscott).... [Read More]
Tracked on March 20, 2005 10:51 PM
Comments
Nice... I was glad to see this paper. So many of these arguments were of the form of, "If I understand exit polling, then it was definitely fraud." Which is technically a true statement, except that the blogosphere drew the wrong conclusion of the two possible...
Posted by: tunesmith at March 21, 2005 10:54 PM
Rick,
As an analyst, you are very likely to be aware that it is ten times easier to take analytical potshots at someone else's work than to do your own.
Accordingly, it is now your turn to make a list of predictions. In order for your critique of Freeman's and USCV's work to rise above the level of cheering or booing from the sidelines of the issue, you need to state publicly what results will confirm or disconfirm the contentious hypothesis (that mistabulation, rather than differential participation, may be the main driver of the exit poll discrepancies in November 2004). All the required information exists, and will be publicly available as soon as E/M releases poll by poll data. To give just one of many possible examples, I would predict, in the case that differential participation *is indeed* the main driver of WPE:
In the same general geographical area, precincts in which the average voter age is older, and the interviewer age is younger, will show both less participation and more of a discrepancy towards Kerry than precincts in which the average voter age is younger, and the interviewer age is older.
Good Luck,
Webb Mealy, PhD
Posted by: Webb Mealy at March 23, 2005 09:25 PM
Dr. Mealy, thanks for your comment! First, I believe that my paper had quite a bit of analysis in it. I explained the limits of the dataset with which Dr. Freeman worked, and I demonstrated that, given his dataset, we cannot tell whether the differences are significant. We simply do not know from Dr. Freeman's dataset. It is too fuzzy.
About the hypothesis testing question you raised. Given that you are a PhD, you are probably aware that with classical hypothesis testing you can't really "confirm" or "disconfirm" a hypothesis. All you can do is reject a null hypothesis at some level of confidence (standard is 95%). Even this type of hypothesis testing has the possibility of Type I and Type II errors.
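As a concrete sketch of the kind of test this refers to (the numbers below are invented for illustration and are not from Freeman's data or the NEP dataset), here is a simple two-sided z-test of whether a precinct's exit-poll proportion differs from the official-count proportion at the conventional 95% level:

```python
import math

def two_sided_z_test(p_hat, p0, n):
    """Two-sided z-test of an observed poll proportion p_hat against a
    hypothesized true proportion p0, given a sample of n interviews."""
    se = math.sqrt(p0 * (1 - p0) / n)          # standard error under H0
    z = (p_hat - p0) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical precinct: exit poll shows 54% for Kerry on 400 interviews;
# the official count shows 50%.
z, p = two_sided_z_test(0.54, 0.50, 400)
reject_null = p < 0.05  # can we reject H0 at the 95% confidence level?
```

Note that failing to reject the null here (p is roughly 0.11) does not "confirm" that the counts were accurate; it only means this one precinct's discrepancy is within sampling error, which is exactly the Type I/Type II point above.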
So, I'm curious then about your suggested test of the reluctant Bush hypothesis: "In the same general geographical area, precincts in which the average voter age is older, and the interviewer age is younger, will show both less participation and more of a discrepancy towards Kerry than precincts in which the average voter age is younger, and the interviewer age is older."
I can understand "older," "younger," "less participation," and "more discrepancy," as these are interval-level variables, but how do you operationalize "in the same general geographical area," and why? What test would you apply? Why?
BTW - Mystery Pollster might be a better venue for this discussion as his readers are much more statistically inclined and knowledgeable in polling methodology and hypothesis testing than I am. www.mysterypollster.com and look for his link to and discussion of my paper. But, if you want to keep it here, that is fine as well.
Posted by: Rick Brady at March 23, 2005 10:35 PM
Rick,
Thanks for replying. Of course you did a lot of analytical work--all of it apparently in service of shooting Freeman down.
That is relatively easy, both rhetorically and scientifically speaking. What isn't so easy is to do original analysis that results in a hypothesis as to what you think happened and how. And then to make predictions as to what the data--which you have not yet seen--will show if you are correct, and what the data will show if you are incorrect.
If you do that, then you actually put yourself in the ring, as opposed to "booing and cheering from the sidelines". If you are unwilling to commit yourself, that suggests to me that you may have one conclusion that you have decided to advocate by any rhetorical means handy, for as long as possible, and no matter what the evidence is. 99% of your audience is always going to be non-experts, so they'll never know if you cherry-pick the evidence to suit your predetermined result. (Kind of like Bush, Rumsfeld, Cheney, Rice, Wolfowitz, Bolton, and various others, in re weapons of mass destruction in Iraq.)
Getting to the question you asked me, there is a simple and specific reason why the added criterion "in the same general geographical area" has to be in place: otherwise weather, above all, will confound the result.
As you are aware, the hypothesis of differential participation predicts that participation will decrease in amounts comparable to the "red shift". That is because exit poll workers select their prospective interviewees by counting a pre-determined arbitrary number of people leaving the poll, and then always approach the n-th person. The poll is intentionally designed so that participants do not self-select, nor do pollsters get to select participants to suit themselves. The implication of this procedure, to the extent that it is followed, is: if (1) the discrepancy in 2004 stems from refusal to participate by a certain number of partisans who participated in 2000, when the polls were relatively accurate, then (2) there will generally be a corresponding drop in participation wherever there is a significant variance between the exit poll result and the official result.
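The mechanism described above can be illustrated with a toy simulation (entirely invented parameters, offered only as a sketch of the logic, not as a model of the actual 2004 data): under a strict every-nth-voter design, if one party's voters refuse at a higher rate, the precinct should show both lower participation and a poll shifted toward the other candidate.

```python
import random

def simulate_precinct(n_voters, kerry_share, refusal_kerry, refusal_bush,
                      interval=10, seed=0):
    """Systematic every-nth-voter exit poll with partisan refusal rates.
    Returns (participation_rate, kerry_share_among_completes)."""
    rng = random.Random(seed)
    approached = completed = kerry_completes = 0
    for i in range(n_voters):
        if i % interval != 0:        # approach only every nth exiting voter
            continue
        approached += 1
        is_kerry = rng.random() < kerry_share
        refusal = refusal_kerry if is_kerry else refusal_bush
        if rng.random() >= refusal:  # voter agrees to participate
            completed += 1
            kerry_completes += is_kerry
    return completed / approached, kerry_completes / completed

# Equal refusal rates: the poll tracks the true 50/50 electorate.
part_eq, kerry_eq = simulate_precinct(100_000, 0.5, 0.3, 0.3)
# Differential refusal (Bush voters refuse more often): participation drops
# AND the poll overstates Kerry relative to the count -- the "red shift".
part_diff, kerry_diff = simulate_precinct(100_000, 0.5, 0.3, 0.4)
```

The point of the sketch is the joint prediction: differential participation cannot produce the Kerry-ward discrepancy without also depressing the completion rate, which is what makes participation a testable correlate.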
My preliminary studies have indicated that weather is a major factor in participation, but that no such correlation exists between exit poll discrepancy ("red shift") and participation. I told Freeman and others that I would study this, and I predicted the result correctly. See
http://www.selftest.net/Comfort-vs-Participation.pps
By all means email me if you decide to make some predictions of your own. Perhaps we could even agree on a program of predictions and tests.
Webb Mealy
Posted by: Webb Mealy at March 25, 2005 04:18 PM
Dr. Mealy, thanks again. I didn't do any analysis or testing of my own hypotheses with Freeman's data because those data are not sound for rigorous statistical analysis.
The NEP has the data though and I have said that I agree with the US Count Votes people that more testing of that data should be done (i.e., the NEP Report built one heck of a circumstantial case, but it is not likely to appease hard data and stats geeks like you or me).
One thing for you to consider in your hypothesis above: if you talk to those within Edison/Mitofsky, they will tell you that interviewers DID NOT interview every "nth" person as instructed. In fact, WPE is correlated with the within-precinct sample selection rate, which Edison/Mitofsky suggests shows that the more opportunity interviewers had to select a respondent, the larger the WPE. This suggests that interviewers did not follow directions/training about selecting every "nth" voter. If the nth voter refused to take the poll, or the pollster missed the nth voter for some reason, they were supposed to record the refused or missed voter's age, race, and gender and count to "n" again before approaching the next voter. But the data where the highest WPE occurred do not show that. One hypothesis concerning these findings is that the exit pollsters substituted a cooperative voter for an uncooperative voter, and the cooperative ones were not a random "nth" voter. Also, you have to consider that WPE was correlated with the training and experience of the pollster. That seems to add weight to non-random sample selection.
Anyways, I think I've convinced my stats prof to let me write a paper (for credit this time) walking through the NEP Report, systematically identifying each claim they make about the data, and suggesting what types of tests could be applied to the data and what different findings based on those tests could suggest for additional analysis. It would be a whole lot of work, but I think I could do it in a way that would be objective and highly interesting to both sides of this debate.
I'll read your comment more carefully, and I'm interested in a lot of different hypotheses that could be tested with the data held by the NEP. As I said at the start of this post, without the NEP's data, rigorous statistical analysis is not possible; Freeman's dataset won't support it. But I really need to know EXACTLY how you would operationalize "geography"...
Posted by: Rick Brady at March 25, 2005 04:50 PM
Rick,
Good to hear that you are going to look into this more deeply and independently!
Unfortunately, I can't answer your question "EXACTLY" until I get access to the information from E/M about exactly which polling stations were used for exit polls in each state. Speaking in general, I think I will then compare actual exit poll results to tabulated results, poll by poll, correlating with relative participation in each case. I will see whether some of the most obvious correlates of the hypothesized differential participation can be confirmed.
To give an example that addresses your observation about the methodological consistency of E/M pollsters, I think the differential participation hypothesis predicts that a polling place right in the middle of a retirement community, in a hotly contested state, in which the pollster was young and inexperienced, will show a drop in participation, a skewing of participation towards a younger participant, and a corresponding significant discrepancy in favor of Kerry in the exit poll. Talking off the cuff, there is little doubt in my mind that if a 21-year-old E/M interviewer got discouraged with scowls, avoidances and turn-downs by elderly people, and started to "cheat" on the "every n-th voter" rule by selecting people he or she thought "looked nice", or "looked like they might be willing to participate", that the result could be heavily skewed towards Kerry.
If, as is sometimes the case, the same state had a range of weather conditions on the day, I would group polling places that had comparable weather for purposes of comparison, and I would not attempt to compare polling places that did not share the same general weather, with one exception. I would keep my eye out for the combination: worse weather, equal or better participation, significant red shift. That is a red flag for mistabulation. There are already a number of possible indicators of that sort at the state level (see the PowerPoint show mentioned in my previous comment), and a lot stands to be learned by getting more detail.
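The grouping-and-flagging logic just described can be sketched in a few lines (the precinct records, field names, and threshold below are all invented for illustration; nothing here reflects real E/M data):

```python
# Toy precinct records: participation is the completion rate among voters
# approached; red_shift is exit-poll Kerry share minus official Kerry share.
precincts = [
    {"id": "A", "weather": "rain",  "participation": 0.62, "red_shift": 0.061},
    {"id": "B", "weather": "clear", "participation": 0.58, "red_shift": 0.010},
    {"id": "C", "weather": "rain",  "participation": 0.49, "red_shift": 0.012},
]

def mistabulation_flags(precincts, shift_threshold=0.05):
    """Flag precincts showing the 'red flag' combination: worse weather,
    participation at or above the good-weather baseline, and a large red
    shift -- the pattern differential participation cannot easily explain."""
    clear = [p["participation"] for p in precincts if p["weather"] == "clear"]
    baseline = sum(clear) / len(clear)   # good-weather participation baseline
    return [p["id"] for p in precincts
            if p["weather"] == "rain"
            and p["participation"] >= baseline      # equal-or-better turnout to poll
            and p["red_shift"] >= shift_threshold]  # large poll-vs-count gap

flags = mistabulation_flags(precincts)
```

In this toy example only precinct "A" is flagged: it had bad weather yet better participation than the clear-weather baseline, combined with a sizable red shift, so differential nonresponse is a poor fit and the precinct would warrant a closer look.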
Webb Mealy
Posted by: Webb Mealy at March 25, 2005 08:35 PM
Dr. Mealy,
Unfortunately there is little... actually... zero chance of finding out which precincts were polled. One thing that Edison/Mitofsky absolutely nailed (according to the NEP Report) is the precinct sample. That is, when you extrapolate from the election tallies of the sampled precincts, they matched the statewide and national tallies almost perfectly. They are VERY happy with the precincts they sampled, and I suspect it took them quite a while to pin down a sample of precincts that was representative of the state or nation. They aren't about to disclose which precincts they polled.
If you have not read the recent paper by doctors Traugott (UMich), Highton (UC Davis), and Brady (Berkeley), I encourage you to do so. Here's a quote from their paper:
The information on the exit poll methodology is still being consumed by independent analysts, and there are now calls for the release of raw and supplementary data from sample precincts. This would include contextual data about the vote history in those areas as well as information about the interviewers. This is unlikely to happen, and for justifiable reasons. Such information would be too politically sensitive in that disclosure of the sample sites could subject the exit poll interviewing to manipulation by political organizations and interest groups on Election Day if the same sites are always chosen (p. 13).

So, my future work will accept this reality and require only a precinct dummy variable to protect the identity of the polled precincts.
Also, note the NEP Response to queries from Mark Blumenthal for more specific data:
They feel that if they identify the polling locations it might be possible for a computer match to identify a small portion of actual individuals in the data. Some precincts are small enough that it would be possible to identify actual voters from their demographic data. They also feel that any effort to provide a precinct level estimate of actual vote or "within precinct error" would allow a user to identify the actual precinct and, theoretically at least, identify actual voters.
I will leave it to the reader to evaluate this rationale except to say this: The protection of respondent confidentiality is not some minor technicality. It is arguably one of survey research's most important ethical bedrocks. No pollster or survey researcher should ever consider it a trifling matter.

Let's just say that I buy the argument for protection of precinct identity put forth by Traugott, Highton, and Brady more than the argument that WPE or more specific information by precinct could allow identification of specific exit poll participants...
Posted by: Rick Brady at March 25, 2005 10:18 PM
Rick,
I hadn't followed the various discussions closely enough to have realized that E/M did not intend to disclose everything when they made their archive available to the public.
Be that as it may, I dissent from a couple of your statements. First, that "Unfortunately there is little... actually... zero chance of finding out which precincts were polled."
It may be a pain that E/M will not disclose which precincts they polled, but that doesn't make the information impossible to come by. I know some election officials in the particular state I have my eye on, and they may well know which precincts were chosen. Even if they don't, the poll workers know, and so do many hundreds of other people--not just the voters who participated. It will require some extra time, energy, and money to locate the exact precincts that were polled, but it will by no means be impossible.
Secondly, I thought--perhaps mistakenly--that part of the E/M methodology was that the polling places were randomly chosen for each election. Can you point me to a statement by E/M that can correct my understanding on this?
Finally, apropos of nothing, I'd like you to contemplate this scenario. Paperless electronic voting machines have been used in roughly half the polling places in state X in November 2004. These machines have been designed so that they scramble the order of votes cast, nominally in order to protect the privacy of voters. However, a significant effect of this is to disguise the presence of any automatic vote-altering algorithm. The exit poll consortium says that in general there is twice as much discrepancy between the exit poll and the tabulated results in state X in precincts where paperless voting machines have been used. Nonetheless, the exit poll consortium also will not release to the public their raw results, actual precinct by actual precinct. Its argument is that it must protect the privacy of voters against even the outside chance that someone might surmise who it was that answered a given questionnaire. It says that it is acting out of a responsibility to protect the integrity of the democratic process.
Does anything strike you as paradoxical about such a combination of circumstances?
Webb Mealy
Posted by: Webb Mealy at March 31, 2005 05:31 PM
Dr. Mealy, I'm not ignoring you, just simply crushed with work and I'm making my way through the latest US Count Votes study. Have you seen it? http://uscountvotes.org/
Posted by: Rick Brady at April 2, 2005 03:16 AM
Yes, I've seen it. I can't claim to understand the most technical aspects, however. I'm a forensic analyst, but I don't have formal training in statistics. When I've done some of the studies I have in mind, I will need to call on others (such as the USCV stats team) to confirm or disconfirm my results using statistical models.
Webb Mealy
Posted by: Webb Mealy at April 5, 2005 05:42 PM