Why Pollsters Keep Getting It Wrong: Fraud, Technical Challenges, and Laziness
By Mike Riley, Riley Research

As a research practitioner for more than 40 years, I find it unfortunate to observe people’s sense that the polling industry has reached a low point. In the early days of survey research, many were skeptical as well, with questions about how a small sample of the population could possibly represent the perceptions and opinions of the entire population.

Scientific research techniques are based on the presumption that, through random sampling, every member of a targeted population has an equal probability of being selected for that sample. Even 40 years ago, we knew that was unlikely (if not impossible), yet most of the time, if the researcher did their job well, reasonable efforts to achieve a random sample would indeed produce an accurate reflection of the larger population.

Here’s how random sampling is supposed to work: Imagine the population you want to survey is like a large cauldron of stew. There will be a wide variety of ingredients, including potatoes, meat, carrots, peas, corn, onions, garlic, and a host of spices. Now imagine blending that stew into a fine puree. By randomizing the contents, it becomes possible to draw a small cup of that stew with confidence that all of the items will be present in that cup, and in the exact same proportions as in the large cauldron. One sample after another will contain all the ingredients of the larger universe.
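To make the analogy concrete, here is a minimal sketch in Python. The party labels and proportions are made up for illustration; the point is simply that a small cup drawn at random from a well-blended cauldron mirrors the whole pot:

```python
import random

random.seed(42)

# Hypothetical "cauldron": 100,000 voters with made-up party shares.
population = (["Democrat"] * 38_000
              + ["Republican"] * 35_000
              + ["Independent"] * 27_000)
random.shuffle(population)  # puree the stew

# Draw one small cup: a simple random sample of 1,000.
cup = random.sample(population, 1_000)

for group in ("Democrat", "Republican", "Independent"):
    pop_share = population.count(group) / len(population)
    cup_share = cup.count(group) / len(cup)
    print(f"{group:<12} cauldron {pop_share:.1%}   cup {cup_share:.1%}")
```

Run it and each group’s share in the cup comes out within a point or two of its share in the cauldron, sample after sample.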

If the theory of random sampling works, why do we see poll after poll (in election after election) whose findings bear little resemblance to the outcome? Three common themes help explain these massive failures: fraud, technical challenges, and laziness.

Fraud, sometimes with malicious intent, has shown its ugly face through the frequent use of flawed polls based on unbalanced samples and non-scientific approaches. Through voter lists, the proportions of Democrats, Republicans, and other party members are readily known, and quotas can be managed to produce the expected turnout. Yet in recent years, we’ve seen numerous frequently cited polls that over-represent one side or the other. For the November 3, 2020, election, pollsters should have been modeling 2016 numbers, while adding newly registered voters, to create a sample universe likely to project the 2020 turnout.

These kinds of design flaws are not hard to spot, which is part of why voters are becoming increasingly cynical and mistrustful. Senator Lindsey Graham was recently targeted as vulnerable, based on a fraudulent and unrepresentative South Carolina poll designed to make Democrat donors believe that Graham and his challenger were tied. The challenger’s campaign raised millions on the strength of that poll, yet Graham won as expected, by a large margin.

Technical challenges have also led to declining accuracy. In the early years, building a sample was as easy as highlighting names selected at random from the local phone books that everyone used. In the modern era, huge numbers of voters have no location-based phone line, use only mobile communications, and don’t answer calls from people not on their contact list. These technicalities often mean that no matter how many people respond to a poll, there are growing segments of the population the pollsters may never reach. That cauldron of stew was blended before all the ingredients were added.

Creating a probability sample today takes great effort, creativity, and flexibility. Younger people are more reachable via smartphones and online channels, while seniors are often still approachable on landlines. Voters can be identified by how often they vote, and most “likely voter” polls target only those with a history of showing up in election after election. Some pollsters have begun building sophisticated samples that consider more than just party affiliation, also looking at characteristics such as education and geography (urban, suburban, and rural), with outreach via landlines, mobile phones, and online options. Certain voters may only be reachable through trusted sources, such as membership organizations or social media.
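As a rough illustration of that kind of design, the sketch below allocates interview quotas across geography-by-education strata in proportion to each stratum’s share of the electorate. The strata and shares are hypothetical, not drawn from any real voter file:

```python
# Hypothetical strata and electorate shares -- illustrative only.
electorate_shares = {
    ("urban", "college"): 0.18,
    ("urban", "no college"): 0.14,
    ("suburban", "college"): 0.17,
    ("suburban", "no college"): 0.21,
    ("rural", "college"): 0.09,
    ("rural", "no college"): 0.21,
}

def quota_targets(total_interviews):
    """Allocate interviews to each stratum in proportion to its share."""
    return {stratum: round(total_interviews * share)
            for stratum, share in electorate_shares.items()}

for (geography, education), quota in quota_targets(1_000).items():
    print(f"{geography:>8} / {education:<10}: {quota} interviews")
```

The design choice here is the whole game: if a stratum is hard to reach by one mode, the pollster has to fill its quota through another (mobile, online, trusted intermediaries) rather than letting easier-to-reach groups stand in for it.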

Trust is critical, because savvy or suspicious voters may be reluctant to participate based on the approach or the way the questions are framed. Regardless of the methods used, the make-up of the sample determines how accurately the results will reflect the minds of the voters.

While it’s important to hear from enough qualified respondents, large samples without side-boards (controls on who gets into the sample) are meaningless at best and deceptive at worst. No information at all is better than faulty information.
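A quick worked example of why size alone doesn’t help: the margin of error for a sample proportion is roughly z * sqrt(p(1 - p) / n), so it shrinks as the sample grows, but that formula only measures sampling error, not bias. A sketch, with assumed numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Sampling error shrinks as the sample grows...
for n in (500, 1_000, 100_000):
    print(f"n = {n:>7,}: +/- {margin_of_error(0.5, n):.1%}")

# ...but if the sample over-represents one side by five points,
# that skew is bias, not sampling error, and no sample size fixes it.
```

A 100,000-person poll drawn from a lopsided universe reports a margin of error under one point while sitting five points off the truth.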

Laziness, on the part of both the pollsters and the media who amplify their work, has also contributed to the growing distrust. It’s been said that the media use polls like a drunk uses a lamppost: more for support than for illumination. Lazy reporters cluelessly report the results of one poll after another as if each had Biblical credibility, abandoning common-sense questions and ignoring 180-degree contradictions with last week’s poll. Too many newsrooms haven’t taken the time to understand what makes a poll credible and often equate sample size with accuracy.

A few companies were more accurate this season. Some have found more innovative ways to approach voters, such as emphasizing confidentiality or allowing respondents to say how they think their neighbors would respond. Asking how others might answer is a projection technique, one that provides a sense of anonymity when it comes to controversial candidates or issues.

My advice: if a polling company appears on your caller ID, consider answering. If it appears the company is promoting an agenda, don’t hesitate to hang up. But if the topics seem relevant and the approach appears objective, consider participating. After all, no one wants a flavorless stew, and everyone is entitled to your opinion.

Michael J. (Mike) Riley, Research Director
RileyResearch.com
