Polls are a record of public opinion. Here are polls taken on Monday, November 7, the day before election day, from very reliable organizations.
- Clinton 44%, Trump 41%, Johnson 4%, Stein 2% (Bloomberg)
- Clinton 44%, Trump 39%, Johnson 6%, Stein 3% (Reuters/Ipsos)
- Clinton 45%, Trump 41%, Johnson 5%, Stein 2% (Economist)
- Clinton 48%, Trump 44%, Johnson 3%, Stein 2% (FOX News)
What happened?
Polls have been a part of elections since the country was founded. The language of the Declaration of Independence requires we function with “the consent of the governed.” But this election shook up a lot of things. One of them was our faith in polls.
Should we conclude that polls, and the people who conduct them, no longer know what they’re doing? Or is it that good analysis always depends on quality data and a sound methodology?
Judge for yourself. Here are 10 very real reasons polls get it wrong.
- SAMPLING: Probability sampling is the fundamental basis for all polls. The basic principle: A randomly selected small sample of a population represents the attitudes, opinions and projected behavior of all people. But random samples almost never occur organically.
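The core idea of probability sampling can be sketched in a few lines of Python. The population size, candidate shares and sample size below are made-up numbers for illustration only:

```python
import random

# Hypothetical population of 1,000,000 voters, 52% of whom back candidate A.
random.seed(42)  # fixed seed so the sketch is reproducible
population = ["A"] * 520_000 + ["B"] * 480_000

# A simple random sample of 1,000 voters should mirror the whole population.
sample = random.sample(population, 1_000)
share_a = sample.count("A") / len(sample)
print(f"Sample estimate of A's support: {share_a:.1%}")
```

In a sample this size the estimate typically lands within a few points of the true 52% — which is exactly the property that breaks down when the sample is not actually random.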
- SAMPLE RESPONSE RATES: Different groups respond at different rates. For example, women and older Americans tend to answer the phone more often, and phone interviews are still how most polls are conducted. This throws off the sex and age ratios of the sample. Instead of relying exclusively on random-digit dialing, pollsters take the extra step of adjusting, or weighting, results to match the demographic profile of likely voters.
- NON-RESPONSE RATES: Adding to the problem of creating a random sample, response rates are way down. In 1997, Pew Research, a well-respected research and polling organization, reported a telephone response rate of 36%. By 2012, Pew reported a downward trend to an average response rate of 9%.
- WEIGHTING: Since it is virtually impossible for a polling company to obtain a truly random sample, much less get every selected participant to answer the phone, weights are assigned to the demographic characteristics of the respondents so that the total sample matches the latest demographic estimates available from the U.S. Census Bureau. Weighting has a major impact on the results of polls.
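A minimal sketch of how demographic weighting works, using invented numbers: suppose 65% of respondents are women, but Census figures put women at roughly 52% of the population. Each group's answers are scaled by its population share rather than its (skewed) share of the sample:

```python
# Hypothetical raw sample, skewed toward women by differing response rates.
sample_counts  = {"women": 650, "men": 350}    # respondents by sex
support_a      = {"women": 390, "men": 140}    # respondents backing candidate A
census_targets = {"women": 0.52, "men": 0.48}  # illustrative population shares

total = sum(sample_counts.values())

# Unweighted estimate: just pool every respondent.
unweighted = sum(support_a.values()) / total

# Weighted estimate: each group's support rate times its population share.
weighted = sum(
    census_targets[g] * (support_a[g] / sample_counts[g])
    for g in sample_counts
)

print(f"Unweighted support for A: {unweighted:.1%}")  # 53.0%
print(f"Weighted support for A:   {weighted:.1%}")    # 50.4%
```

Here the 2.6-point swing comes entirely from the weighting step, which is why the choices pollsters make about weights matter so much.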
- CENSUS RESULTS: Census results reflect hard facts such as age, race, address and family size. They do not reflect characteristics like religion and group affiliations, the beliefs and values that are more likely to determine how people act.
- BRADLEY EFFECT: We don’t always do what we say in polls. It’s called the Bradley Effect, after Tom Bradley, an African-American candidate for governor of California in 1982. Polls incorrectly predicted he would win. Looking back, experts think that’s because people told pollsters they would vote for Bradley, even though they didn’t plan to, in order to avoid sounding racist.
- PHONE SURVEYS: The majority of political polls are still conducted by phone, partly because someone’s email address is more private and protected than their phone number. Phone surveys are a fairly antiquated way to conduct research in the computer age. On the phone, the Bradley effect is more likely to occur than online, because another person is hearing and recording your answers. CNET reported that Trump polled substantially better online than in polls conducted over the phone.
- GROUPS: Census numbers can tell us how many Asian-Americans live in a particular state. They can’t reliably tell us how many conservatives or evangelicals live there, or identify groups that systematically exclude themselves from polls at higher rates than others. There’s no easy way to fix the problem and know which groups someone belongs to.
- MULTIPLE AFFILIATIONS: Even if pollsters could reliably align weighted samples with groups, none of us are singularly dedicated to one group. We have multiple affiliations. We belong to a particular religion, participate at a certain level in community affairs and have specific views on the environment. So, even if polls could accurately correlate Census information with groups, there are multiple factors and sub-segments to consider.
- EXIT POLLS: In any race, there is a fascination with who is likely to win, so exit polls are conducted to gauge how the race is going. They’re usually based on a sample of a few dozen precincts in a specific state, often including not many more than 1,000 respondents. Like every other type of survey, they’re subject to a margin of error from sampling, plus additional error from various forms of response bias.
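For a sense of scale, the textbook margin-of-error formula for a simple random sample can be computed directly. This sketch deliberately ignores clustering by precinct and response bias, both of which push real exit-poll error higher than the formula suggests:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# An exit poll of about 1,000 respondents splitting roughly 50/50:
moe = margin_of_error(0.5, 1000)
print(f"Margin of error: +/- {moe:.1%}")  # about +/- 3.1 points
```

A race separated by 2 points therefore sits well inside the sampling error of a 1,000-person poll, before any of the other problems on this list even come into play.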
Do these reasons help explain how polls get it wrong? Does your organization need guidance understanding data and its results?