As Florida’s cities navigate complex policy discussions and engage with residents, polling often plays an important role in shaping public understanding and decision-making. But not all polls are created equal—and knowing how to evaluate them is essential.

To help demystify the process, the Florida League of Cities is pleased to share this practical primer from Steven J. Vancore, President of Clearview Research. Drawing on decades of experience, Vancore outlines what to look for in a credible poll—from sample selection and methodology to common pitfalls that can lead to misleading results.

Whether you’re reviewing survey data, communicating with constituents, or engaging in policy conversations, this resource offers clear, straightforward guidance to help you ask the right questions and interpret polling with confidence.

Evaluating and Utilizing Polls: A Primer

Randomness and Representativeness

The first thing to assess is how the sample was drawn. Ask whether the sample was random and representative of the likely voting population.

Randomness

  • Who chose whom? Did the pollster choose the respondent (good) or did the respondent choose the pollster (bad)?
  • Opt-in polls (like many internet polls) can introduce bias and produce unreliable results.
  • Incentivized polls may encourage participants to game the system.
  • Did everyone in the population have an equal chance of being included?

Representativeness

  • Does the sample reflect the population?
  • Is it balanced by age, gender, party affiliation, geography, and ethnicity?
  • Some pollsters use stratification to ensure balance; others use weighting after data collection.

Key Terms

Validity – How accurately the poll’s results reflect the views of the full population.

Reliability – Whether the poll would produce similar results if repeated.

Sample (n) – The number of respondents.

Population (N) – The total number of possible participants.

Margin of Error – The range within which the true population figure is likely to fall. Larger samples generally have smaller margins of error. This statistic reflects natural sampling variation, not population size, and assumes sound methodology.

  • 300 respondents: +/- 5.7%
  • 400 respondents: +/- 4.9%
  • 500 respondents: +/- 4.4%
  • 1000 respondents: +/- 3.1%
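The figures above follow from the standard margin-of-error formula for a proportion at roughly 95% confidence, using the worst-case assumption that opinion is split 50/50. A minimal sketch of that calculation (the function name and structure are illustrative, not from any polling package):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate margin of error for a proportion.

    z = 1.96 corresponds to ~95% confidence; p = 0.5 is the
    worst case (widest interval), the convention pollsters use.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (300, 400, 500, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f}%")
# n =  300: +/- 5.7%
# n =  400: +/- 4.9%
# n =  500: +/- 4.4%
# n = 1000: +/- 3.1%
```

Note that quadrupling the sample only halves the margin of error, which is why most public polls stop in the 400–1,000 respondent range.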

Demographics – Characteristics like age, race, gender, geography, and party affiliation.

Weighted Sample – Adjustments made after data collection to better match the population.
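The adjustment behind a weighted sample can be shown with a simple hypothetical: if a known demographic group is underrepresented among respondents, each of its answers is counted slightly more. The shares below are invented for illustration, not real data:

```python
# Hypothetical shares: women make up 53% of the likely electorate
# but only 45% of the people who completed the survey.
population_share = {"women": 0.53, "men": 0.47}  # known (assumed) electorate
sample_share = {"women": 0.45, "men": 0.55}      # who was actually interviewed

# A basic post-collection weight: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Women receive a weight above 1 (counted more heavily),
# men a weight below 1, so the weighted sample matches the electorate.
```

The heavier the weighting required, the less the raw sample resembled the population, which is why the questions below ask how heavily weighting was applied.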

Polling Methods

  • IVR (Interactive Voice Response) – Automated ‘robo-polls’ suitable for short questions.
  • Panels – Large groups of respondents who participate in online surveys.
  • Text-to-Web – Text messages sent to voters directing them to complete a survey online.
  • Mixed-Method Polls – Combining multiple techniques (e.g., phone and online).

Other Considerations

  • Duration: Polls that stay in the field longer than a week may signal issues, since opinion can shift during that time.
  • Length: Surveys longer than 12–14 minutes may cause drop-off.
  • Ballot Language: For referenda, questions should match actual ballot wording.
  • Demographics: In diverse regions, subgroup distinctions may matter.
  • Outliers: Be cautious of polls that differ significantly from others.
  • Mixed Methods: May raise consistency concerns.

Questions to Ask

  • Were respondents ‘likely voters’ or just ‘adults’?
  • Does the sample reflect the target population?
  • Where did the contact list come from?
  • Were respondents incentivized?
  • Was weighting applied, and how heavily?
  • Is the pollster reputable?
  • Who sponsored the poll and why?