Is the survey mechanism broken?

Few would deny that it has been damaged by a number of factors in recent years. With that in mind, I read a May speech by Scott Keeter, director of survey research for the Pew Research Center, delivered in his capacity as president of the American Association for Public Opinion Research.

The foundation of statistically sound survey research is that every member of a population has an equal chance of being included. Yet survey professionals have gone from expecting 30 to 50 percent response rates to single digits. So whose voice might be missing, and can surveys still produce representative results?
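
To see why this matters, consider a back-of-the-envelope simulation (the numbers are hypothetical, not Pew’s). If the people who respond differ even slightly from those who don’t, a falling response rate turns a small tilt into a large error:

```python
import random

random.seed(1)

# Hypothetical population: exactly 50% hold opinion X (coded 1).
POP = 1_000_000
population = [1] * (POP // 2) + [0] * (POP // 2)

def simulate(base_rate, bias=0.02, n_contacts=20_000):
    """Contact a random sample; let holders of X be slightly
    likelier to respond (base_rate + bias vs. base_rate)."""
    contacts = random.sample(population, n_contacts)
    responses = [x for x in contacts
                 if random.random() < base_rate + bias * x]
    return len(responses) / n_contacts, sum(responses) / len(responses)

for rr in (0.40, 0.09):
    achieved, estimate = simulate(rr)
    print(f"response rate ~{achieved:.0%}: "
          f"estimated share holding X = {estimate:.3f} (truth: 0.500)")
```

At a 40 percent response rate, the tilt barely moves the estimate; at single digits, the very same tilt shifts it by roughly five percentage points – and no margin-of-error formula will flag it.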

Keeter also explores the pull of emerging methodologies. “Whether we are talking about opt-in internet panels, which have been around for a while, or the non-survey methods such as automated content coding of social media, the integration of data from what has been called the ‘internet of things,’ and from so-called big data more generally, these have drawn interest and resources away from traditional surveys.”

Researchers have never had access to so much information about opinions, attitudes and behaviors as they do today – data that can be linked to surveys for richer interpretations and more robust findings. However, such social data has not received the level of scrutiny that has long characterized survey research.

If we must edge away from the probability model to explore the new frontier, we are right to worry about whether our findings are representative, valid and reliable. Bad data can drive out good, especially when people don’t distinguish between fact-checked reporting and on-the-fly opinion postings.

“Information not only informs policy making, but serves as a political weapon,” says Keeter. “Perhaps it always has, but I have a sense that bad information, whether it’s junk science, economics or polling data, is now more widespread.”

Frank Ovaitt is President and CEO of the Institute for Public Relations.


12 thoughts on “Are Surveys Broken?”

  1. Thanks for starting this discussion, Frank. Some things have not changed since I was the opinion research guy at Illinois Bell in the 1980s. Employees are still being oversurveyed, and academicians still think student research subjects are representative of normal humans.

    Years ago the invention of desktop publishing resulted in the proliferation of bad amateur graphic design, and it’s not surprising that wonderful tools like SurveyMonkey are having the same effect on research.

  2. What a great discussion!

    I considered blogging about it and sending it to my mailing list, but it then occurred to me that very few of the folks who follow my blog or who are on my list would understand what we’re writing about. Dave Dozier talks about grad students using inferential statistics on data generated from convenience samples. One would think that at least these grad students would know better. But how many practicing PR professionals understand what a random sample is or why it’s important? How many know what inferential statistics are, even when they are quoting them?

    While cost is definitely a driving factor, as is an admirable desire for knowledge about whatever business issue is being explored, so, too, is ignorance of research methodology among information providers and decision makers.

    Standards are a great idea. But how do we educate practitioners about the need for them? Or do we? Perhaps we should, instead, encourage general practitioners to leave survey research to the professionals. But if we do that, cost will likely be the driving factor until it becomes clear that bad research can lead to bad and expensive business decisions.

    I’d like to hear ideas on this issue, because it strikes me as somewhat intractable.

  3. On a little less lofty note: the last time you had your car in the shop, especially if it was a car dealership, there was probably a little sign by the cash register where you checked out that said – usually written in magic marker – “Be sure to fill out your customer service report form correctly!!!” Then the form shows you how.

    Should you fail to fill this form out, you’ll get telephone calls and emails from your dealer, virtually demanding that you rank everything “satisfactory” or higher – and, if anything isn’t satisfactory, that you call them immediately so they can make it satisfactory.

    J.D. Power & Associates touts itself as the “voice of the customer.” Here’s a brief clip from its current website. According to its 2012 customer service index study, “vehicle owners who visit dealer facilities for service are considerably more satisfied with their experience than with service from independent facilities.” Well, duh!

    But here’s the kicker. “Among customers of the dealer facilities, overall satisfaction with the service experience averages 38 points higher on a 1,000-point scale, compared with non-dealer facilities.” J.D. Power, of course, attributes these steady increases in customer satisfaction to higher vehicle quality, longer intervals between recommended service visits and other factors. I attribute it to that feisty little sign on the counter that says if you fill this out wrong, you’ll be punished.

    One other thing: you clearly hire J.D. Power to get a positive result. Have you ever seen them publish a negative result? “Truth” based on these push tactics is complicated and often deceptive. Next time you get on an airliner with the J.D. Power sticker welcoming you on board, you have to wonder what was done to generate the advertised results… or maybe you should try another airline?

  4. Response rates can be a problem, but there are ways – albeit expensive ways – to ensure both random sampling and valid and reliable analysis from a targeted population. The problem is that many surveys never provide the reader the information necessary to interpret the results – such as how many people actually responded to a particular (set of) question(s) – instead providing only the number of people contacted (a minimal sketch at the end of this comment makes the distinction concrete). Further, caveats as to whether the results can be generalized or are specific to a particular sample need to be explicitly stated.

    Surveys are only as good as the amount of time put into establishing the population, sampling frame, how respondents will be contacted (list error), and the correct use of psychometrically-appropriate measures. Using a 5- or 6-point contact system has provided a consistently high response rate of between 40 and 60% for me and my students. Costly, yes; you get what you pay for.

    If the survey itself is not psychometrically sound, then all the random sampling in the world won’t provide reliable and valid conclusions.
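
    To make the reporting point concrete, here is a minimal sketch with made-up tallies, showing how differently the same question reads depending on which denominator is reported.

    ```python
    # Made-up tallies for one survey, to show why the reported base matters.
    contacted   = 2_000  # people in the sampling frame who were contacted
    completed   =   900  # returned a usable questionnaire (unit response)
    answered_q7 =   700  # of those, actually answered question 7 (item response)

    print(f"unit response rate:        {completed / contacted:.0%}")    # 45%
    print(f"item response rate for Q7: {answered_q7 / completed:.0%}")  # 78%
    print(f"Q7 answers per contact:    {answered_q7 / contacted:.0%}")  # 35%
    ```

    A report that quotes only the 2,000 contacts obscures the fact that barely a third of them actually speak to question 7.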

  5. Both Forrest and Don have indicated cost as a major barrier. I wonder if we are not knee-jerking here to avoid the dire process of rethinking our role in research.

    In my view the whole opinion and market research industry needs to rethink itself from scratch, and all of its clients (including ourselves) need to advocate and support this overhaul, also by behaviors:
    i.e., never use research for the sole purpose of putting a couple of PowerPoint slides in your board presentation… if research is not geared to give you new insights that disprove (rather than confirm) your opinions, then yes, that is a waste of money…

  6. This lively discussion is greatly needed.

    I agree with what my esteemed colleagues already have contributed above, but I also understand why some have gradually drifted away from pure probability sampling in public relations, communications and other aspects of social science research. Today’s reality is that it is becoming increasingly difficult to interest people in becoming survey research subjects. Sometimes this happens because people make surveys too long and complicated. Other times it happens because of confusing and poorly written questions. We also know there are far too many surveys out there and not enough people interested in completing them. Consequently, many – including me sometimes – are guilty of drifting too far in the direction of a convenience sample.

    Forrest Anderson’s mention of the serious misuse of user-friendly survey research programs such as Zoomerang and SurveyMonkey also should be noted carefully. Too many people who know too little about research methodology are using these programs to create flawed data through the use of double-barreled questions, flawed response scales, biased questions and a host of other problems.

    In the end, of course, in many cases it all comes down to budget. I’ve often been puzzled about why we’re so quick to cut corners in survey research that we would never consider cutting in other aspects of life. I’ve known people who would never consider trying to fix their own automobiles or even do their own yard work, yet who refuse to hire someone to conduct surveys for their company because they think they can do this themselves via computer-based programs.

    All of this should echo David Michaelson’s call for standards in public relations research, measurement and evaluation. It is good to see the Institute for Public Relations beating the drum for such standards through this blog and in a number of other ways.

  7. I agree with David, Toni, and Forrest. Unfortunately, the problem has also spilled over into academe. I sit on various thesis committees with colleagues holding doctorates from top universities with advanced training in survey research methods. Often, graduate students will use convenience samples to collect data and then use inferential statistics to analyze and report the data. I always insist that this (essentially a misuse of inferential statistics) be discussed as a limitation of the study. I feel I’m over-compromising; my preference would be to treat such studies as “quantitative case studies” and not pretend to make inferences from samples to populations. One junior colleague suggested that I seem to have a “thing” about probability sampling. I guess I’m old-fashioned. I think that probability sampling is an essential prerequisite to the legitimate use of inferential statistics.
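
    A quick simulation (illustrative numbers only, not from any study) shows what is at stake: the standard 95 percent confidence interval keeps its promise under random sampling and quietly breaks it under a convenience sample.

    ```python
    import math
    import random

    random.seed(7)

    # Hypothetical population: mean attitude 3.0 on a 1-5 scale; the
    # "convenient" pool (say, students) runs 0.3 points higher on average.
    population = [random.gauss(3.0, 1.0) for _ in range(100_000)]
    convenient = [x + 0.3 for x in population[:10_000]]

    def ci_covers_truth(pool, truth=3.0, n=200):
        """Draw n cases, build a 95% CI for the mean, check coverage."""
        sample = random.sample(pool, n)
        mean = sum(sample) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
        half = 1.96 * sd / math.sqrt(n)
        return mean - half <= truth <= mean + half

    for label, pool in (("probability sample", population),
                        ("convenience sample", convenient)):
        hits = sum(ci_covers_truth(pool) for _ in range(1_000))
        print(f"{label}: 95% CI covers the truth {hits / 1_000:.0%} of the time")
    ```

    The formula happily prints a margin of error either way; only the sampling design makes it mean anything.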

  8. Great post and discussion!

    I agree both with Toni Muzi Falconi that behavior is what clients want to know about and predict, and with David Michaelson that much of the “survey research” being done is severely flawed methodologically.

    As with everything, I believe cost is a driving factor. We used to complain that most clients would not do survey research because phone, mail and door-to-door surveys were too expensive. Only the big players making big investments had the resources to use research to drive and evaluate their communications and other efforts. And generally, research professionals did this research.

    Then the Internet came along, and now anyone can do all sorts of surveys for virtually nothing if they’ve got an e-mail list to send the survey to. Now, when someone’s boss asks him or her a question that can be answered with data, he or she can run to SurveyMonkey, or something like it, do a survey and come back to the boss with a “data-supported” answer. Never mind that he or she has no training in question or questionnaire formulation, sample selection, statistics or other data analysis techniques. And most likely, the boss who gets the answer won’t question the research skills of the person who delivers it.

    I recently worked with a client that was trying to get internal surveying under control. In the course of one year they had done between 40 and 50 surveys (that we knew of) of an employee population of fewer than 4,000. Almost all of these surveys had been done with SurveyMonkey. Some appeared to have been well done, but most had serious problems, such as biased questions, unbalanced scales, etc.

    I thought this might be an isolated situation, but when I told colleagues about this project, a number said: “Oh, we have that problem, too.”

    So we now have very inexpensive ways to get the kind of data that used to be expensive to get, but I would estimate that, more often than not, that data is flawed. On top of that, very few senior managers understand research well enough to question it.

    Long-term, this could lead to more and more flawed business decisions and a consequent discrediting of using research to support them.

    Do others see this happening?

  9. I am delighted you brought up this issue. It is a very relevant one for our profession of public relators.
    I have been arguing for years now with clients, co-workers and students that they must realize the obstacles that exist today to the formation of what we once called ‘representative samples’.
    Of course consolidated interests (not only those of research companies) resist a different approach, but some are working at the cutting edge and trying to introduce correction factors that somehow account for the recent flaws (a sketch of one such correction appears at the end of this comment); other recent flaws have to do with mobile surveys, migration flows and the rapid mobility of individuals.

    But there is another point that I wish to address:
    for many reasons that we can all realize, opinions tend to be much more volatile than in the recent past, and clients are not so interested in our being able to change the opinions of stakeholder groups if those opinions are not correlated with subsequent behaviors.
    This has complex consequences for the reliability of survey results.

    The implication could be that we need to focus our ‘listening’ efforts more on observing actual behaviors than on opinions, and of course this is much more expensive if we use traditional channels of observing behaviors.
    I wonder if others feel the same way and are thinking about how to approach this other question.

    Online panels could perhaps be an answer, if panel participants are invited to supply their opinions but also to click through actual behaviors.
    But this can only be done in specific circumstances.

    What do you think, Frank?
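
    One family of correction factors I have in mind is post-stratification weighting: reweight respondents so the achieved sample matches known population margins. A minimal sketch, with made-up figures and a single weighting variable (age group):

    ```python
    # Post-stratification sketch (made-up figures): weight each respondent
    # by (population share of their group) / (sample share of their group).
    population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g., census
    sample_counts    = {"18-34": 60,   "35-54": 140,  "55+": 300}   # who responded

    n = sum(sample_counts.values())
    weights = {g: population_share[g] / (c / n) for g, c in sample_counts.items()}

    # Hypothetical support for some proposal, differing by age group:
    support = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}

    unweighted = sum(sample_counts[g] * support[g] for g in support) / n
    weighted = sum(sample_counts[g] * weights[g] * support[g] for g in support) / n
    print(f"unweighted estimate: {unweighted:.3f}")  # 0.452, skewed by who answered
    print(f"weighted estimate:   {weighted:.3f}")    # 0.495, matches population mix
    ```

    But weighting can only correct for what we can measure: if the reluctance to respond correlates with the opinion itself within every age group, no reweighting will fix it.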

  10. Response rates in probability survey research are clearly a problem. However, they are often the least of the issues facing this discipline. The limitations of survey research include the lack of standards applied in designing and executing research that is valid and reliable. Many survey instruments are poorly constructed, and the data collection methods are typically far from reliable. Compounding these challenges is the lack of understanding of how to interpret survey data and create insights.
