At an Institute for Public Relations Board meeting earlier this month, Trustee Maril MacDonald suggested that IPR might provide guidance to practitioners on how to identify bad research. That could be a mission in itself for IPR. But I decided to start by asking our Research Fellows what they would advise. Here is the wisdom that came back, just for the asking.
Don W. Stacks, Ph.D., Professor of Public Relations, School of Communication, University of Miami: “I’d suggest the following for starters:
- Watch for rounded numbers. Real results seldom come out as neatly as exactly 25 or 75 percent.
- If a sample is stated as a ratio and the actual frequencies are not given, I’d be very suspicious of it (e.g., ‘9 out of 10’ or 9:1 tells you nothing when the underlying frequencies could be 10 respondents or 10,000).
- Be wary if there is no mention of how reliable the data is. Here I’d look for anything beyond simple correlation, and I’d suggest that the practitioner at least know the names of several reliability statistics [a sketch of one follows this list].
- Don’t trust any research in which the researcher makes causal statements about the results but the methodology was anything other than experimental; only an experiment can establish cause and effect.
- If it isn’t well written, then it is probably not well thought out and should be taken with a grain of salt.”
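To make Stacks’s point about reliability statistics concrete, here is a minimal sketch in Python of one widely used measure, Cronbach’s alpha, which gauges the internal consistency of a multi-item survey scale. The respondent data below is hypothetical, invented purely for illustration; a real analysis would typically use a statistics package rather than hand-rolled code.

```python
# A minimal sketch of Cronbach's alpha, one common reliability statistic.
# The survey data below is hypothetical; values of roughly 0.7 or higher
# are conventionally read as acceptable internal consistency.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 6 respondents to a 4-item scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

A report that cites no such statistic leaves the reader no way to judge whether the scale measured anything consistently.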
David Michaelson, Ph.D., managing director, Teneo Strategy: “The most important advice I can give about spotting bad research is to assess whether the questions are self-serving and biased. This starts with the basic principle of ‘garbage in/garbage out.’ If the questions are not valid or reliable and are designed to bias results, the research is unreliable from the start. Another way to spot bad research is if the supporting documentation is not available. Can you review the questionnaire? Is the unanalyzed data available? Is the research method clearly described? This gets at the core credibility of the work through transparency. Much of this is discussed in my paper that explores nine specific best practices that will ensure quality research. It is available from the following link: http://www.prsa.org/Intelligence/PRJournal/Vol1/.”
Donald K. Wright, Ph.D., Harold Burson Professor and Chair in Public Relations, College of Communication, Boston University: “Methodological approach is a huge problem in both academic and practitioner-generated research. Unfortunately, as PR education grows, universities are hiring faculty who do not necessarily understand research methods. Lately I’ve seen research that sounds exciting until you get to the methods section and notice the author(s) surveyed their own students and/or conducted interviews with 23 people, then tried to generalize the results to a larger population such as all PR practitioners in the country. This problem is going to get worse before it gets better, because potential research subjects are being bombarded with participation requests and some researchers are struggling to find qualified subjects.”
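Wright’s worry about generalizing from 23 interviews can be put in rough numbers. The sketch below is my illustration, not his: it computes the best-case 95 percent margin of error for a proportion at several sample sizes, assuming a true simple random sample. A convenience sample of one’s own students would fare worse still.

```python
# Rough illustration: 95% margin of error for a proportion, assuming a
# simple random sample (the best case; convenience samples are worse).
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (23, 100, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n = 23 yields roughly +/- 20 points -- far too wide to support claims
# about 'all PR practitioners in the country', even before sampling bias.
```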
David M. Dozier, Ph.D., Professor and Coordinator, Public Relations Emphasis, School of Journalism & Media Studies, San Diego State University: “This is especially relevant to survey research. Sample size is important but not as important as representativeness of the sample. How were respondents selected? Often, organizations use convenience samples (sometimes called reliance on available subjects) and then mislabel such samples as ‘random.’ Probability sampling (such as random, stratified random, and systematic sampling) is required to make statistical inferences from samples to populations.”
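Dozier’s distinction is easy to demonstrate with a toy simulation. The population, reachability rates, and attribute below are entirely hypothetical, chosen only to show that a convenience sample can be badly biased even when it is large, while a probability sample of the same size tracks the true value.

```python
# Toy simulation of Dozier's point (hypothetical population and numbers):
# convenience sampling over-represents easy-to-reach respondents, while
# probability (simple random) sampling gives every member an equal chance.
import random

random.seed(42)

# Hypothetical population of 10,000; the attribute we want to estimate
# differs between easy-to-reach and hard-to-reach members.
population = []
for _ in range(10_000):
    easy_to_reach = random.random() < 0.3
    supports = random.random() < (0.8 if easy_to_reach else 0.4)
    population.append((easy_to_reach, supports))

true_rate = sum(s for _, s in population) / len(population)

# Convenience sample: only easy-to-reach members respond.
convenience = [s for easy, s in population if easy][:500]

# Probability sample: simple random sample of the whole population.
srs = [s for _, s in random.sample(population, 500)]

print(f"true rate:               {true_rate:.2f}")
print(f"convenience sample says: {sum(convenience)/len(convenience):.2f}")
print(f"random sample says:      {sum(srs)/len(srs):.2f}")
```

The convenience sample here lands far from the true rate no matter how many respondents it collects, which is why mislabeling such a sample as ‘random’ is so misleading.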
Frank Ovaitt is President and CEO of the Institute for Public Relations.