As measurement is increasingly seen as a requirement for communications professionals rather than a “nice-to-have,” the number of different methodologies and techniques to measure results has proliferated.
Now that the tipping point seems to have been reached and measurement is appearing on almost everyone’s to-do list, it’s time to move beyond “any measurement is good measurement.” CEOs, clients, and agencies shouldn’t just demand “measurement”; they should demand that measurement be accurate and its methodology transparent, and declare the days of the “black box” dead.
For far too long, communications professionals who may know a great deal about writing, editing, and pitching, but little about research, have been taking numbers and manipulating them to make themselves look good. The vast majority of “scores,” “indices,” and other ultimate measurement numbers are based, at best, on outdated research data and, at worst, on nothing more than the gut feeling of the person inventing the number.
The problem with these scores is that people increasingly base decisions on them, and that’s like trying to land an airplane with an uncalibrated altimeter and compass.
To address this issue, I am calling on the PR industry and the media that cover the industry to raise and address the issues of accuracy and transparency in media evaluation.
Here are four things that every responsible public relations and communications professional AND the organizations they belong to should be doing:
- Demand a standard set of questions that every customer should ask and on which every vendor should provide full transparency.
- Raise the issue of measurement accuracy and transparency every chance they get.
- Talk to their clients about the difference between good research and shoddy research practices.
- Reject from their membership any organization that continues to promote bad research and the “black box” mentality.
I realize I’m being radical here, but if we are serious about the notion that there is science behind the art of public relations, to quote the Institute for Public Relations, we need to demonstrate exactly what that science is.
Here are the prime areas where methodological mistakes and hocus pocus take place:
Data Collection:
- To compare apples to apples in research, you need a comparable universe. In media evaluation, this starts with the search string used to collect your media. You need to understand the methodology by which media are included and see the entire search string used (a sketch of this kind of disclosure follows this list).
- If you are using a select group of publications, the vendor should provide its sampling methodology.
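As one illustration, here is a minimal sketch, in Python, of the kind of collection disclosure a vendor could publish alongside its results. Every name and value in it is hypothetical; the point is that publishing the full search string and inclusion rules lets two analysts rebuild the same article universe.

```python
# Hypothetical disclosure of a media-collection methodology.
# All values are invented for illustration; what matters is that
# publishing them makes the article universe reproducible.

SEARCH_STRING = '("Acme Corp" OR "Acme Widgets") AND (launch OR review)'

COLLECTION_METHODOLOGY = {
    "search_string": SEARCH_STRING,
    "date_range": ("2007-01-01", "2007-06-30"),  # reporting period
    "sources": "named commercial news database, top 100 US dailies",
    "sampling": "census: every matching article, no sampling",
    "deduplication": "wire stories counted once per outlet",
}

def print_methodology(methodology: dict) -> None:
    """Print the full collection methodology so it can be audited."""
    for element, value in methodology.items():
        print(f"{element}: {value}")

print_methodology(COLLECTION_METHODOLOGY)
```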
Coding:
- “Positive” and “negative” are very subjective terms. A good vendor should clearly define what it means by tonality and how an article is rated for it.
- Visibility, prominence, focal point, impact score: whatever you call it, the degree to which an article is visible to the reader is used by most vendors as a criterion for success. How that visibility is defined is critical. Vendors should be able to provide a clear methodology for how they select “highly visible” articles.
- Coding methodologies should be transparent and appropriate to the medium, the audience, and the channel (a hypothetical rubric follows this list).
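Here, similarly, is a hypothetical coding rubric sketched in Python. The specific definitions are invented; the requirement is that tonality and visibility be defined in writing and applied the same way to every article.

```python
# Hypothetical coding rubric. The definitions are illustrative only;
# the requirement is that they be published and applied consistently.

TONALITY = {
    "positive": "leaves a reader more likely to do business with the brand",
    "neutral": "factual mention with no evaluative language either way",
    "negative": "leaves a reader less likely to do business with the brand",
}

VISIBILITY = {
    "highly_visible": "brand in headline or first paragraph, with photo or quote",
    "visible": "brand mentioned more than once in the body",
    "passing": "single mention with no context",
}

def code_article(tonality: str, visibility: str) -> dict:
    """Accept only codes that appear in the published rubric."""
    if tonality not in TONALITY or visibility not in VISIBILITY:
        raise ValueError("code not defined in the published rubric")
    return {"tonality": tonality, "visibility": visibility}

print(code_article("positive", "highly_visible"))
```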
Statistical robustness:
- Multipliers have long been used by shoddy research firms and agencies to “juice” results and make numbers bigger. Unfortunately, they have no basis in science: there is no evidence whatsoever that PR is three or five times more credible, believable, or impactful than any other form of marketing.
- Too often we’ve seen very pretty charts and graphs based on two or three articles, which moves the data from the realm of science to the realm of anecdote. Conclusions must be based on a sufficient volume of data.
- Circulation figures should come from audited, consistent sources, and no multipliers should be applied to them.
- If weighting is used, the rationale for the weights should be published (see the toy calculation after this list).
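A toy calculation, with invented numbers, shows the difference: an undisclosed multiplier simply manufactures a bigger result, while a published weighting scheme at least lets a reader argue with the assumptions.

```python
# Toy figures, invented for illustration only.

audited_circulation = 250_000  # from an audited, named source

# The "black box" move: multiply by an unexplained credibility factor.
juiced_impressions = audited_circulation * 3  # no evidence supports the 3x
print(juiced_impressions)  # 750000 -- two-thirds of this number is pure assumption

# Transparent weighting: the weights and their rationale are published,
# so a reader can challenge them. These particular weights are hypothetical.
WEIGHTS = {"headline_mention": 1.5, "body_mention": 1.0}

article_counts = {"headline_mention": 12, "body_mention": 30}
score = sum(WEIGHTS[kind] * count for kind, count in article_counts.items())
print(score)  # 48.0 -- reproducible from the published weights and counts
```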
These are just a few areas of concern. I’m sure there are others, and I invite the industry to join this discussion.
Katharine D. Paine
President & CEO, KDPaine & Partners
Member, Commission on PR Measurement & Evaluation
I’m impressed! You’ve managed the almost impossible.
Katie – I very much like your thinking in this piece, and agree with the need for transparency. We have virtually given away the formula for Share of Discussion for the very purpose of building the body of knowledge of what seems to work well in linking outputs to outcomes. I think commercial research organizations may have to hold on to some small formulaic aspects that are proprietary – just as all businesses do to be competitive – but they can, and should, be as transparent as possible.
I agree with Katie.
I think the problem of weaker vs. stronger research is multifaceted. Katie mentions old models and methods for collecting and analyzing data. Certainly these are issues.
Another, which Katie touches on, is PR practitioners wanting always to look good. That is, every PR program they ran should look like a great success.
However, one of the most important contributions evaluation makes to the practice of PR is enabling it to improve. That is, when you learn a PR tactic didn’t work as well as expected, the next questions should be: why, and what can I do to make it work better next time?
When practitioners have good management and good relationships with that management, they are free to say, “This didn’t work as well as we’d hoped. The evaluation data suggest this is why, and this is what we’re going to do to make it work the way we want it to.” And they follow up by evaluating what they do next and reporting those results.
If PR, like other business disciplines, is going to become more effective, practitioners must be able to learn from their mistakes. With poor or manipulated data and analysis, practitioners don’t even know whether what they did worked.
As a profession, we need to be willing to experiment and learn from our mistakes. The better we evaluate the success of our programs, the better we will be able to learn.