In a previous blog post I wrote about the lack of good, basic secondary research being conducted and made available. This is often due to the proprietary nature of business: why let a possible competitor know how you created a measure, strategy, or campaign? Why do the competitor's work for it? Secondary research informs the practice and helps the profession establish standards against which to compare outcomes. Further, providing the basic reports from which a measure or strategy has been developed adds to our body of knowledge. Finally, research ethics call for the disclosure of data and analyses so that others may better understand how they were evaluated.
Every now and then an item comes across my computer that piques my interest because its approach is grounded in an understanding of theory, research, and practice. These items are typically "white papers" or "research papers" that in many ways serve as teasers for a product, or provide a competitive advantage for evaluating an outcome or setting baselines against which success is measured.
In this regard, I looked at what FleishmanHillard calls the "Authenticity Gap" and how it can be measured, analyzed, and evaluated. The underlying theory is that authentic engagement can be measured by comparing an organization's branding against its stakeholders' perceptions of organizational reputation. The gap between the two is what is measured, as influenced by nine variables separated into three sets labeled "strands": management behaviors (credible communications, consistent performance, and commitment to doing the right thing), customer benefits (innovative products and services, customer care, and providing better value), and society outcomes (community impact, employee care, and environmental care).
The idea of an authenticity gap is intriguing in and of itself, especially in light of the Arthur W. Page Society's vision of corporate communications as the "Authentic Enterprise." But what impressed me most was that the measure was created through a large-sample set of research studies in three different countries. Additionally, the report offered access to the methodology in the form of full reports. This kind of openness can lead to a better understanding of methods, statistics, findings, evaluation, and claims. As you might expect, I wanted to know about the measure's psychometric properties, how the data were gathered, how they were analyzed, and how the report evaluated the Authenticity Gap across the three countries. The availability of these reports provided the basic theory, method, and data employed to create the measure, and showed how it might be used in your own projects as a baseline or benchmark.
As I noted earlier, one of the problems with public relations research is the proprietary nature of the methods, measurement instruments, data, and evaluation. This is part of the "competitive advantage" a company or agency might hold in the business market, but it does not help advance the profession's body of knowledge. From an academic's perspective, such transparency and disclosure add to our understanding of public relations. In this case, they allow the researcher or client to truly evaluate the concept being measured and how valid and reliable the measures might be. I applaud FleishmanHillard for being open about the creation and validation of a concept that, based on the background reports, is both a valid and reliable measure of the gap between brand experience and expectation.