Many PR people take a pass on the entire concept of measurement for a simple reason: blind, rank fear. They don't know how, or they fear their research won't hold up to leadership scrutiny.

We hear "measurement" and are immediately assailed by unhappy memories – math homework, the tyranny of columns of numbers, complicated algebra. Heavens, it's why a fair number of us wound up in PR – the warm embrace of words, the freedom of expression unbound by arithmetic rules, the lack of a single "correct" answer.

But what if measurement could be notional? What if we used research as a quick check of our assumptions rather than as a platform for prediction?

Good communications people spend a lot of time asking questions and listening carefully. Oftentimes, we are among the few people in an organization with a good understanding of the big picture. We call it "seeing the story."

For example, it shouldn't take a double-blind experiment to learn whether employees know how their contributions affect the bottom line. We certainly could do a formal study, but to what end? A simple poll question yields a pretty good snapshot of where employees stand on that question of line of sight.

We then can use that information to guide our editorial strategy on that topic.

Will a formal, scientific, quantitative analysis quiet the critics in our organizations who claim we're not proving our worth? Probably not, unless and until we discover the magic bullet of measurement that accounts for communication's hard-to-quantify impact on business results.

Perhaps it is enough to use less formal methods. Pick 100 names at random from your organizational phone book – a task a short script can handle, as sketched below – and send them a five-question mini-survey. Convene a group of 10 people and discuss their perceptions to help uncover areas for potential improvement.
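For the random draw, a few lines of code suffice. Here is a minimal sketch in Python, assuming the phone book can be exported as a CSV with a name column (the file and field names are hypothetical):

```python
import csv
import random

# Load the organizational phone book (hypothetical export: one row per employee).
with open("phone_book.csv", newline="") as f:
    employees = [row["name"] for row in csv.DictReader(f)]

# Draw 100 names at random, without replacement, for the mini-survey.
recipients = random.sample(employees, k=min(100, len(employees)))

for name in recipients:
    print(name)  # or feed this list to your survey tool's mailing step
```

Sampling without replacement keeps the draw simple and ensures no one receives the survey twice.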

At Goodyear, we ran a series of "card sorting" exercises, in which 10 to 12 people gather around a table covered with 3-by-5 cards bearing the intranet's link titles. Through discussion, they reorganize the content elements into groups, then name each group.
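For anyone who wants to compare groupings across sessions, a common way to tally card-sort results is a co-occurrence count: how often two cards end up in the same group. A minimal sketch, with hypothetical sample data standing in for real session records:

```python
from collections import Counter
from itertools import combinations

# Each session maps a group name to the cards placed in it (hypothetical data).
sessions = [
    {"News": ["Headlines", "Press releases"], "Tools": ["Payroll", "IT help"]},
    {"Company": ["Headlines", "Press releases", "Payroll"], "Support": ["IT help"]},
]

# Count how often each pair of cards lands in the same group.
together = Counter()
for session in sessions:
    for cards in session.values():
        for pair in combinations(sorted(cards), 2):
            together[pair] += 1

# Pairs sorted most-to-least frequently grouped together.
for pair, count in together.most_common():
    print(pair, count)
```

Pairs that co-occur across most sessions suggest content that belongs together in the navigation; pairs that never co-occur suggest natural dividing lines.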

We attempted to choose a variety of ages, positions, departments, etc., but many participants were people we knew of or who were recommended to us. Was the sample large enough to be statistically significant? Not likely – we did 12 groups in Akron, one in the U.K. and one in Germany.

But I submit it really wasn't necessary to go broader and deeper with that exercise, especially with budgets where they are these days. At least in terms of Web behavior, it's unlikely we would have heard opinions significantly different from those we did.

The point is, though, we did get opinions that differed significantly from those we brought to the exercise. Our intranet is better for the experience.

We do a daily poll question on the intranet, and it’s one of the most popular features on the site. About 800 people vote on the poll each day (they are prevented from voting more than once), and some 450 per day just view the results without voting.
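A vote-once rule like that needs nothing elaborate behind it. Here is a minimal sketch of the idea in Python, with hypothetical employee IDs and options; it is not Goodyear's actual implementation:

```python
# Tally a daily poll while rejecting repeat votes from the same employee ID.
votes: dict[str, int] = {}   # option -> count
voted: set[str] = set()      # employee IDs that have already voted today

def cast_vote(employee_id: str, option: str) -> bool:
    """Record a vote; return False if this person has already voted."""
    if employee_id in voted:
        return False
    voted.add(employee_id)
    votes[option] = votes.get(option, 0) + 1
    return True

cast_vote("e1001", "Agree")
cast_vote("e1002", "Disagree")
cast_vote("e1001", "Agree")   # rejected: second attempt by e1001
print(votes)                  # {'Agree': 1, 'Disagree': 1}
```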

We ask some tough questions regarding motivation, perspective on strategy and execution, etc. The results aren't scientific, and though we repeat questions a few times per year, they can't be considered strictly comparable over time. That doesn't stop them from being a topic of discussion.

If the goal of internal communication is to foster dialogue and discussion about the business, the poll does not fail that test. Nor does its lack of quantitative methodology make it less valuable to the organization.

Of course, we'd all prefer to have the resources to do scientific research, but if we do not – whether for lack of skill or money – why not get a notion?

Sean Williams
Manager, Editorial Services, Goodyear Tire & Rubber
Member, Commission on PR Measurement & Evaluation

4 thoughts on "Sean Williams: Does Internal Communication Measurement Have to be Quantitative?"

  1. Interesting, Sean. You're showing that meaningful evaluation can be done quickly. While the methodology won't get you into a research journal, does it matter? The information you gather each day enables you to improve the results of your programs, and the process itself creates a dialog with employees, a dialog not many Fortune 500 companies bother with.

  2. Thanks Angie and Tom—the goal here is to generate some discussion around this topic, and we’ve got a little start here. Appreciate your kind thoughts.

  3. Sean: Good ideas, well presented. The issue with evaluation is to "just do it." The data you uncovered can form time-on-time comparisons and helps (a) improve current activities and (b) make the case for research funding later on. It's the same with media evaluation. There are simple models that everyone can use to survey what's being written, where it's appearing, who's saying it and what the tone is. It may not have research purity, but it is quick, cheap and gives insight. TOM

  4. Sean – this is an excellent article.  You’ve outlined some great ideas for testing the waters using techniques anyone could implement.  I always encourage people to do informal focus groups, or do a little online survey, not because the results are projectable, but because the information helps refine our thinking.  Sometimes it leads to further scientific research, and sometimes the results are enough for our immediate purposes.  Well done!  Angie Jeffrey
