Latest Evaluation Models – UK Government Evaluation Cycle, EU Guidelines, and More
Advancements continue to be made in measurement and evaluation, a longstanding hot topic and an identified area for improvement in public relations and related practices. While having multiple models and frameworks can lead to confusion over which is best, three new tools offer both theoretical and practical contributions to advance the field.

In August, the UK Government Communication Service (GCS) introduced its new GCS Evaluation Cycle [1]. Having relied on a traditional five-stage, linear program logic model since 2016, the GCS has now adopted a six-stage cycle. The updated model retains the essential stages of inputs, outputs, outtakes, outcomes, and impact, adds "learning and innovation", and presents the stages as a continuous cycle. This echoes the measurement, evaluation, and learning (MEL) model adopted by the World Health Organization (WHO) in 2022 for tracking its public health communication during the COVID-19 pandemic and for World Health Days [2]. The addition of learning as an explicit stage in the evaluation process shifts the focus from a 'rear view mirror' reporting approach to generating insights that inform future strategic planning and enable continuous improvement and innovation.

Figure 1. UK Government Communication Service Evaluation Cycle.

The second useful recent addition to the measurement, evaluation, and learning armoury for communicators is the latest version of the European Commission's indicators guide. It retains a traditional five-stage program logic model (albeit with outcomes referred to as "results") and adds brief descriptions of what each stage potentially involves. Most usefully, the EC Indicators Guide provides a table beneath the logic model listing typical quantitative and qualitative indicators that can demonstrate effectiveness at each stage. The Indicators Guide is openly available online.

The EC Indicators Guide is similar to the "taxonomy" of metrics and indicators published by the International Association for Measurement and Evaluation of Communication (AMEC) [3]. Stay tuned: the AMEC taxonomy (a categorized list) of metrics and indicators for various types of communication, ranging from media publicity and websites to multi-media campaigns, has been substantially updated and expanded, and the new version will be published online soon.

Figure 2. EU Measurement and Evaluation Indicators Guide [4].

Another practical model addresses one of the most persistent causes of invalid measurement: what PR academics refer to as "substitution error". In the words of eminent US PR scholar Jim Grunig, this involves "a metric gathered at one level of analysis to [allegedly] show an outcome at a higher level of analysis". An example is claiming media reach or impressions as an outcome. This is invalid because reach and impressions are estimates of the potential audience based on media circulation or audience data; they provide no evidence that people actually saw the content or, even if they did, whether it had any effect.

The 'dissected' program logic model for public communication published in Public Relations Review in 2023 (see Figure 3) is based on two questions or tests: (1) who is doing the thing reported (the Doer Test)? and (2) where is the reported metric occurring (the Site Test)? The model shows typical communication inputs and activities, as well as outputs, that are planned, produced, and distributed by organizations. These are things practitioners do, and they appear in media of some kind (e.g., press, websites, social media). Outtakes and outcomes, by contrast, are measures of what audiences do in terms of reception, reaction, and response, which are separate and distinct stages, while impacts are a further stage of flow-on effects in industry, policy, or society. The 'dissected' program logic model provides a simple way to check which metrics and indicators are relevant at each stage.

Figure 3. The 'dissected' program logic model of typical public communication activities [5].
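To show how the Doer Test and Site Test might be applied in routine reporting, the sketch below encodes the idea in Python. It is a minimal illustration only, assuming the stage labels used in this article; the metric names, the stage ordering, and the check_claim helper are hypothetical and are not part of the published model.

```python
# Illustrative sketch: a simplified encoding of the Doer Test (who does the thing?)
# and the Site Test (where does the metric occur?) for each stage of the logic model.
STAGES = {
    "inputs":   {"doer": "organization", "site": "organization"},
    "outputs":  {"doer": "organization", "site": "media"},
    "outtakes": {"doer": "audience",     "site": "audience"},
    "outcomes": {"doer": "audience",     "site": "audience"},
    "impact":   {"doer": "society",      "site": "industry/policy/society"},
}

# Hypothetical examples of where common metrics actually sit.
METRIC_STAGE = {
    "press releases distributed": "outputs",
    "impressions / potential reach": "outputs",   # audience is only potential, not evidenced
    "recall of key messages": "outtakes",
    "change in attitude or behaviour": "outcomes",
    "policy change": "impact",
}

def check_claim(metric: str, claimed_stage: str) -> str:
    """Flag a substitution error: a metric claimed at a higher stage than it evidences."""
    order = list(STAGES)
    if claimed_stage not in order:
        return f"Unknown stage: {claimed_stage!r}"
    actual = METRIC_STAGE.get(metric)
    if actual is None:
        return f"Unknown metric: {metric!r}"
    if order.index(claimed_stage) > order.index(actual):
        return (f"Substitution error: {metric!r} is an {actual}-stage metric "
                f"(doer: {STAGES[actual]['doer']}, site: {STAGES[actual]['site']}); "
                f"it cannot evidence {claimed_stage}.")
    return f"OK: {metric!r} can reasonably be reported at the {actual} stage."

# Example: claiming impressions as an outcome fails both the Doer and Site Tests.
print(check_claim("impressions / potential reach", "outcomes"))
```

In this toy encoding, a claim is flagged whenever the stage being claimed sits further along the model than the stage at which the metric's doer and site actually occur, which is the pattern of substitution error described above.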
While researchers need to be careful not to flood the field with ever-changing models, these latest developments have a strong orientation to guiding practice and serving as practical tools for measurement, evaluation, and learning.

Jim Macnamara is Distinguished Professor of Public Communication in the School of Communication at the University of Technology Sydney (UTS). He is a widely published author on evaluation and organizational listening, including the books Evaluating Public Communication: Exploring New Models, Standards, and Best Practice (Routledge, 2018) and Organizational Listening II: Expanding the Concept, Theory, and Practice (Peter Lang, New York, 2024).

References:
[1] Government Communication Service. (2024). GCS Evaluation Cycle. https://gcs.civilservice.gov.uk/publications/gcs-evaluation-cycle
[2] Macnamara, J. (2023). Learnings from three years leading evaluation of WHO communication during COVID-19. Keynote presentation to the 2023 AMEC Global Summit, Miami, Florida.
[3] Macnamara, J. (2016). A taxonomy of evaluation: Towards standards. Association for Measurement and Evaluation of Communication. https://amecorg.com/amecframework/home/supporting-material/taxonomy
[4] European Commission. (2024). Communication, Monitoring, Indicators – Supporting Guide. https://commission.europa.eu/system/files/2019-10/communication_network_indicators_supporting_guide.pdf
[5] Macnamara, J. (2023). A call for reconfiguring evaluation models, pedagogy, and practice: Beyond reporting media-centric outputs and fake impact scores. Public Relations Review, 49(2). https://doi.org/10.1016/j.pubrev.2023.102311