The last ten years have produced a large body of research on communication department performance measurement. The European Communication Monitor (ECM) studies, together with the Generally Accepted Practices (GAP) studies from USC Annenberg, demonstrate increasing interest in, and attempts at, measurement by CCOs.

For a PR/C department, performance measurement can occur in five categories: 

  • a.    Communication products, communication messages and communication channels/media (communication inputs; outputs; outtakes);
  • b.    Communication programs/campaigns (audience outcomes);
  • c.    Communication department (process quality; practice effectiveness; efficiency/productivity; internal client satisfaction; value);
  • d.    The organization (business & financial impact; tangibles); and
  • e.    Society (relationships; reputation; brand; social responsibility; intangibles). 

The highly respected and much-cited scholar Jim Grunig, Professor Emeritus at the University of Maryland, argued that the communication department, if it is to be truly a strategic management function within its organization, should conduct performance monitoring and measurement in all of these categories, with emphasis on the last three. 

In reality, most of the monitoring and measurement that does occur is conducted for (a): communication products, communication messages and communication channels/media (communication inputs; outputs; outtakes). This is the measurement of paid, earned and owned traditional, digital and social media, in the form of clips, counts, impressions, OTS, reach, tone, hits, visitors, views, downloads, followers, likes, re-tweets, message receipt and attention, message engagement, share of discussion, conversation tracking and so on. It measures the effectiveness of ‘getting the message out’ and getting it circulated and talked about, as well as the accuracy and quality of the message once in circulation. A related measure, though one rarely applied, is the cost-effectiveness of getting the message out to a probable receiver via competing channels. 

Numerous measurement experts have noted that this reliance on measuring communication outputs, their distribution through direct and mediated channels/media and their outtake effectiveness, is simply not sufficient. What CEO would consider a report on media monitoring, website usage or social media involvement a proxy for the value the communication department brings to the organization? Not one! Anecdotal evidence suggests that most CEOs don’t see this information anyway. Moreover, they don’t want to, given the low-level, operational and tactical nature of such simple tracking and monitoring reports. 

In response, leading industry measurement experts, representing various associations and industry bodies (AMEC; IPR MC; ICCO; PRSA; GA; etc.), developed the Barcelona Principles. Although a number of the principles addressed (a) communication products, messages and channels/media (traditional media; social media; AVEs), three of the seven focused on the measurement of (b) communication programs/campaigns (audience outcomes) and (d) the organization (business & financial impact; tangibles). These three principles were: Importance of Goal Setting and Measurement; Measuring the Effect on Outcomes is Preferred to Measuring Outputs; and The Effect on Business Results Can and Should Be Measured Where Possible. 

While (a), (b) and (d) are covered in these principles, there is no mention of (c) the communication department (process quality; practice effectiveness; efficiency/productivity; internal client satisfaction; value) or (e) society (relationships; reputation; brand; social responsibility; intangibles). The Barcelona Principles were an interesting step forward for the PR/C profession but, in themselves, they did not address the complete performance measurement needs of a modern communication department.

Subsequent to the development of the Barcelona Principles, the same industry bodies and associations created the Coalition for Public Relations Research Standards. The Coalition’s most important work to date has concentrated on standards for measuring the effectiveness of (a) communication products, messages and channels/media, with an emphasis on measurement terminologies and methodologies. (Note that cost-effectiveness measurement has not been part of this standardization process.) 

Interestingly enough, the measurement of products, messages and channels/media appears to be most popular in communication departments as part of an environmental scanning or ‘listening’ program. That is, this type of measurement is more an aspect of formative research – to bring forward information, then intelligence and then insight – than of evaluative research. Traditional, digital and social media measurement is being used mostly as input into general organizational, business and communication conceptualization, planning and decision-making rather than for the evaluation of a completed communication program. This ‘listening’ research is not yet institutionalized as a formal program to the extent that communication planning is (ECM, 2015). Macnamara (2014) points out the need for an “architecture” of organizational listening, arguing that a comprehensive, formal program is better than the regular traditional, digital and social media monitoring reports that communication departments already receive. This call echoes the determination in the early 1980s that, in order to have an effective issues management system, the communication department also requires an extensive environmental scanning program. 

Even with these broad-based efforts, anecdotal if not scholarly evidence suggests that PR/C professionals have difficulty applying and coordinating these five categories of PR/C performance measurement. Hierarchical models of performance measurement exist that differentiate between and among measurement activities, and scorecards or dashboards dedicated to public relations efforts that attempt to integrate various communication measurement categories have been developed (for example: Wehrmann, Pagen & van der Sanden, 2012; Zerfass, 2005; Vos & Schoemaker, 2004; Fleisher & Mahaffy, 1997). Yet communication departments still struggle to find a workable and comprehensive performance measurement system that includes all five categories of communication performance measurement. 

A number of associations, practitioners/agencies and academics (for example: Grunig and Hunt, 1984; Cutlip, Center, and Broom, 1994; Lindenmann, 1993; Macnamara, 2005; Swedish Public Relations Association, 2005; German Public Relations Association, 2009) have constructed hierarchical communication performance measurement models. These models describe stages or levels, but both the stages suggested and the terminology used differ from model to model. More importantly, not one of the existing models has been independently or externally validated. There is no consensus. In fact, there’s confusion. 

This confusion is manifest in submissions to award programs, where neither planning/objective-setting nor evaluation/measurement models seem to have influenced the conduct of the campaigns. Recent research by Maureen Schriner (University of Wisconsin-Eau Claire), Rebecca Swenson (University of Minnesota) and Nathan Wilkerson (Marquette University) highlighted the need for a standard model with standard terminology. Their paper, “Outputs or Outcomes? An Assessment of Evaluation Measurement from Award Winning Public Relations Campaigns,” presented at the 2015 Miami IPRRC, analyzed PRSA Silver Anvil winners from 2010 to 2014.

An international Task Force composed of academics and consultants with expertise in measurement, managed by Fraser Likely, was created in the late spring of 2015 under the auspices of the Coalition for Public Relations Research Standards and the leadership and sponsorship of the Institute for Public Relations (the IPR Measurement Commission). The Task Force will explore the multitude of multi-stage performance planning and measurement models available to the profession. In particular, the project will examine the differences in the use of terminology (such as outputs; outtakes; outcomes; outflows; outgrowths; impacts; etc.) and the differences and overlaps among the stages or levels in these models. 

To date, the work of the German Public Relations Society, in tandem with the International Controllers Association, has been the most promising. They have adopted a broad approach that considers most or all of the five categories first proposed by Professor Jim Grunig. Their approach, called “communication controlling”, allows CCOs to demonstrate the overall value of the communication department. “Effective communication controlling allows communication professionals to align their activities with the success of the organization and to work more efficiently within their own area of responsibility” (Position Paper on Communication Controlling, 2011). 

This movement to develop a comprehensive performance measurement framework – one built on all five categories – is recognized in the findings of the 2015 European Communication Monitor. Respondents to the survey stated that the measurement methods whose use had increased the most in their communication departments between 2010 and 2015 were:

  1. Financial costs for projects at 21.1% (c: the communication department); 
  2. Personnel costs for projects at 19.2% (c: the communication department); 
  3. Impact on financial/strategic targets at 13.2% (d: the organization);
  4. Process quality (internal workflow) at 12.6% (c: the communication department); 
  5. Impact on intangible/tangible resources (e.g. economic brand value) at 10.7% (d: the organization and e: society);
  6. Stakeholder attitudes and behaviour change at 4.9% (b: communication programs/campaigns);
  7. Satisfaction of internal clients at 2.0% (c: the communication department);
  8. Understanding of key messages at 1.2% (a: communication messages);
  9. Clippings and media response at 0.4% (a: communication products and channels/media); and
  10. Internet/intranet usage at minus 3.3% (a: communication products, messages and channels/media).

Interestingly, CCOs increased their measurement of (b) communication programs/campaigns (audience outcomes); (c) communication department (process quality; practice effectiveness; efficiency/productivity; internal client satisfaction; value); (d) the organization (business & financial impact; tangibles); and (e) society (relationships; reputation; brand; social responsibility; intangibles) far more than they did for (a) communication products, communication messages and communication channels/media (communication inputs; outputs; outtakes).  

CCOs have determined that a balanced performance measurement framework, one that includes all five categories of measurement, is better than output measurement alone for demonstrating the value of the communication department and its contribution to organizational goals.