
Breaking Down the Barcelona Principles

This post is written by Ann Feeney, Information Retrieval Specialist at Cision.

Next week we will be kicking off AMEC Measurement Week in New York City, an event built around the Barcelona Principles. Officially the Barcelona Declaration of Measurement Principles, these are a set of standards for measuring the impact and effectiveness of public relations. Most major agencies and a growing number of companies have declared that they will adhere to these principles, including Southwest Airlines, General Motors, General Electric, McDonald’s, Edelman, Ketchum and Weber Shandwick. They were drafted by the Association for Measurement and Evaluation of Communication (AMEC), one of the foremost research groups in the public relations sector.

The public relations research community developed the standards in response to a growing demand for meaningful metrics, a fast-growing trend in all sectors, especially ones like human services and public relations where impact has been difficult to measure. There were—and still are—numerous flawed models for measurement, including advertising value equivalency (AVE), which estimates media value based on how much it would cost to buy an advertisement of equivalent size and circulation. That model assumes that editorial coverage is truly equivalent to advertising; even if it were, placement costs are so fluid that any model built on them is unlikely to reflect true dollar value.
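
To make the objection concrete, here is a minimal Python sketch of the AVE arithmetic; the article size, rates, and multiplier below are hypothetical illustrations, not actual rate-card figures:

    # A minimal sketch of the AVE logic described above; all figures
    # here are hypothetical illustrations.
    def advertising_value_equivalency(column_inches, rate_per_inch, multiplier=1.0):
        """'Value' = cost of buying equivalent ad space, times an
        arbitrary editorial premium (often 2x-3x in practice)."""
        return column_inches * rate_per_inch * multiplier

    article_inches = 12.0
    # The same article 'valued' against two equally plausible rate cards:
    print(advertising_value_equivalency(article_inches, rate_per_inch=85.0))   # 1020.0
    print(advertising_value_equivalency(article_inches, rate_per_inch=140.0))  # 1680.0
    # Fluid placement costs alone swing the result by roughly 65 percent,
    # before the arbitrary editorial multiplier is even applied.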

The principles do not mandate or even recommend any particular method, although they do criticize AVEs and provide some general guidance on other methods, such as surveys and statistical analysis. This allows for considerable flexibility in applying the principles, especially as there is no “one size fits all” method.

The seven principles follow, along with some context from the overall research discipline:

1) Importance of Goal Setting and Measurement

  • This is the simplest principle, declaring the value of setting goals and of measuring how effectively your projects are moving you toward them.
  • Like most fields that used to rely on gut instinct and experience, PR is becoming more disciplined and scientific. However, measurement by itself isn’t enough for success, and measuring the wrong thing or the wrong way is a recipe for disaster.

2) Measuring the Effect on Outcomes is Preferred to Measuring Outputs

  • It’s tempting to measure outputs, such as number of press releases distributed, number of contacts made, or even number of stories. It’s easy and quantitative, so it lends itself to charts and presentations. However, outputs only measure what you did, not whether it was worth doing.
  • Outputs are great when there’s a direct and defined link between the output and the outcome. For example, since we know that polio vaccinations prevent polio, the number of vaccinations measures how many people won’t get polio. But public relations doesn’t have the same direct link. We can’t yet get into the public’s collective head and see what happens between the moment somebody is exposed to a message and the moment an opinion is shaped, reinforced, revised, or discarded. That is why we need to measure outcomes instead.
  • Outcomes can be either beliefs, such as a positive perception of a person or organization, or actions, such as purchasing, advocating, contributing, or voting. However, the closer a belief is to an action, the better.

3) The Effect on Business Results Can and Should Be Measured Where Possible

  • Tying outcomes to business results tells you what to do more of, what to do less of, or what to stop doing. It also defines the value of public relations efforts.

4) Media Measurement Requires Quantity and Quality

  • Quality refers to the completeness of the data as well as its accuracy. Does it include metadata about the source, such as its audience and its credibility with readers? Does it provide metadata about the article or posting, such as tone and the prominence of the topic (a passing mention versus substantial coverage), and, where possible, semantic analysis?
  • Quantity has to be sufficient to support the decisions that you’re going to make. For example, do you have enough data to generalize about a specific group, such as job-seeking millennials, wealthy single parents, or chief technology officers? The sketch after this list shows one quick way to sanity-check that.
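
As a concrete illustration, the standard survey-sampling formula for estimating a proportion gives a quick sense of "enough"; this is general statistics, not a method the principles themselves prescribe:

    import math

    def required_sample_size(margin_of_error, z=1.96, p=0.5):
        """Minimum n to estimate a proportion at 95% confidence
        (z = 1.96), assuming the worst case p = 0.5."""
        return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(required_sample_size(0.05))  # 385: +/-5 points needs ~385 observations
    print(required_sample_size(0.10))  # 97: a looser +/-10 points needs far fewer

If you have only a few dozen articles or responses about, say, job-seeking millennials, you likely lack the quantity to generalize about that group.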

5) AVEs Are Not the Value of Public Relations

  • Aside from the reasons discussed above, if you have enough data to measure the tone of an article or posting accurately enough to estimate its value, you have enough data to use more meaningful metrics instead.
  • Placement time and location are difficult to capture for social media, so much so that you’re better off spending the effort on metrics that are tied to business value.

6) Social Media Can and Should be Measured

  • We know a tremendous amount about how people consume social media; in many ways, thanks to automated data gathering, we know more about this than about traditional media. We also know that social media can be enormously influential.
  • That said, we’re still in the beginning stages of understanding how social media influences beliefs and behaviors. For example, look at all of the studies about the impact of social media on the Arab Spring. Virtually every study draws a different conclusion from the same data.

7) Transparency and Replicability are Paramount to Sound Measurement

  • This is fundamental to any kind of research, whether particle physics, animal behavior, or optimizing packaging speed.
  • When analyzing a collection of articles or postings, the analyst should make the dataset available to the client.
  • While some measurement tools are proprietary, and so not transparent by definition, they should be replicable. That is, running the same data through them should produce very nearly the same result each time. Some results are more replicable than others, in the hard sciences as well as in the social sciences, but discrepancies should have an explanation. For example, humans bring a huge set of assumptions and perceptions to tasks such as assigning tone, whether they do it manually or through a programmed algorithm. Two equally competent humans may assign different tones to the same material. However, automated results should be consistent, and the humans should be able to explain why they assigned a particular tone, as the sketch after this list illustrates.
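
Here is a minimal Python sketch of what replicability means for an automated tool; the word lists and scoring rule are hypothetical stand-ins for a real tone model, and the point is the property, not the method:

    # Hypothetical, deliberately simple tone scorer: +1 per positive
    # term, -1 per negative term. Deterministic by construction.
    POSITIVE = {"award", "growth", "praised", "innovative"}
    NEGATIVE = {"recall", "lawsuit", "criticized", "decline"}

    def tone_score(text):
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    article = "Analysts praised the innovative launch despite an earlier recall"

    # Replicability: the same input yields the same score on every run,
    # and the documented rule explains *why* the score is what it is.
    assert tone_score(article) == tone_score(article) == 1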

Photo credit: @geishaboy500 via flickr
