6 Common Media Measurement Metrics
January 01, 1970 by Leela Bozonelis
Implementing a media measurement program can be an intimidating task for many PR professionals. After all, many people in our industry chose PR as a career out of a love of storytelling or a knack for managing crises, not because they were eager to examine data sets or learn about an array of seemingly esoteric metrics.
However, we've found that if PR folks are able to overcome that intimidation factor, they'll see that the most common media measurement metrics actually are easy to understand, even for the math-averse. The biggest lesson we want to impart is that each individual type of media measurement metric represents a systematic way to answer a question about an organization's media coverage. And, once you understand the underlying question being answered by the metric, you're more than halfway to understanding the metric itself.
With that in mind, here is a brief primer on six widely used media measurement metrics and the key question each is trying to answer:
1. Volume of attention
This metric answers the basic question of how many stories appeared discussing whatever it is you're tracking, be it a company or an organization; a brand; a product or a service; or a campaign or initiative. Now, we're using the term "stories" to mean the unit of content for the media type in question. So, for Twitter, it'd be the number of tweets, while in print media, it'd be the number of articles.
Monitoring and analytics tools like LexisNexis Newsdesk® or LexisNexis® Social Analytics are best suited to track this metric. In certain circumstances, though, such as when the coverage carries a degree of nuance that software cannot grasp, a level of human intervention is needed on top of a monitoring platform to determine the overall volume. As machine learning is integrated into monitoring platforms, however, human intervention may increasingly become a step at the beginning of a project rather than an ongoing aspect of it.
2. Audience reach
Audience reach answers the question of how many people had the opportunity to consume (i.e., read, view, and/or hear) coverage discussing whatever it is you're tracking. This metric is based on the known circulation, viewership, audience size, and followers of the media outlets or social media users publishing the content in question. For each story/hit, the audience size of the publication/social media user providing the content is identified, and those figures are summed across all outlets publishing content to give the total audience reach. For example, if Newspaper X, with a print circulation of 20,000, published a story on you and Tweeter Y, with 10,000 followers, tweeted about you, your audience reach would be 30,000.
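The arithmetic behind audience reach is simple addition, as the Newspaper X / Tweeter Y example shows. Here is a minimal sketch of that calculation; the outlet names and audience figures are the illustrative ones from the example, not data from any real monitoring tool:

```python
# Hypothetical illustration of the audience-reach calculation:
# each hit contributes the audience size of the outlet or account
# that published it, and those figures are simply summed.

def audience_reach(hits):
    """Sum the audience size of every story/hit in the coverage set."""
    return sum(audience for _outlet, audience in hits)

coverage = [
    ("Newspaper X (print circulation)", 20_000),
    ("Tweeter Y (followers)", 10_000),
]

print(audience_reach(coverage))  # 30000
```

Note that this naive sum counts a person twice if they follow two outlets that both covered you, which is why reach is best read as "opportunities to see" rather than unique people.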
There are a couple of things to keep in mind, however, with audience reach. The first is regarding viewership figures for traditional online news content. Some in the measurement industry use a website's Unique Visitors per Month for audience reach calculations, while others use a website's daily visitor figures. There's no definitive measurement industry consensus on which approach is the correct one, but we at LexisNexis believe it is most sound to use a daily viewership figure since that's more likely to represent the size of the audience that realistically had an opportunity to consume the content.
The second thing to keep in mind is that some PR or media measurement firms may use multipliers when calculating audience reach to account for when print editions of newspapers, magazines, or journals are shared or viewed communally, such as when a magazine is on display at a doctor's office and dozens of people read that one copy. Given the slow death print media is experiencing, the matter of multipliers is becoming less and less relevant. But, nonetheless, it's important to know if they're being used when determining your audience reach.
3. Leading topics
Leading topics answer the question of what was discussed most often in the coverage. While overall volume of attention determines how many stories appeared on the macro level, that alone doesn't tell you what was discussed on a granular level across all that coverage. Tracking topics is the method for uncovering this information. A topic could be a big-picture matter, such as financial performance or corporate strategy; a specific, narrow item, such as an individual product or event; or something in between. Topics can be tracked in an automated fashion through monitoring tools, such as with keyword searches. Or, if you plan to work with a measurement company that offers human-based analysis or plan to analyze coverage yourself, you can track topics manually.
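To make the automated, keyword-search approach concrete, here is a toy sketch of topic tagging. The topic names and keyword lists are invented for illustration and are not drawn from any particular monitoring platform:

```python
# A minimal sketch of automated topic tracking via keyword matching.
# TOPIC_KEYWORDS is a hypothetical, hand-built mapping; real monitoring
# tools let you define searches per topic in much the same spirit.

TOPIC_KEYWORDS = {
    "financial performance": ["profit", "revenue", "earnings", "bankruptcy"],
    "corporate strategy": ["acquisition", "merger", "restructuring"],
}

def tag_topics(story_text):
    """Return the set of topics whose keywords appear in the story."""
    text = story_text.lower()
    return {
        topic
        for topic, words in TOPIC_KEYWORDS.items()
        if any(word in text for word in words)
    }

print(tag_topics("The company reported record profits after the merger."))
```

The sketch also illustrates the limitation discussed below: a story that conveys financial trouble without using any of the listed keywords would simply be missed, which is where human coders retain an edge.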
Both the automated and manual approaches to topic tracking have benefits and drawbacks. With automated approaches, the main benefits are always going to be speed and scalability (which generally means lower costs). For manual topic tracking, the main benefit is that humans understand the content and can categorize it appropriately regardless of the wording (i.e., you don't have to tell a human being that a story about a company filing for bankruptcy is a discussion of its financial performance, even if the words "financial performance" never appear in the story). As such, manual coding typically is more accurate for topic tracking, particularly with more complicated or nuanced subject matter. However, if you're looking for a certain product and its name is distinct, automated approaches are going to be more accurate than manual tracking.
Furthermore, word clouds sometimes are used as a proxy for tracking topics. While word clouds are effective at measuring which specific words or phrases appeared most often within coverage, they don't reach the level of precision we like in identifying what actually was discussed, because word clouds don't bucket together themes or related phrases that express the same idea. Nonetheless, word clouds are useful because they do not require you to identify ahead of time the topics you want to track. They surface the most common words and phrases without any upfront effort on your part, and from there you can deduce or, by investigating the results, discover which topics were discussed most often.
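Under the hood, a word cloud is just a frequency table of words with common "stopwords" filtered out. A rough sketch, with an illustrative (and deliberately tiny) stopword list:

```python
# A rough sketch of the frequency counting behind a word cloud: tally
# how often each word appears across coverage, ignoring stopwords.
# The STOPWORDS set here is a small illustrative sample; real tools
# use much longer lists.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "is"}

def word_frequencies(stories):
    """Count non-stopword word occurrences across a list of story texts."""
    counts = Counter()
    for story in stories:
        words = re.findall(r"[a-z']+", story.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

stories = [
    "Profits rose sharply in the third quarter.",
    "Analysts praised the quarter's strong profits.",
]
print(word_frequencies(stories).most_common(3))
```

Notice that "quarter" and "quarter's" are counted separately here, which is exactly the kind of failure to bucket related phrases together that limits word clouds as a topic-tracking proxy.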
One last piece of advice with topic tracking is that we at LexisNexis have found it most effective to examine the sentiment of coverage for each leading topic. It's vital to know not just what was discussed most often, but also how positive or negative coverage was on these leading topics.
4. Leading messages
Leading message tracking answers the question of what were the most common ways the entity in question was praised or criticized across the compiled media coverage. As with topics, messages can be construed broadly to capture a major theme or narrowly to capture a specific thought about an individual product or service.
Much of – if not the majority of – messaging delivered in traditional media appears implicitly, and implicit messaging is still quite common in social media as well. By appearing implicitly, I mean that the context of the discussion in the story conveys the message, rather than the message being stated outright. For example, think of a wire service story on a company announcing record profits. The story itself likely will not contain a sentence saying the company "is performing well financially", but the details about its profitability, even if they're delivered in the classic, facts-only fashion for which a wire service like the Associated Press is famous, clearly convey that the company is performing well financially. Or, think of a social media post in which a person complains about customer service from her cable/internet company. She might write about how she hates waiting on hold or how the technician is two hours late for the appointment. These types of complaints convey negative messaging about the company's customer service, often without ever mentioning the words "customer service".
Consequently, I believe automated analytics software is not the right fit for message tracking. Instead, this is a task best suited for human analysis whereby a trained analyst examines individual stories to determine which ways the entity in question was praised or criticized given the context of the story.
5. Overall sentiment
On the face of it, sentiment seems straightforward since it answers the question of whether a story is positive or negative toward the entity in question. But we believe it's an extremely complex and nuanced matter, and it's something that's hard to measure accurately. I believe sentiment is how positively or negatively a story depicted the entity in question given the context of the story. This is a broad definition that accounts both for aspects of a story that were explicitly favorable or unfavorable and for elements of a piece that were implicitly positive and negative.
Since software cannot know or understand the context of a story, automated sentiment analysis systems effectively define sentiment slightly differently, though most don't come out and explain this. In my estimation, the question that automated sentiment systems answer is "what is the common connotation of the words in the article and/or in proximity to the entity in question?" While some systems can use different dictionaries for different clients or industries (e.g., a word like "sick" might be a positive term in one tool's dictionary for video game clients, while it's a negative word in that tool's standard dictionary for, say, food or other clients/industries), the systems still typically rely simply on the connotations of words to determine sentiment and do not account for context. More advanced systems use natural language processing to determine which entity in the story the positively or negatively connoted words are describing, but they're still looking only at the words expressed in the story, without understanding their context or any of the subtext.
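A toy version of that connotation-dictionary approach makes both its mechanics and its blind spot easy to see. The lexicon below is invented for illustration; real systems use far larger dictionaries and, as noted above, may swap them per industry:

```python
# A toy connotation-dictionary sentiment scorer: sum the polarity of
# words found in a lexicon. The LEXICON entries are hypothetical.

LEXICON = {
    "record": 1, "profit": 1, "praised": 1,      # positive connotations
    "bankruptcy": -1, "lawsuit": -1, "complaint": -1,  # negative ones
}

def dictionary_sentiment(story_text):
    """Label a story by summing the connotations of its lexicon words."""
    words = story_text.lower().split()
    score = sum(LEXICON.get(w.strip(".,!?"), 0) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(dictionary_sentiment("The company announced record profits."))
```

The blind spot: a story like "Shares fell after the announcement" is clearly bad news in context, but because none of its words appear in the lexicon, this scorer calls it neutral. That gap between word connotation and story context is precisely why human analysis still matters for sentiment.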
Given this, we believe that the best approach for accurately measuring sentiment is through supplementing automated analysis with human analysis. Automated analysis has its place – particularly when you need a quick and affordable way to get a decent understanding of how coverage is faring, and to do so on a large scale. It's also clear that, with the growing integration of machine learning into media measurement software, the accuracy of automated sentiment will only improve over time.
6. Engagement
Engagement metrics answer the question of how much people are interacting with content. Engagement typically is measured for social media, though it can be measured for traditional media attention as well (albeit with far more difficulty). Engagement usually is tracked in terms of likes, shares, and comments. Lastly, on social media, engagement generally is measured only for one's owned channels and content.
While these six metrics answer key questions that any PR/Communications person would want to know about his or her brand, it's important to understand that, individually, they do not answer the ultimate question of whether an organization's traditional and social media attention is moving the needle and affecting tangible, bottom-line results. For that, further analysis is needed . . . and that's where things start getting a little less straightforward. But we'll save that subject for a future blog post.