A Content Scoring Team’s Step-by-Step Process for Scoring Content

Think about the content your team recently published. How would you rate it on a scale from 0 to 100? And how does your rating help your company?

Stumped? Consider the method Jared Whitehead devised to score content performance.

Jared works as an analyst in Red Hat’s Marketing Operations group. After 10 years of growth and acquisitions, the B2B tech company found itself in “continuous turmoil” in its approach to content.


Leigh Blaylock, who managed Red Hat’s Global Content Strategy group and worked with Jared, says the company “had so many acquisitions, so many products, so many marketing teams” that no one knew which content to say yes or no to.

Last year, Jared, Leigh, and their colleagues set out to get Red Hat’s content under control. They wanted to understand what they had, what they wanted to keep, what was working, what wasn’t, and what “working” even meant.

Here’s how they did it:

  • Built a content scoring team
  • Standardized content types
  • Audited the content
  • Developed a content scoring method
  • Created a proof of concept

And this is what they continue to do:

  • Find enthusiasts to promote scoring methods
  • Evolve content scoring methods
  • Audit content regularly

Red Hat’s new content scoring method proves its business value by giving content teams a consistent way to assess the performance of individual content assets, so everyone knows which content to say “yes” or “no” to.


According to @marciarjohnston, content scoring allows teams to consistently assess the performance of #content.

Leigh and Jared shared details of this initiative in their Intelligent Content Conference presentation, Content Scoring at Red Hat: Building and Applying a Repeatable Performance Model.

Selected Related Content: How to Document Your Content Marketing Workflow

1. Build a content scoring team

Jared describes two approaches to creating a content scoring method:

  • A single content group develops a scoring method that others follow.
  • A cross-divisional group develops a scoring method that works for everyone.

Both approaches work. Choose the one that makes sense for the people and content in your situation. In either case, choose someone to contribute to the performance scoring methodology who has a complete picture of the content and understands the systems used to create, tag, distribute, and manage that content.


For Red Hat, that meant involving Jared, from the marketing content team. The marketing content team has a complete picture of the company’s marketing assets and content systems, from branding to product marketing to corporate marketing. Team members could say, “This is our CMS, and this is our taxonomy. This is how we analyze the content. These are the tools we have available, and this is how to use them to get what you’re looking for.”


Having someone who understands the content to be scored and the systems that support it will give you a better sense of the other skills your team needs. You may want to bring in help for some things; for others, existing employees may be a natural fit.

Red Hat hired librarian Anna McHugh to join the team. Jared and Leigh call her the rock star of the project. “She sees all the marketing assets,” says Leigh. “She knows what’s available, and she does a tremendous job of analyzing those assets.”

Jared adds: “I could write a novel about Anna’s role. She has become a curator in addition to a librarian. And an analyst. She does everything.”


Selected Related Content: Why Content Marketers Need Digital Librarians

2. Standardize content types

The Red Hat team launched the initiative in 2012 by standardizing content types (white papers, datasheets, infographics, etc.) throughout the marketing organization. They wanted all business units to have a common understanding of each type of content the company creates.

To accomplish this basic governance task, Red Hat invited representatives from each marketing team to join the core team that developed the standard for the types of content they work on.

If, like Red Hat, you’re working on content scoring as a cross-departmental team, you need to standardize content types across departments. If instead a single content group is developing a scoring method, you don’t need to gather representatives from other groups, but you do need to standardize the content types within the group.


When working on #content scoring as a cross-functional team, standardize content types. @marciarjohnston

3. Audit the content

Next, the Red Hat team cleaned house with a content audit. Its resource library (redhat.com’s external content repository) had grown to over 1,700 assets. Leigh, Jared, and Anna didn’t know which ones were out of date or irrelevant, but they knew there was a lot of cleaning to do. “It was like having a room full of dust,” says Leigh. “I don’t want my visitors to get a sinus infection, leave, and never come back.”


They needed to figure out how to identify the dusty content assets and get approval to remove them from the multiple groups that had invested time and money in those assets. They found 419 content assets more than 18 months old, listed them on a shared spreadsheet, identified their owners, and asked the owners to determine which assets needed to stay available.

The team couldn’t expect content owners to review all of these assets at once, so they took 25 assets a week, conducting a rolling audit over the course of several months. Each week, the team emailed the owner of each piece, giving them a week to justify keeping the work in the resource library. Leigh explains:


We didn’t want to make it easy for them to simply keep it. We wanted to understand why they wanted to leave it there. Was it used in a nurture campaign or a promotion? If so, we could sometimes suggest alternatives.

Ultimately, by removing ROT (redundant, outdated, trivial content), the team reduced the library from more than 1,700 assets to about 1,200.
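The rolling-audit schedule above amounts to simple batching. Here is a minimal sketch in Python; the asset names are hypothetical (Red Hat tracked the real assets in a shared spreadsheet, not in code):

```python
# Sketch of a rolling content audit: split stale assets into weekly review batches.
def audit_batches(assets, per_week=25):
    """Return weekly review batches of at most `per_week` assets each."""
    return [assets[i:i + per_week] for i in range(0, len(assets), per_week)]

# Hypothetical stand-ins for the 419 assets more than 18 months old.
stale_assets = [f"asset-{n:03d}" for n in range(1, 420)]
batches = audit_batches(stale_assets)
print(len(batches))      # 17 weekly batches, i.e. roughly four months of auditing
print(len(batches[-1]))  # 19 assets in the final, short batch
```

At 25 assets per week, 419 assets take 17 weeks, which matches the "several months" the team spent.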

4. Develop a content scoring method

After cleaning up shop, the Red Hat team turned its attention to analyzing the remaining 1,200 content assets. Jared set out to create a content scoring method that would apply to all content types and content groups.


All the marketing groups used the same web analytics platform, so Jared used that tool to learn what was important to them. He found that each content type had its own key metrics:

  • Blog posts – time on page or percentage of the page scrolled
  • Videos – play rate or percentage of the video watched
  • PDFs – downloads

In other words, each group had its own way of saying “We’re winning; this is working,” depending on its content types. It was up to Jared to devise a universal way to score content performance. He needed everyone to speak the same language.


That common language of numbers had to work both for people who loved the numeric side of analytics and for people who preferred plain English: Did this content work? Did it do what we wanted?

Jared devised a scoring method that gives each content asset an overall score of 0 to 100. That number is derived from four subscores: Volume, Completion, Trajectory, and Recency. Each subscore is a number between 0 and 100, and each has a weighting factor that describes its relative importance for a particular asset.

Selected Related Content: 4 Google Analytics Reports All Content Marketers Must Use


The Volume subscore is a relative measure of traffic. “This number is relative to all other promotional material in the resource library; it’s not specific to a particular content type,” says Jared.

The Volume subscore represents awareness. It is a ranking: it shows how many people have seen a particular asset compared to views of the other assets on your site.

Example: If a Red Hat web page containing a downloadable white paper receives more traffic than 60% of the other Red Hat web pages containing downloadable assets, that page gets a Volume subscore of 60 out of 100.
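A percentile-style ranking like this is easy to sketch. The following is an illustration, not Red Hat’s actual code, and assumes you have raw view counts for every downloadable-asset page:

```python
# Volume subscore sketch: rank an asset's traffic against the rest of the library.
def volume_subscore(views, other_views):
    """Percent of the library's other assets this asset out-performs, 0-100."""
    beaten = sum(1 for v in other_views if views > v)
    return round(100 * beaten / len(other_views))

# Illustrative numbers: a page with 650 views against ten other pages.
library = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
print(volume_subscore(650, library))  # 60: beats 6 of 10 pages, as in the example above
```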


The Completion subscore is the percentage of visitors who downloaded the asset.

Example: If 40 of the 90 visitors to a particular page download its white paper, that’s a 44% download rate, and the page’s Completion subscore is 44 out of 100.
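The same arithmetic as code, using the numbers from the example (truncating to a whole number is an assumption that happens to reproduce the 44; Red Hat may round differently):

```python
# Completion subscore sketch: share of page visitors who downloaded the asset.
def completion_subscore(downloads, visitors):
    """Download rate as an integer percentage, 0-100 (truncated)."""
    return int(100 * downloads / visitors)

print(completion_subscore(40, 90))  # 44, matching the white-paper example
```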


The Trajectory subscore reflects a trend.

Example: In the first month, a web page has 900 visitors; in the second month, 600; in the third month, 300. Traffic to that page is declining. At Red Hat, that negative slope corresponds to a Trajectory subscore of 0.

If the number of visits has increased over the last three months, the Trajectory subscore reflects the positive slope: the steeper the slope, the higher the Trajectory subscore.

For example, say an asset was visited 10 times in the first week, 20 times in the second week, and 30 times in the third week. The slope of this asset (rise over run) is 30 divided by 3, which equals 10. The calculation breaks down as follows:

a rise of 30 (an increase of 10 in week 1 + 10 in week 2 + 10 in week 3)

divided by

a run of 3 (weeks)

Jared says the Trajectory scale should be set according to what you want to glean from the analysis and what is most useful to your organization. The definition of a strong slope varies from company to company. For example, if you consider a slope of 10 strong, as above, you could award this asset 100 Trajectory points, interpreting it as gaining an average of 10 additional visitors per week. You can set the point scale arbitrarily (slopes greater than X get Y points), or you can evaluate the average slope across all assets and build a scale based on that distribution.
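A sketch of the slope plus an arbitrary point scale. Note that “rise over run” is read here as (last − first) ÷ number of intervals, which reproduces both worked examples in the article (a slope of 10 for the 10/20/30 weekly visits, and a negative slope for the 900/600/300 declining months); the “strong” threshold is a stand-in you would tune for your own organization:

```python
# Trajectory sketch: rise over run, then an arbitrary point scale on the slope.
def trajectory_slope(visits):
    """(last - first) / number of intervals; negative when traffic declines."""
    return (visits[-1] - visits[0]) / (len(visits) - 1)

def trajectory_subscore(slope, strong=10):
    """Flat or falling traffic scores 0; slopes at or above the (tunable)
    'strong' threshold score 100; anything in between is pro-rated."""
    if slope <= 0:
        return 0
    return min(100, round(100 * slope / strong))

print(trajectory_slope([10, 20, 30]))    # 10.0 extra visits per week
print(trajectory_subscore(10.0))         # 100: a "strong" slope on this scale
print(trajectory_subscore(trajectory_slope([900, 600, 300])))  # 0: declining traffic
```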

The Red Hat team understands that outliers can affect the slope. An asset might get zero views in the first month, zero views in the second month, and two views (an outlier) in the third month. The trajectory is up, and that’s a positive sign, but it doesn’t necessarily indicate stable traffic. Outliers are accounted for in the Recency subscore, which indicates whether traffic was sustained and stable during the analysis window.


The Recency subscore recognizes assets that maintain their value. Red Hat sets a monthly benchmark goal for each asset, and assets accumulate points based on the months in which they met the benchmark:

  • 40 points if the benchmark was met in the most recent month
  • 30 points if met one month before that
  • 20 points if met two months before
  • 10 points if met three months before

Example: Red Hat sets a benchmark of 50 downloads per month for an asset and evaluates the metric on July 1. The asset’s Recency points break down as follows:

  • June (31 downloads) – 0 points
  • May (49 downloads) – 0 points
  • April (51 downloads) – 20 points
  • March (60 downloads) – 10 points

The asset’s Recency subscore in July is 30 out of 100 (0 + 0 + 20 + 10).
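The point accrual above can be sketched directly. Assumed here, for illustration, is a simple newest-first list of the last four months’ counts:

```python
# Recency subscore sketch: points for each recent month that met the benchmark.
def recency_subscore(monthly_counts, benchmark):
    """monthly_counts lists the last four months' metric values, newest first."""
    points = [40, 30, 20, 10]  # the most recent month is worth the most
    return sum(p for p, n in zip(points, monthly_counts) if n >= benchmark)

# July 1 evaluation: June, May, April, March downloads against a benchmark of 50.
print(recency_subscore([31, 49, 51, 60], benchmark=50))  # 30 = 0 + 0 + 20 + 10
```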

As with Trajectory, the Recency subscore accounts for outliers. If the numbers for the first two months are significantly lower (low traffic or few downloads), a positive trajectory (a number that increases each month) can be paired with a low Recency subscore. “The Recency subscore provides a gut check on the trajectory, showing whether the slope is a fluke of highly volatile traffic or is underpinned by strong, stable traffic,” Jared says.

Another example is an asset that gets plenty of traffic in the first, third, and fourth months, and very little in the second month. The slope of that asset may still come out positive in the overall calculation, in which case you wouldn’t know about the second month’s dip without digging deeper. The Recency score flags the low-traffic month. “You know right away that the asset merits investigation,” says Jared.


The four subscores matter differently for each content asset and are weighted accordingly. In other words, the subscore weighting is unique to each content asset and emphasizes what the team considers the highest priority for that asset. Each subscore is assigned a weight percentage, which folds those shifting priorities into a standardized overall score. This weighting makes it easier to compare the overall score of one asset with another’s.

How does weighting work? For certain content, Red Hat doesn’t care much about traffic (the Volume subscore); it simply wants visitors who reach a downloadable asset to think, “Hey, I want to know about this topic. I’ll download this.” In that case, the Completion subscore matters more than the others, so Red Hat weights Completion higher than the other three.


Or, for awareness-focused content, Red Hat may care about how many people reached the asset (Volume) and not about Recency, and weight that asset’s subscores accordingly.


Overall score

To calculate an asset’s overall score, Red Hat multiplies each subscore by its weight percentage and sums the results. In the worked example, the overall score for the content asset is 45.
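The weighting table from the presentation did not survive in this copy, so the weights below are purely illustrative; the mechanism, though, is as described: multiply each subscore by its weight percentage and sum.

```python
# Overall-score sketch: subscores weighted and summed into one 0-100 number.
def overall_score(subscores, weights):
    """Weights are fractions that must total 1.0 (i.e., 100%)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 100%"
    return round(sum(subscores[k] * weights[k] for k in subscores))

# Illustrative numbers only; Red Hat's worked example lands on 45 with its own weights.
subscores = {"volume": 60, "completion": 44, "trajectory": 30, "recency": 30}
weights   = {"volume": 0.15, "completion": 0.50, "trajectory": 0.15, "recency": 0.20}
print(overall_score(subscores, weights))  # 42 with these made-up weights
```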


The overall score is neither good nor bad on its own; it is meaningful only in comparison to the overall scores of the rest of your content. That’s why the scores are normalized using weight percentages: the overall score lets teams compare content of different types whose metrics are otherwise hard to compare.

There is no absolute rule for what to do with an overall score of 45. If the rest of your content assets average 32, then 45 is great; this piece is doing well. But if your content assets average 60, your team should investigate why this piece isn’t working as well as the others.

Leigh notes that the scoring method answers more than the question “Did this piece work?” The team can look at the subscores and say, “OK, this piece has a great Volume count, but its Completion number is terrible. Is our campaign advertising reaching the right people? Are we so focused on driving traffic that we’re attracting people who don’t want this? Is the landing page ineffective? What’s happening?”

A thoughtful scoring method can do more than answer “Did it work?” says @leighblaylock.


Like all scoring methods, this one has its pitfalls. For one thing, a low score doesn’t necessarily mean a piece of content isn’t working; scoring is relative, and people need training to interpret the overall score. Jared gives an example: “If people see a score from 0 to 100, they think 90 or above is an A, 80 to 90 is a B, 75 is a C. Our method is not a letter-grade method.”

Another pitfall Jared has witnessed is people looking only at the score. “The numbers are not a strict rule. They aren’t intended to act as the only data point for deciding which content to discontinue. The scores are not conclusive.”

Scores provide only one way to estimate the performance of individual content assets. Ultimately, people need to consider what’s behind the score and decide which action makes sense.

Selected Related Content: How to Measure Performance and Improve Content Marketing

5. Create a proof of concept for the scoring method

Scoring content is not a quick process, and it requires many stakeholders. How do you get people to set other tasks aside and spend the time this process needs?

Jared and Leigh suggest starting with a proof of concept to show the team what kind of new insights it stands to gain.

Content scoring is not a rapid process and requires many stakeholders, says @marciarjohnston.

Red Hat used this experimental period to collect feedback from the people involved in the project. “When we call something a proof of concept, people are willing to provide critical, helpful feedback rather than glancing at it and saying, ‘No, it doesn’t work here,’” says Leigh.

Red Hat’s proof of concept was built as an analytics sandbox: a spreadsheet workbook populated with data from the web analytics platform’s API. After the data was loaded into the workbook, Jared created functions and calculations to summarize it into an experimental content scoring model. He then shared the results as a CSV file for others to review and comment on.

Once stakeholders had reviewed and approved the content scoring model, the content team presented it roadshow-style. They talked with the marketing leadership team and several marketing teams, gathering feedback and encouraging adoption. From there, as more people in marketing came to understand the model and its potential, Red Hat colleagues began asking Jared, Leigh, and Anna for analyses.

The team designed the proof of concept around the question, “What methodology gives us what we want?” It took a lot of whiteboard sessions and a lot of math. Once they had built what they needed (in theory), Jared “built it to be updatable and live, allowing others to test and fine-tune the model as they were exposed to it.”

The current iteration is not far removed from that proof-of-concept workbook. Red Hat is in the final stages of rolling it out to marketing, including a dashboard utility, updated daily, that everyone can see.

Here’s Jared’s advice for creating a proof of concept:

Start with something. Anything. If you have an idea or a general sense of what you’re trying to achieve, build what you can with what you have. Others can understand your process and goals much more easily if they can look at a prototype and play around with it.

6. Find enthusiasts to promote scoring methods

After the proof of concept, once the scoring method is settled, the work has only begun. If you simply throw your method out there, “it will die within a month. No one will use it,” says Jared.

You must be an evangelist. You must believe that others in your organization can use your method now, and gladly go to people and say: “We have something new that you may not be accustomed to yet. We can take you there.”

And you can’t do it alone. Find the people in your organization who get excited when they hear about your scoring method, people who say, “I can use this. This will help me start conversations that aren’t currently taking place.”

Find experts who understand your scoring method and can speak enthusiastically with other content providers and owners about what’s working and what’s not. They can say, “I’m not saying your white paper or video was terrible or great. I’m telling you what the traffic shows.”

It’s hard to argue with a respected person who asks, “Given how it’s performing, what should we do with this content?”

7. Continue to develop scoring methods

Scoring methods need to evolve. Look for opportunities to gain new insights into how people interact with your content. For example, Red Hat has plenty of data about how many people download its PDFs, but the data stops there. As Jared explains, that limitation is helping build a business case for using more HTML content.

