
Level Up Your Marketing Campaign

 

With the unveiling of Google’s BERT update to its search algorithm, content marketers all over the world have been trying to navigate the shockwaves.

BERT is a natural language processing (NLP) model designed by Google to read our search queries more effectively. Since the BERT update, content marketers have become more attuned to the way BERT can intuitively (and sometimes not so intuitively) read our words.

To avoid sinking like a stone in the SERPs, let’s look at how this algorithm affects SEO and content marketing, and how you can make it work for you.

This article will review:

  • How BERT is improving our search experience
  • How BERT impacts informational queries
  • Actionable steps to adapt your content strategy to BERT’s changes

Content, Context, and Relational Predictability

As mentioned in earlier pieces, BERT is the product of years of testing and machine learning research. Its main goal is to read our search queries in a way that more closely aligns with how humans interact with each other. This means that a) we can loosen up the conciseness of our search queries, and b) BERT may be able to recognize things about human language and culture that were previously elusive to machine learning.

As the header suggests, BERT relies on three things: the content itself, the context of that content, and what the user means by the words they use, set against the variety of ways those words can be used.

Content

Recognizing that the BERT algorithm needs to read our content is a no-brainer. The importance of content in marketing is, in fact, not new; the concept has been around for ages. However, with the emergence and proliferation of internet connectedness, content marketing gives businesses, marketers, and users a more direct line of contact.

Because of a few quirks of human communication (primarily our vagueness), much of BERT’s task is language understanding, and natural language processing is what makes that possible.

And because BERT is open-source, Google (and a plethora of researchers) can also build on it to track, log, and categorize all this communication for future reference.
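To show what “open-source” means in practice, here is a minimal sketch that loads the publicly released BERT checkpoint and turns a query into contextual vectors. It assumes the Hugging Face transformers library and PyTorch (my choice of tooling, not something this article or Google prescribes), and the sample query is made up; Google’s production search stack itself is not public.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load the publicly released BERT weights (the open-source checkpoint,
# not Google's production search system).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# A made-up search query for illustration.
inputs = tokenizer("how to adapt a content strategy to bert", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional contextual vector per token in the query.
print(outputs.last_hidden_state.shape)  # (batch, num_tokens, 768)
```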

Context

This is where context comes into play, and it was one of the biggest hurdles that Fred ran into (for more about Fred and its predecessors, see here). Until BERT was unleashed in late 2019, we were all too familiar with this issue of context. Fred and other natural language algorithms were continually storing keywords and data in order to facilitate our searches. However, some of the results were drastically wrong.

Why?

Primarily, context issues caused by unidirectionality. Prior to the BERT update, natural language models were trained to understand language by reading left to right. The processor would read a single sentence (or query) in the typical Western direction, but once it moved past that sentence, it could not go back and reflect on the previous one.

This made inference and intent-based language very difficult. Every search query had to be written in very specific terms; hence, keyword searches. Because of unidirectional algorithms, we became accustomed to searching in fragments.
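To make “unidirectional” versus “bidirectional” concrete, here is a toy sketch of the two attention patterns. It is my own illustration in Python with NumPy (not anything Google has published), and the example query is invented: a left-to-right model hides everything after the current word, while a bidirectional model like BERT reads the whole query at once.

```python
import numpy as np

# A made-up search query, split into tokens.
tokens = ["travel", "to", "usa", "from", "brazil", "need", "visa"]
n = len(tokens)

# Unidirectional (left-to-right) mask: row i may only attend to
# positions 0..i, i.e., each word sees only what came before it.
causal_mask = np.tril(np.ones((n, n), dtype=int))

# Bidirectional mask (the "B" in BERT): every word can attend to
# every other word, so "to" and "from" are read in the context
# of the entire query.
bidirectional_mask = np.ones((n, n), dtype=int)

print("left-to-right:\n", causal_mask)
print("bidirectional:\n", bidirectional_mask)
```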

Relational Predictability

Here is where the BERT algorithm update comes in. The researchers who built BERT trained it with a bidirectional language model, using an approach called the “masked language model.”

The “masked language model” (or MLM) is actually rooted in a very old idea by computing standards, the Cloze task from 1953. MLM trains a language model by masking random tokens and asking the model to predict them, which builds its ability to predict vocabulary from context.

Researchers then paired MLM with something they call “next sentence prediction.” While intuitive to us, next sentence prediction was not something earlier models handled; in fact, they only looked at one sentence at a time. If a query ran too long, its meaning simply would not carry across.

By training the model to predict masked words and look back across sentences, and then fine-tuning it for specific tasks, scientists effectively taught it to predict and to relate, two things that might not come naturally to an alien learning human language.
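You can watch masked-word prediction work with the open-source BERT checkpoint. The sketch below assumes the Hugging Face transformers library (my choice; the article does not specify any tooling): the model fills in the [MASK] token using the words on both sides of it.

```python
from transformers import pipeline

# Load the publicly released BERT model behind a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the masked word from the words on BOTH sides of it.
predictions = unmasker("Content marketers write for people, not for [MASK] engines.")

for p in predictions:
    # Each prediction carries the candidate word and a confidence score.
    print(f"{p['token_str']}: {p['score']:.3f}")
```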

The Three Types of Search Intents

When discussing BERT, it’s important to recognize that each query is categorized by search intent. This means content can be organized for both users and machines so that queries pull up accurate results related to the search.

Search intent not only registers what a searcher is trying to do, it also directs users more precisely. For example, if you’re familiar with a certain brand, you can probably pull up that brand rather easily, so finding the brand itself might not be the problem. Instead, you may know exactly the type of content you’re looking for and run the search in order to speed up that process.

We can organize search intents in three categories: informational, navigational, and transactional.

  • Informational: Informative content is okwrite’s bread-and-butter. This is the type of content you go looking for when you are trying to get your bearings, learn a new skill or industry, or when curiosity strikes. Informative content tends to be large and over-arching. An informational query might pull up how-tos or what-is pieces.
  • Navigational: Navigational search queries help users get from one known point to another. The other point is known in a lot of ways, except for one—the actual URL. Navigational relies most on SEO because these search queries need to pick up on priority keywords. If I search for okwrite blog and our company’s blog doesn’t pull up, then the site-level SEO is configured wrong.
  • Transactional: Similar to navigational queries, a transactional search query is dedicated to making a purchase. Usually, a set of sub-links will pull up in the search results under the website you’re looking for. By searching a brand in conjunction with a product name, a sale, or the specific item, service, or product you want, you can bypass slow-loading landing pages and tricky pop-ups.
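As a rough illustration of these three buckets, here is a hypothetical keyword-based sketch. It is a toy heuristic of my own (real search engines use learned models, and BERT’s internals are not public), so the hint lists are invented purely for illustration.

```python
# Toy heuristic for bucketing queries by search intent.
# The hint lists are invented for illustration only.
TRANSACTIONAL_HINTS = {"buy", "price", "pricing", "coupon", "discount", "order"}
NAVIGATIONAL_HINTS = {"login", "homepage", "blog", "site", "website"}

def classify_intent(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL_HINTS:
        return "transactional"
    if words & NAVIGATIONAL_HINTS:
        return "navigational"
    # Most long-tail queries default to informational.
    return "informational"

for q in ["how to adapt a content strategy to bert",
          "okwrite blog",
          "buy seo content package price"]:
    print(f"{q!r} -> {classify_intent(q)}")
```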

Applying BERT to Your SEO Strategy

BERT is most receptive (and malleable) toward informative searches, most likely because informative searches have the most leeway in terms of the “correct” answer. So these suggestions primarily relate to SEO strategies for informative pieces.

Let’s look at some content-related page-level factors for considering BERT:

  • Keyword in Title Tag: SEO best practices suggest that the primary search keyword should be in the title tag and match your page title exactly. This will only work if the content on that page does exactly what the title (tag) says it will.
  • Title Tag Starts with Keyword: Simply starting the title tag with the keyword will not save a page that does not cover the content in depth and accurately, and it will not help you force a ranking. It only helps if, again, the title makes sense for the article. Otherwise, BERT’s contextual reading of the page, based on the content and your headers, will weed out your article as spam.
  • Keyword in Description Tag: Your description can calm down a bit. If your article solves a problem, there is no need to leave the reader hanging: answer it in the description itself. If readers know they will find their answer on your site, they are more likely to click through for more context and information. While many informative searches can be answered in one sentence, that sentence often does not convey the full context of the question and its variable answers, so users will click through to find it.
  • Keyword in H1 Tag: Put relevant keywords in H1 tags to guide users. This should not mean repeating the same keyword, as that will lead BERT to question the article’s authenticity. Is the use of that keyword warranted and relevant? Should another keyword be used in its place? Consider these questions before automatically shoving the keyword into an H1.
  • TF-IDF: Another very important factor. TF-IDF (term frequency-inverse document frequency) measures how often a term appears in your article relative to how common it is across other documents, so it rewards terms that are distinctive rather than merely repeated. How much weight to give TF-IDF depends on the purpose of the content; for informative blog content, stack clear, related words close to the keyword so the keyword’s main purpose is understandable (see the sketch after this list).
  • Table of Contents (ToC): While shorter articles might not need a table of contents, a linkable ToC will help readers navigate a longer page. It also provides anchor links that others can use in their backlinking strategy. ToCs are not meant to be grandiose but useful, and BERT will appreciate this.
  • Keyword Density: If you are stuffing keywords, you are most likely not giving readers enough of the information they need. Keyword stuffing is a weak tactic that should be abandoned. This is different from keyword density, which should reflect an appropriate number of keywords for the topic and the article’s goals; adjust your keyword research to reflect this (the sketch after this list includes a simple density calculation).
  • Latent Semantic Indexing (LSI) Keywords in Content: While maybe not as important, LSI keywords will help set your article apart from the rest. LSI keywords are terms semantically related to your primary keyword, and they matter most when a keyword has more than one meaning: the related terms make clear which meaning you intend, and that clarity shapes how your page ranks. By paying attention to keyword meanings, you also show BERT that you have put more thought into this than pages that do not provide the relevant context.
  • LSI Keywords in Title and Description: Clarifying a potentially double-meaning term in your title and description helps lower bounce rates, which boosts your page where it is supposed to rank rather than hurting it across all topical searches.
  • In-Depth Coverage: BERT values more in-depth coverage than superficial coverage. It’s just that simple.
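For the TF-IDF and keyword-density bullets above, here is a short sketch in Python. It assumes the scikit-learn library and a made-up three-page corpus (neither is mentioned in this article), and the numbers are illustrative, not any ranking formula Google has published.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# A made-up mini-corpus: your article plus two competing pages.
docs = [
    "bert seo guide how bert reads search intent and page context",
    "keyword research basics for small business websites",
    "technical seo checklist for crawling and site speed",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# TF-IDF scores for the first document (your article): high scores mean
# a term is frequent here but rare in the other documents.
scores = dict(zip(terms, tfidf.toarray()[0]))
for term, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{term}: {score:.3f}")

# Keyword density is plain arithmetic: keyword occurrences / total words.
words = docs[0].split()
density = words.count("bert") / len(words)
print(f"keyword density for 'bert': {density:.1%}")
```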

Adapting to BERT

Overall, adapting your marketing to BERT should not be difficult. In fact, if you’ve done everything right, you will see your pages balance out, and more users will find a relevant result. So while you might see a dip in traffic, know that the traffic you lost was not actually relevant to your content anyway.

If you see an increase in your bounce rate, assess it and make changes. An increase in your bounce rate following BERT most likely suggests that your content is misleading (assuming the cause is page-related only).

In order to adapt to BERT, focus your content marketing strategy on:

  1. Providing great content that solves, addresses, and answers specific questions and queries
  2. Being concise when providing definitions
  3. Not obsessing over length; there is no right or wrong word count
  4. Using only keywords that are relevant
  5. And, of course, producing quality content

A lot of the adjustment with these algorithm changes is not only about end-users adjusting to BERT, but also about BERT learning more about how we interact with Google’s search engine. Staying consistent and providing accurate, clear content will allow BERT to recognize good content over misleading or confusing content.
