In the world of SEO and content writing, we are always looking for ways to optimize our language and word usage for page rankings. Our SEO analyses tell us how authorities on a given topic write, the types of language they use, and which keywords matter.
Most of this is based on the way that Google’s PageRank algorithm works. However, as of December 2019, Google has officially rolled out its BERT update.
If you’re worried that this update will drastically affect your SEO optimization plan, well, stop worrying.
According to Google, this update will make your life easier.
Danny Sullivan, Google’s search liaison, tweeted back in October that BERT was nothing to be worried about.
However, much of the community noticed drastic changes after the October update, including major swings in SERPs and traffic. So what is BERT and do we need to take this into consideration moving forward? Before we address this question, it’s important to nail down a few expectations around this article.
First, because BERT is more about machine learning than the territory content strategists are usually familiar with, we will not be providing an introduction to machine learning or the history of its development. We will, however, include a healthy number of links for those who are curious.
You can consider this an introductory guide for those wanting to get their feet wet with BERT. Because the development of machine learning research is fascinating, and relevant to our jobs as SEO and content experts, we want to dive into the research around AI and machine learning at a later date.
This article is the first part of a four-part series that will address the following:
- What is BERT in understandable language?
- How BERT affects SEO and content marketing
- The future of BERT and machine learning
- Strategies for adapting alongside BERT’s changes (COMING SOON)
So let’s dive in!
Hello BERT, nice to meet you.
BERT is a natural language processing (NLP) based deep-learning algorithm that is designed to adapt to our language use, rather than the other way around. In essence, the algorithm is learning the dynamics and nuances of 70+ languages in order to better understand a searcher’s intent when they use normal language.
Or, as Google puts it: “This technology enables anyone to train their own state-of-the-art question answering system.” BERT uses a variety of machine learning models in order to apply relational and contextual cues to our search queries.
The three main models are bi-directional reading, part-of-speech (POS) tagging, and transfer learning, but if you want to dive into the science behind it, you can start here.
Let’s first break down these three models:
- Bi-directional reading
The machine reads text both left-to-right and right-to-left, encoding each word based on the meaning embedded in its surrounding context
- POS tagging
Java-based software that tags fine-grained parts of speech, such as plural noun, verb, preposition, etc.
- Transfer learning
When a machine learns a task and what it has learned is reused as the starting point for a subsequent task. This process of adapting prior knowledge is referred to as fine-tuning the machine’s knowledge processing capabilities
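The bi-directional idea can be sketched with a toy example. To be clear, this is not BERT itself (which uses transformer neural networks), just a minimal, hypothetical illustration of why reading the context on both sides of a word helps resolve its meaning; the sentence and association scores below are invented for the example.

```python
# Toy sketch of bi-directional context (illustrative only, not BERT):
# we pick a filler for a masked word by scoring it against the words
# on BOTH sides, rather than only the left context as older
# left-to-right language models did.

SENTENCE = ["the", "bank", "of", "the", "[MASK]", "was", "muddy"]

# Hypothetical association scores between candidate fillers and
# context words (hand-picked for this example).
ASSOCIATION = {
    ("river", "bank"): 0.9, ("river", "muddy"): 0.8,
    ("vault", "bank"): 0.7, ("vault", "muddy"): 0.1,
}

def score(candidate, context_words):
    # Sum the candidate's association with every context word.
    return sum(ASSOCIATION.get((candidate, w), 0.0) for w in context_words)

def fill_mask(tokens, candidates):
    i = tokens.index("[MASK]")
    left, right = tokens[:i], tokens[i + 1:]
    # Bi-directional: score against the left AND right context together.
    return max(candidates, key=lambda c: score(c, left + right))

print(fill_mask(SENTENCE, ["river", "vault"]))  # → "river"
```

A left-to-right reader would only see “the bank of the” and could not tell a river bank from a financial one; the word “muddy” to the right is what tips the decision.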
The BERT model was fine-tuned on several open-source data sets, including Wikipedia and MS MARCO, a collection of real, anonymized question-and-answer Bing search queries. It’s inaccurate to think of BERT as simply an update to our SEO vocabulary.
BERT represents the cumulative and ongoing collaboration of multiple deep learning and AI networks. It’s also inaccurate to give sole credit to Google for the knowledge behind BERT’s use of natural language processing, even though the release of BERT (and making it open source) has been a sort of watershed moment for the machine learning community.
BERT is described as the best machine learning model for handling language-based tasks because it effectively combines natural language processing with transfer learning.
Research groups compete within the natural language processing community, for instance on the SQuAD dataset, to improve the adaptive learning that computational models are capable of. Other “BERTs” have been created, but because BERT is considered the most intuitive, it has been applied to a variety of other machine learning language models. Microsoft’s MT-DNN, or Multi-Task Deep Neural Network, is a great example of how open-sourcing BERT has had a positive impact.
Since 2015, Microsoft has been developing a multi-task modeling tool that is supervised (told what the objective is) and regularly directed, so that the machine can leverage large amounts of cross-task data while also generalizing across a variety of tasks (such as domain classification and ranking in web searches).
This type of multi-tasking eliminates redundancy so that the machine becomes “smarter” across a variety of tasks. With the introduction of BERT, Microsoft’s deep neural network (DNN) was able to apply BERT’s language encoding so that sentence intent was better understood.
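The transfer learning and fine-tuning ideas running through MT-DNN and BERT can also be sketched with a toy analogy. This assumes nothing about the real internals of either system: we “pre-train” simple word scores on a general corpus, then nudge them with a small amount of task-specific data rather than starting over from zero.

```python
# Toy sketch of transfer learning / fine-tuning (illustrative only):
# knowledge learned on one task is reused as the starting point for
# the next, instead of retraining from scratch.
from collections import Counter

def pretrain(corpus):
    # "Pre-training": learn general word frequencies from a large,
    # generic corpus.
    return Counter(word for sentence in corpus for word in sentence.split())

def fine_tune(weights, task_corpus, boost=5):
    # "Fine-tuning": start from the pretrained weights and nudge them
    # with a small amount of task-specific data.
    tuned = Counter(weights)
    for sentence in task_corpus:
        for word in sentence.split():
            tuned[word] += boost
    return tuned

general = pretrain(["the cat sat", "the dog ran", "the bank opened"])
tuned = fine_tune(general, ["the patient visited the clinic"])
# The tuned model keeps its general knowledge ("the") while gaining
# task-specific knowledge ("patient") it never saw during pre-training.
```

The point of the analogy is the reuse: the second task inherits everything the first task learned, which is why fine-tuning needs far less data than training from scratch.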
What does this look like?
In theory, BERT should learn more about how humans converse through how we search online, which could potentially be a problem: if we have adapted our language so that a machine can understand us, how much have we fundamentally changed the way we speak? Accordingly, BERT is really designed for search queries that are longer and more sentence-like, and for queries that place a lot of emphasis on “to,” “for,” and other prepositions. Here’s an example of how BERT promises to do better for the search community: for the query “2019 brazil traveler to usa need a visa,” Google showed that pre-BERT results surfaced pages about U.S. citizens traveling to Brazil, whereas BERT grasps the importance of the word “to” and returns results about Brazilians traveling to the U.S.
Much of what the science behind BERT uncovered is that humans are, most of the time, super vague. According to Stephen Clark, a DeepMind scientist who left Cambridge University, “Ambiguity is the greatest bottleneck to computational knowledge acquisition, the killer problem of all natural language processing.” And we get this.
But, as part of a company that regularly focuses on keywords in online language, it’s also hard to remember how we used to search before we knew that keywords were important.
Many of us are still learning about BERT and waiting to see what it is capable of. Google seems to think that we can stop using so much keyword-y language. However, they know that BERT isn’t perfect and it will still mess up from time to time.
At first glance, it seems that we can let our search queries be more relaxed than SEO has tailored them to be. But because BERT places meaning on every word, in every direction, it’s still important to be concise and direct in our language.
However, this isn’t something that we can necessarily control. Much of the sway that BERT holds points toward a more open market. Instead of keyword-driven trends having a stranglehold, others online, including niche businesses, businesses with smaller budgets, and those with a variety of offerings, might draw a more eclectic crowd.