Google recently published a research paper about a new search algorithm called SMITH. According to Google, the SMITH algorithm outperforms the BERT algorithm at understanding long queries and long-form content. If you are a webmaster, you need to know about this algorithm, as it could have a significant impact on SEO.

What makes this new algorithm better than the existing BERT algorithm is its ability to understand passages within documents, much as BERT understands words and sentences. This enables the Google SMITH algorithm to understand longer documents well.

It is unclear whether Google has started using the SMITH algorithm, as Google rarely discloses which algorithms it is using at any given time.

Still, understanding how the Google SMITH algorithm works offers interesting insight into how Google views the future of online content.

What is the SMITH Algorithm?

SMITH is the abbreviation for Siamese Multi-depth Transformer-based Hierarchical (encoder). It is a recent algorithm from Google Research focused on understanding long queries and long documents.

SMITH tries to understand the full document and can grasp the context of individual passages within long content. Algorithms like BERT, in contrast, focus on understanding words within sentences.

The Google SMITH model is designed to understand passages in the context of the entire document.

The BERT algorithm is trained on data sets to predict randomly hidden words within sentences, whereas the SMITH algorithm is trained to predict the next block of sentences in the context of the whole document.

This helps the SMITH algorithm to understand larger documents better than the BERT algorithm.
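
To make the difference between the two training objectives concrete, here is a minimal illustration in Python. The sentences and mask labels below are invented for this sketch and do not reflect Google's actual training data or tokenization.

```python
# Illustrative only: contrasting the two pre-training objectives.
# The example text and mask labels are invented for this sketch,
# not Google's actual training format.

# BERT-style masked word prediction: hide individual words.
bert_input = "The quick brown [MASK] jumps over the lazy dog."
bert_target = "fox"  # the model predicts the hidden word

# SMITH-style masked sentence block prediction: hide a whole block of
# sentences and predict it from the surrounding document context.
smith_input = [
    "Search engines must understand long documents.",
    "[MASKED SENTENCE BLOCK]",
    "This helps match long queries to long-form content.",
]
smith_target = "SMITH models relations between blocks of sentences."
```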

Findings of Google's Research Paper on the SMITH Algorithm

The paper published by Google notes that the BERT algorithm has limitations. According to the researchers, BERT is limited to understanding relatively short documents, so they had to develop a new algorithm that can outperform BERT on longer documents.

According to the paper, the SMITH algorithm is intriguing because it can do something that BERT cannot. The Google SMITH algorithm is not intended to replace BERT; it was created to complement BERT by handling what BERT is unable to do.

Details of Google’s SMITH

The research paper published by Google explains that SMITH uses a pre-training approach similar to BERT and many other models. First, let us understand what algorithm pre-training means.

Pre-training means training a model on a data set before it is fine-tuned for a specific task. To pre-train these kinds of models, engineers mask (hide) random words within sentences, and the model tries to predict the masked words as part of its training.

For example, if a sentence reads, “Twinkle twinkle little ____,” the fully trained model will predict “star” as the missing word.

As the model trains and learns, it is gradually optimized to make fewer mistakes on the training data. The goal of pre-training is a model that makes accurate predictions before it is adapted to downstream tasks.
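
If you want to see masked word prediction in action, the sketch below uses the publicly available bert-base-uncased checkpoint through the Hugging Face transformers library. This is an independent demonstration of the idea, not Google's internal setup.

```python
# Demonstration of masked word prediction with a public BERT checkpoint.
# Requires: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT marks the hidden word with the special [MASK] token.
for prediction in fill_mask("Twinkle twinkle little [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")

# A well-trained model should rank "star" among its top predictions.
```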

The research paper states: “Inspired by the recent success of language model pre-training methods like BERT, SMITH also adopts the ‘unsupervised pre-training + fine-tuning’ paradigm for the model training.”

The researchers proposed a masked sentence block language modeling task for pre-training the SMITH model. This task is carried out in addition to the original masked word language modeling task used in BERT, in order to handle long text inputs.

Masked Sentence Block Language Modeling Task

Under the masked sentence block language modeling task, whole blocks of sentences are hidden during pre-training. The researchers explain that the relations between sentence blocks in a document are learned during this process.

When the input text is long, both the relations between words within a sentence block and the relations between sentence blocks within a document matter for understanding the content.

Therefore, the researchers masked both randomly selected words and randomly selected sentence blocks during pre-training.

The Google SMITH algorithm is designed and trained to predict blocks of sentences. It first learns the relationships between words, then levels up to learn the context of sentences and how those sentences relate to one another in long content.
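
The following is a minimal sketch of what this two-level masking could look like at the data-preparation stage. The block size, word mask rate, and sentence list here are illustrative assumptions, not the exact settings from Google's paper.

```python
import random

MASK_WORD = "[MASK]"
MASK_BLOCK = "[MASKED_BLOCK]"

def mask_document(sentences, block_size=2, word_rate=0.15, seed=0):
    """Illustrative two-level masking: hide random words AND one random
    block of consecutive sentences. Not the paper's exact recipe."""
    rng = random.Random(seed)

    # Group consecutive sentences into fixed-size blocks.
    blocks = [sentences[i:i + block_size]
              for i in range(0, len(sentences), block_size)]

    # Pick one sentence block to hide entirely (the prediction target).
    hidden_idx = rng.randrange(len(blocks))
    target_block = blocks[hidden_idx]

    masked = []
    for idx, block in enumerate(blocks):
        if idx == hidden_idx:
            masked.append(MASK_BLOCK)
            continue
        # Within visible blocks, mask random individual words.
        words = " ".join(block).split()
        words = [MASK_WORD if rng.random() < word_rate else w for w in words]
        masked.append(" ".join(words))

    return masked, target_block

sentences = [
    "SMITH reads whole documents.",
    "It groups sentences into blocks.",
    "Blocks are masked during pre-training.",
    "The model predicts the hidden block.",
]
masked_doc, target = mask_document(sentences)
print(masked_doc)
print("Target block:", target)
```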

The Outcome of the SMITH Algorithm Testing

After pre-training the SMITH algorithm, the researchers found that it performs better on longer text documents, and they concluded that SMITH is a better option than BERT for long documents.

Is the SMITH algorithm being used?

As mentioned above, Google has not explicitly confirmed that it is using the SMITH algorithm. However, the published research states that the SMITH algorithm goes beyond the current state of the art in understanding long-form queries and content.