Google Panda
First put into place in February 2011, Panda was focused on quality and user experience. It was designed to eliminate black-hat SEO tactics and web spam.
Google Penguin
The Google Penguin update launched in April 2012 to better catch sites deemed to be spamming Google's search results, in particular those doing so by buying links or obtaining them through link networks designed primarily to boost Google rankings.
Hummingbird
Unveiled in August 2013, Hummingbird made the search engine's core algorithm faster and more precise in anticipation of the growth of mobile search.
Cache
A cache is the snapshot a search engine stores when its bot or crawler visits your page or finds an update on the site.
Indexing
After reading the updated elements, the search engine stores them in its database (the index).
Crawler
It is a program that visits websites and reads their pages and other information in order to create entries for a search index.
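To make the crawl-and-index cycle described above concrete, here is a minimal sketch in Python. It assumes the third-party requests and beautifulsoup4 packages and uses a placeholder start URL; a real crawler such as Googlebot also handles robots.txt, politeness delays, scheduling, and rendering.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Placeholder starting point; replace with a site you are allowed to crawl.
START_URL = "https://example.com/"

def crawl(start_url, max_pages=10):
    """Tiny crawl loop: fetch a page, store its text, queue its links."""
    index = {}            # url -> extracted page text (a toy "search index")
    queue = [start_url]
    seen = set()

    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen or not url.startswith("http"):
            continue
        seen.add(url)

        resp = requests.get(url, timeout=10)
        if resp.status_code != 200:
            continue

        soup = BeautifulSoup(resp.text, "html.parser")
        index[url] = soup.get_text(" ", strip=True)   # read the page and store it

        # Follow hyperlinks to discover new content (the crawling step).
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))

    return index

if __name__ == "__main__":
    pages = crawl(START_URL)
    print(f"Indexed {len(pages)} pages")
```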
RankBrain
Rolled out in spring 2015, this update was announced in October of that year. Integrating artificial intelligence (AI) into all queries, RankBrain uses machine learning to provide better answers to ambiguous queries. In short, RankBrain tweaks the algorithm on its own.
Depending on the keyword, RankBrain will increase or decrease the importance of backlinks, content freshness, content length, domain authority, and other signals.
Then, it looks at how Google searchers interact with the new search results. If users like the new results better, the tweak stays. If not, RankBrain rolls back to the old algorithm.
RankBrain has two main jobs:
- Understanding search queries (keywords)
- Measuring how people interact with the results (user satisfaction)
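The query-dependent weighting described above can be illustrated with a toy sketch. This is a conceptual illustration only, not Google's actual system; the signal names, weights, and page scores are invented for the example.

```python
# Toy illustration of query-dependent weighting of ranking signals.
# All signal names, weights, and page scores here are made up.

PAGE_SIGNALS = {
    "page-a": {"backlinks": 0.9, "freshness": 0.2, "content_length": 0.7},
    "page-b": {"backlinks": 0.3, "freshness": 0.9, "content_length": 0.4},
}

# Hypothetical per-query weights: a news-style query cares more about freshness,
# an evergreen how-to query cares more about backlinks.
QUERY_WEIGHTS = {
    "election results today": {"backlinks": 0.1, "freshness": 0.8, "content_length": 0.1},
    "how to tie a tie":       {"backlinks": 0.6, "freshness": 0.1, "content_length": 0.3},
}

def rank(query):
    """Score each page with the weights chosen for this query, highest first."""
    weights = QUERY_WEIGHTS[query]
    scores = {
        page: sum(weights[s] * value for s, value in signals.items())
        for page, signals in PAGE_SIGNALS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank("election results today"))   # freshness-heavy ordering
print(rank("how to tie a tie"))         # backlink-heavy ordering
```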
Fred
Fred is an algorithm update that targets black-hat tactics tied to aggressive monetization. Monetization here means converting existing website traffic into revenue, for example by pushing visitors toward ads or affiliate offers.
This includes an overload of ads, low-value content, and little added user benefit. Some sites hit by Fred were dummy sites created purely for ad revenue, but the majority of affected websites were content sites that carry a large number of ads and appear to have been created to generate revenue rather than to solve user problems.
Possum
Possum filters your business out of local results if it has a duplicate, similar, or second listing.
It affected the following:
- Businesses outside the city limits.
- Separate business locations at the same address as a similar business.
- Two or more businesses owned by the same company.
Owl
Owl targets fake-news-style content, which Google calls "offensive or clearly misleading content".
Google's Mobile-First Index
Google's Mobile-First Index ranks the search results based only on the mobile version of the page. And yes, this occurs even if you're searching from a desktop.
Before this update, Google ran two indexes side by side: a mobile version and a desktop version.
Link spam update
This update is even more effective at identifying and nullifying link spam broadly, across multiple languages. Sites taking part in link spam will see changes in Search as those links are re-assessed by the algorithms.
Nullifying link spam. Note that the word Google used here was "nullifying," which does not necessarily mean "penalize," but rather to ignore or simply not count. Google's efforts around link spam have been to ignore and not count spammy links since Penguin 4.0 was released in 2016.
Might feel like a penalty. While Google may not penalize your site for these spammy links, if Google ignores or nullifies links that may have been helping a site rank well in Google Search, that might feel like a penalty. In short, if you see your rankings drop sharply, it might be related to this update.
Fetch
Fetches a specified URL in your site and displays the HTTP response. Does not request or run any associated resources (such as images or scripts) on the page. This is a relatively quick operation that you can use to check or debug suspected network connectivity or security issues with your site, and see the success or failure of the request.
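Outside Search Console, the same idea can be reproduced with a quick script: a plain HTTP fetch retrieves only the HTML document and its response status, without requesting any of the page's images or scripts. A minimal sketch in Python, assuming the requests library and a placeholder URL:

```python
import requests

# Placeholder URL; substitute a page on your own site.
url = "https://example.com/some-page"

# A single GET request: only the HTML document is fetched,
# none of the images, scripts, or stylesheets it references.
response = requests.get(url, timeout=10, allow_redirects=False)

print("Status:", response.status_code)        # success or failure of the request
print("Headers:", dict(response.headers))     # the HTTP response headers
print("Body starts with:", response.text[:200])
```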
Fetch and render
Fetches a specified URL in your site, displays the HTTP response and also renders the page according to a specified platform (desktop or smartphone). This operation requests and runs all resources on the page (such as images and scripts). Use this to detect visual differences between how Googlebot sees your page and how a user sees your page.
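As a rough analogy (not the Search Console tool itself), rendering requires an actual browser engine that downloads and runs all page resources. A hedged sketch using the Playwright library, again with a placeholder URL:

```python
from playwright.sync_api import sync_playwright

# Placeholder URL; substitute a page on your own site.
# Requires: pip install playwright && playwright install chromium
url = "https://example.com/some-page"

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Unlike a plain fetch, goto() loads the page in a real browser engine,
    # so images, scripts, and stylesheets are requested and executed.
    response = page.goto(url)
    print("Status:", response.status)

    # Capture how the rendered page actually looks.
    page.screenshot(path="rendered.png", full_page=True)
    browser.close()
```

Comparing the saved screenshot with what a user sees in their own browser mirrors the "visual differences" check described above.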
Crawling
The process of following hyperlinks on the web to discover new content.
Indexing
The process of storing every web page in a vast database.
Web spider
A piece of software designed to carry out the crawling process at scale.
Googlebot
Google's web spider.
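The "vast database" mentioned in the Indexing entry above is typically an inverted index: a map from each word to the pages that contain it. A tiny illustrative sketch, with made-up documents and queries:

```python
from collections import defaultdict

# Made-up documents standing in for crawled pages.
pages = {
    "https://example.com/a": "fresh content ranks well in search",
    "https://example.com/b": "link spam can hurt search rankings",
}

# Build an inverted index: word -> set of URLs containing that word.
inverted_index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        inverted_index[word].add(url)

def search(query):
    """Return pages containing every word in the query (simple AND search)."""
    results = [inverted_index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("search rankings"))   # -> {'https://example.com/b'}
```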
BERT
Initially released in November 2018 and updated in December 2019, this update helps Google understand natural language better.
Vicinity
Put into place in December 2021, Vicinity was Google's biggest local search update in five years. With proximity targeting as a ranking factor, local businesses near the searcher are weighted more heavily in query results.
Google’s freshness algorithm
It is 100% about relevance to users. A fresh web page is relevant to a person searching for the latest information. The freshness algorithm promotes new content when it is trending and of the moment; it is not about promoting recently published content for its own sake, but about promoting trending content.
Google Passage Ranking (It Is Not Passage Indexing)
With this technology, Google is better able to identify and understand key passages on a web page. This helps it surface content that might otherwise not be seen as relevant when considering the page only as a whole.
Passage ranking allows Google to rank specific, relevant passages from a page, not just the page itself (kind of like a souped-up version of Featured Snippets). So instead of Google only taking into account the relevancy of an entire page, it will now also size up the relevancy of a specific section of that page. The practical difference is that a single page now has more chances to rank, assuming the page is optimized and well organized.
Yes, Google will rank passages of your page semi-independently.
SMITH Algorithm
SMITH is a new model for trying to understand entire documents. Models such as BERT are trained to understand words within the context of sentences. In a very simplified description, the SMITH model is trained to understand passages within the context of the entire document.
While algorithms like BERT are trained on data sets to predict randomly hidden words from the context within sentences, the SMITH algorithm is trained to predict what the next block of sentences is.
This kind of training helps the algorithm understand larger documents better than the BERT algorithm, according to the researchers.