What Is the Google Search Engine Algorithm Name?

While it is convenient to speak of “the Google algorithm,” there is no single Google algorithm. Google has tried to explain this time and again. Even if you consider only the ranking system, Google uses many algorithms. If you restrict the discussion further to just the (organic, unpaid) general Web search results, you’re still talking about dozens, possibly hundreds, of small and large algorithms working together to form a complex system.

“Rankings” is a poor concept for modern search engine optimization. People should not rely on ranking reports (search visibility reports, ranking estimates, whatever you want to call them). Google Search Console can only report average position per keyword, or per keyword per page, because any given URL might be excluded or ranked differently every time different people use the same query.

It might be more productive to think in terms of ranking ranges or average position ranges, because your averages will vary over time. Google changes little things every day of the week, one to five times per day. Each of those little changes will affect only a few queries. But there is other stuff going on at the same time.

Every day Google:

  1. Crawls more content (including links)
  2. Filters more crawled content (including links)
  3. Indexes more content (including links)
  4. Drops more content from its index (including links)
  5. Recomputes hundreds or thousands of “signals”

These processes never stop. The so-called “Google search engine algorithm” is a crowd of algorithms. At any given time, or perhaps we should say for any given query, only some of those algorithms are active.

What Happens When the User Types a Query into Bing or Google?

[Image: a conference-style name tag reading “The Google Search Algorithm”.]

People searching for a single Google search algorithm do not understand that the search system consists of many algorithms, and more algorithms are added to the system every year.

No matter how long you’ve been teaching or learning search engine optimization, if you’re not visualizing a complex query processing system responding to every query, you’re missing the big picture.

Even before you finish typing your query, the search engine may be trying to guess what you’re about to type next.
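As a rough illustration of that guessing, autocomplete can be thought of as (at minimum) a popularity-weighted prefix lookup against logged queries. This is a minimal sketch; the query log and counts below are invented, and real suggestion systems are far more sophisticated:

```python
# Minimal sketch of prefix-based query suggestion.
# The query log and counts are invented for illustration.
from collections import Counter

query_log = Counter({
    "google algorithm name": 120,
    "google algorithm update": 450,
    "google analytics login": 900,
})

def suggest(prefix: str, limit: int = 3) -> list[str]:
    """Return the most popular logged queries that start with the prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(suggest("google al"))  # ['google algorithm update', 'google algorithm name']
```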

When you press [ENTER] the search engine analyzes the query. The main query processor hands the query off to one or more sub-processors. They do their work and quickly send back their results.

RankBrain is a query sub-processing algorithm. It analyzes the just-typed query, comparing its semantic structure and possible meaning to a store (call it a database) of previously analyzed and resolved queries. According to what Google told us in 2015, RankBrain’s recommended substitute queries were accepted by the main query processor about 30% of the time. When a RankBrain result is selected, Google saves some time in resolving the query.
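To make that idea concrete, here is a toy sketch of query substitution by vector similarity against previously resolved queries. Google has not published RankBrain’s internals; the vectors, stored queries, and threshold below are all invented:

```python
# Toy sketch of RankBrain-style query substitution: compare the new
# query's vector to vectors of previously resolved queries and reuse
# the closest one if it is similar enough. All values are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

resolved_queries = {
    "cheap flights to paris": [0.9, 0.1, 0.3],
    "best pizza near me":     [0.1, 0.8, 0.2],
}

def substitute(query_vector, threshold=0.95):
    """Return a previously resolved query to reuse, or None."""
    best_query, best_score = None, 0.0
    for query, vector in resolved_queries.items():
        score = cosine(query_vector, vector)
        if score > best_score:
            best_query, best_score = query, score
    return best_query if best_score >= threshold else None

print(substitute([0.88, 0.12, 0.31]))  # close enough: "cheap flights to paris"
```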

If the search engine doesn’t substitute an older query for whatever you just typed in, it still has to select candidate URLs that should match the query and then figure out the order in which to show them to you. Along the way it may inject some other stuff (videos, images, carousels, knowledge graph boxes, similar queries, etc.), and each of those collections of things also has to be ordered. Your search results represent the combined efforts of several, perhaps several dozen, ranking algorithms.

After the search results have been selected and ranked, they may be reranked by what some patents and research papers call a reranking engine. A reranking engine takes the ordered results and suggests an alternate ordering scheme, or it may simply compute a score that is added to the IR scores already computed for the base search results.
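As a sketch of the second pattern, adding a reranking score on top of the base IR scores, consider the following; the results, scores, and the “freshness” signal are invented for illustration:

```python
# Sketch of a reranking pass that adds a secondary score to each
# result's base IR score and re-sorts. All data is invented.
base_results = [
    {"url": "example.com/a", "ir_score": 2.4, "freshness": 0.1},
    {"url": "example.com/b", "ir_score": 2.1, "freshness": 0.9},
]

def rerank(results, weight=0.5):
    """Add a weighted secondary score to each base IR score, then re-sort."""
    for result in results:
        result["final_score"] = result["ir_score"] + weight * result["freshness"]
    return sorted(results, key=lambda r: r["final_score"], reverse=True)

for r in rerank(base_results):
    print(r["url"], round(r["final_score"], 2))
# example.com/b 2.55, then example.com/a 2.45: the fresher page moves up
```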

Search Engine Results Are Always Ordered Mathematically

There is no other way to do this. The computers have to follow some sort of ordinal sequence. You can think of that as a number system (1 comes before 2, 2 comes before 3, etc.) or you can imagine an arbitrary ordinal system (glob comes before qworb, qworb comes before kalrahn, etc.). Either way, some ordinal value must be assigned to every listing in the search results. The SERP may combine multiple ordinated sets of results.

It’s easiest and simplest to assign a mathematical (numeric) score to everything. Figuratively speaking, we can say each URL’s score begins with PageRank. Then the system computes an I(nformation) R(etrieval) score based on the documents’ relevance to the query. Then the system may compute a Situational Relevance score based on the searcher’s meta information for that search session (a sketch at the end of this section shows how these pieces might combine). Situational context could include any of the following (and other stuff):

  • The query prior to the one being resolved
  • The searcher’s browser
  • The searcher’s operating system
  • The searcher’s screen resolution
  • Whether the searcher appears to be moving between geographic locations
  • What language the searcher is using
  • What kind of query the searcher is using
  • The community where the searcher is currently located
  • The brand of the device the searcher is using
  • Whether the searcher is using an ad blocker

All of these things can be quantified in some way and stored in a vector of data associated with you, or with just the current search session (aka “query session”). These quantifications can be represented in any number of ways, such as:

  • A boolean value (yes/no, present/absent, active/inactive)
  • An ordinated value (1 for Windows, 2 for iOS, 3 for Linux, etc.)
  • A vector of values
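To make those three representations concrete, here is a toy encoding of a query session’s context; every field, code, and value below is invented for illustration:

```python
# Toy encoding of situational context for a query session.
# All fields, codes, and values are invented for illustration.
OS_CODES = {"windows": 1, "ios": 2, "linux": 3}  # ordinated values

session = {
    "previous_query": "google algorithm name",
    "os": "ios",
    "uses_ad_blocker": True,
    "location_vector": [40.71, -74.00],  # e.g. latitude/longitude
}

def encode(session):
    """Flatten the session into a numeric feature vector."""
    return [
        1.0 if session["uses_ad_blocker"] else 0.0,  # boolean value
        float(OS_CODES[session["os"]]),              # ordinated value
        *session["location_vector"],                 # vector of values
    ]

print(encode(session))  # [1.0, 2.0, 40.71, -74.0]
```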

Something, somewhere inside the search system quickly assembles this data. Something, somewhere inside the search system quickly evaluates it, scores it, transforms it, discards it, or does something else with or to the information.

Some or all of it may be used to choose what will be shown to you, how those listings will be ordered, and how they will be structured.
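Figuratively, then, this whole section boils down to something like the following sketch: a PageRank-like base value, a query-dependent IR score, and a situational adjustment combine into one number, and sorting on that number produces the ordinal sequence. The weights, URLs, and scores are all invented:

```python
# Figurative composition of a listing's score, as described above:
# a PageRank-like value, plus a query-dependent IR score, plus a
# situational-relevance adjustment. All weights and data are invented.
def final_score(pagerank, ir_score, situational, w=(1.0, 2.0, 0.5)):
    return w[0] * pagerank + w[1] * ir_score + w[2] * situational

listings = {
    "example.com/guide": final_score(0.6, 1.8, 0.4),
    "example.com/news":  final_score(0.3, 1.9, 0.9),
}

# Every listing gets an ordinal position by sorting on its score.
for rank, (url, score) in enumerate(
        sorted(listings.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, url, round(score, 2))
```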

Search Result Listings Are Constructed on the Fly

If you have ever attempted to reconstruct a lost Web page that Bing or Google had cached, then you know you can modify your queries to show successive snippets of text from the page. This is not a 100% reliable method for recovering lost or otherwise obscured text, but I’ve been able to reconstruct very large sections of text that were lost beyond recovery from normal backups, simply because a major search engine had indexed them.

With just slight changes you can run the same query in multiple ways and see different content for the same listings (URLs). These listings are constructed dynamically at run time, so while most Web marketers believe they are controlling or influencing these listings via meta descriptions and structured markup, they have far less control than they believe.

By the same token, the SERP is also constructed on the fly. Some parts of it may be stored, but things change all the time. Even if RankBrain persuades the main query processor to substitute the results from an older, “known” query for whatever you just typed in, what if breaking news (that has just been indexed) is still relevant to this query? Google may inject that breaking news. Similarly, the search engine may inject new social media shares or videos even though it is substituting an older query for the new one.

No matter how much or how little information is actually stored about each query that is a candidate for use as a RankBrain-style substitution, the search system may still alter those results by adding or removing things.
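As a toy illustration of that kind of freshness injection, with invented URLs and an invented cap on how many fresh items get injected:

```python
# Toy sketch: reuse cached results for a known query, but inject
# freshly indexed items at the top. All data is invented.
cached_results = ["example.com/evergreen-guide", "example.com/old-review"]
fresh_results  = ["example.com/breaking-news"]

def assemble_serp(cached, fresh, max_fresh=2):
    """Prepend up to max_fresh newly indexed items to the cached results."""
    return fresh[:max_fresh] + [url for url in cached if url not in fresh]

print(assemble_serp(cached_results, fresh_results))
```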

The Various Signals Are Computed by Different Classes of Algorithms

Bing and Google talk about using “signals”. You almost certainly just say “ranking factors”, but signals come into play long before any rankings are computed. Some signals may be used by the crawlers. For example, the search engine will determine how fast and how much it can crawl your site. To do that it must collect data about your site (number of pages, average response times, best time of day to crawl, size of content, types of content, etc.). All those types of information are “signals” to the algorithms that manage the search engine’s crawl budget.

You don’t manage crawl budget. The search engine manages crawl budget. At best you can influence the budget by making a site superfast, limiting how many URLs it publishes for crawling, and keeping all the content as small as possible.
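To illustrate the kind of heuristic a crawl-budget manager might apply, here is a minimal sketch; the thresholds and the formula are invented, and real crawl scheduling is certainly far more elaborate:

```python
# Sketch of a crawl-budget heuristic: faster sites with smaller pages
# earn a higher crawl rate. All thresholds and formulas are invented.
def crawl_rate(avg_response_ms: float, avg_page_kb: float,
               base_rate: float = 10.0) -> float:
    """Return requests per minute the crawler will allow itself."""
    speed_factor = min(1.0, 200.0 / max(avg_response_ms, 1.0))
    size_factor = min(1.0, 100.0 / max(avg_page_kb, 1.0))
    return base_rate * speed_factor * size_factor

print(crawl_rate(avg_response_ms=150, avg_page_kb=80))   # fast, light site: 10.0
print(crawl_rate(avg_response_ms=900, avg_page_kb=400))  # slow, heavy site: ~0.56
```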

Some URLs are filtered from the crawling process. The search engine may decide not to crawl them for lack of PageRank-like value, because they appear to be duplicates of previously crawled URLs (a predictive process that anticipates faceted navigation), or because the URLs are under penalty. If the server is down, the crawlers could skip over part of their queue, or they may just log a failure to retrieve a URL and move on.

On any given day Bing and Google most likely fail to retrieve hundreds of thousands, if not millions, of perfectly good documents simply because they didn’t (completely) fetch the URLs.

More than one fetch may be required. We know that a crawler, like a Web browser, must issue a separate fetch request for every resource (script, font, stylesheet, image, video, whatever) associated with a page. Documents that rely on JavaScript to serve part of their content will only be partially indexed at first, with the rest captured and indexed later.

The indexing process occurs in stages over days or weeks. Some of the information extracted from Web documents is queued for further processing in an “offline” or “batch” mode. Even if you use the URL Inspection Tool in Google Search Console to submit a page for immediate indexing, you’ll have to wait for some processing to occur at a later time.

The indexing process may include vector-based analysis of the document’s contents by special learning algorithms. These algorithms may be used to identify new signals to be used in the future for selection, crawling, indexing, or ranking. It could be days, weeks, or months before newly identified signals are vetted, approved, and integrated into the search system.
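Here is a toy sketch of that staged model: text from the raw HTML is indexed immediately, while JavaScript-dependent pages are queued for a later rendering pass. The queue, flags, and render function are invented for illustration:

```python
# Toy sketch of staged indexing: text extracted from raw HTML is
# indexed right away, while pages that need JavaScript rendering are
# queued for a later batch pass. All structures are invented.
from collections import deque

index = {}              # url -> indexed text
render_queue = deque()  # urls awaiting the rendering stage

def first_pass(url: str, raw_html_text: str, needs_js: bool):
    """Index whatever the raw HTML provides; defer JS-dependent pages."""
    index[url] = raw_html_text       # partial index from raw HTML
    if needs_js:
        render_queue.append(url)     # finish later, in batch mode

def rendering_pass(render_fn):
    """Later batch stage: replace partial entries with rendered text."""
    while render_queue:
        url = render_queue.popleft()
        index[url] = render_fn(url)

first_pass("example.com/app", "Loading…", needs_js=True)
rendering_pass(lambda url: "Full article text after JS execution")
print(index["example.com/app"])
```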

There Is No Single Algorithm Name

Returning to the title of this article, I pieced it together from several queries people use. I don’t know if anyone actually searches for “what is the Google search engine algorithm name” but they search for variations of that question.

The people searching for a single algorithm don’t understand just how truly complex Web search systems need to be. Baidu, Bing, Google, and Yandex all share one thing in common: the World Wide Web. It consists of trillions of URLs representing hundreds of billions of documents, and collecting, filtering, storing, and managing all that information is a monumental task.

Web marketers should stop talking about “the algorithm”. There is no “the algorithm”. There is no “algorithm update”. Every time you think you see an algorithm update you’re seeing just a part of the picture. The system rarely changes for everyone at the same time but it changes for some people every day, several times a day.

Except for simplistic names like “the Baidu search system”, “the Bing search system”, and “the Google search system” there is no way to name these things.

And Google Just Keeps Making It More Complex

Although I say “Google”, I really mean all the search engines. They are all developing new machine learning systems. And they share a lot of research with each other (through the papers and patents they publish).

The search engines are developing new systems and methods for creating new algorithms. They’re adding, changing, or removing the signals their systems use at a much faster clip than they used to. They can make these new algorithms and signals much more specialized than in previous years.

In other words, the search engines are able to customize their results with far more granularity than you realize. The Web marketing community doesn’t understand (yet) just how flexible the search systems have become. We’ve been discussing these new algorithms with subscribers to the SEO Theory Premium Newsletter. You won’t hear much from me about these algorithms any other way for months.

Meanwhile, you’ll continue to see an unending torrent of “Google algorithm update” sightings. These reports misrepresent what is happening with Google’s search system. While Google unquestionably still makes significant system-wide changes, they have been moving farther away from the “one set of algorithms fits everyone” model for years. That should have been obvious when they said in 2015 that RankBrain affects about 30% of all queries.

Why the People Arguing with Google’s Explanations are Wrong

These marketers dispute nearly everything Googlers say:

  • Google says 301 redirects and 302 redirects are handled the same, marketers say “no”
  • Google says subdomains are treated the same as subfolders, marketers say “no”
  • Google says bad links are detected and ignored, marketers say “no”
  • Google says it doesn’t use CTR to determine rankings, marketers say “yes it does”
  • Google says that it doesn’t use domain/host-level scoring, marketers say “yes it does”
  • and so on …

The pattern is very clear. These people have convinced themselves that Google works a certain way and when contradictory information comes to light they say it must be wrong. After all, the marketers couldn’t possibly be wrong, could they?

You cannot use search engine patents to reverse engineer the search system because A) many of the patents people are using were developed for non-organic search processes (mainly the advertising system) and B) you don’t know which patents describe processes currently in use.

If you want to know how Google’s search system works, you have to take the Googlers’ words at face value. They don’t stand to gain anything from lying to the world about how search engines work. And, frankly, what they say is very consistent with the academic literature published around the world. So maybe a few dozen Web marketers are right and thousands of academics are wrong, but I’m going to bet on the academics being closer to correct than the belligerent Web marketers.

We Don’t Need to Know the Names of Search Engine Algorithms

While it’s cool to look over the shoulders of the engineers when they show us some of the things they do, the Web marketing community takes that information in the wrongest of directions so quickly that I’m not surprised the engineers hesitate to share anything more. It’s better if you accept the general descriptions of the processes at work. Given how much people argue about what those processes do and when they are used, no one will benefit from knowing what the algorithms are called (or why they are named as they are).

The general process is the common ground we all need to understand. The top-ranked Web marketers, the ones most often cited as experts on blogs and forums, are oftentimes the same people arguing against the well-documented process. These people are causing far more problems than they are solving. You don’t know enough about how Google works to tell Googlers that the system really does use CTR for rankings, treats subfolders better than subdomains, and collates “link equity” by domain name.

You really do not know enough to be making those assertions. And when you say these things you make us all look bad.
