Why Google Does Not Use CTR, Bounce Rate, and E-A-T The Way You Think It Does

Most of the listings (well over 90%) in every query's search results never receive any clicks. While people constantly argue that you should rank 1st for your targeted queries, most searchers never click even on the 1st result.

A mock-up cover for an imaginary "Bad SEO Times" magazine promoting false ideas such as CTR, bounce rate, and E-A-T as ranking factors.

Every year many Web marketers embrace bad SEO ideas as readily as tabloid readers buy fake news. We explain why Google does not use CTR, bounce rate, and E-A-T the way some marketers believe.

Search engine marketing isn't an exact science, for a million reasons. Neither is information retrieval, nor Web search in general. Search scientists are still inventing the science even though Web search has been around for more than 20 years.

So why do people insist on spreading and reinforcing false beliefs that are easily debunked? Unfortunately, it's a principle of viral propaganda that the more often a false idea is repeated, the more people will believe it and repeat it in turn. The Internet makes it easy, even simple, for scam artists, charlatans, and posers to spread pseudoscientific theories before anyone casts doubt on them. Your first reaction when you read any great insight or revelation from any SEO blogger or expert should be to say to yourself, "This is complete bullshit!"

You owe it to yourself to require that every attempt to explain how Google works prove itself, not with charts and pictures (which prove nothing), but with clear references and verifiable facts. Even more, you should look for errors of omission, especially the intentional ones. If you're reading anything about the latest Google algorithm update, you should be screaming "THIS IS BULLSHIT!" at the top of your lungs, because unless Google announces a major algorithm update, the probability is better than 99% that whatever you are reading is based on a microcosmically small sample of the Web.

One widely acclaimed SEO prognosticator of algorithmic sensations typically bases each claim on 2-4 analytics charts. Google indexes billions of Websites and processes billions of queries every day. Yet you're convinced there must be a major, massive Google update because someone dug up a handful of analytics reports that possibly match your own data. If you're lucky, you'll find a couple dozen other people who claim to have experienced whatever traffic loss or surge you see in your data.

Falsely identified Google updates are still relatively rare occurrences. Technically, according to Google, they update the search system 2-5 times per day (I assume that is weekdays, when most Googlers are actually expected to be at work). Contrary to what we know about Google work schedules, many Web marketers assume that "updates" are released on weekends and holidays, perhaps to shield the hard-working Googlers from the irate criticism and verbal abuse that is sure to be directed at them. Dozens, sometimes hundreds, of angry Web marketers may rise up against Google in fury. If you assume that the vocal marketers represent about 5% of the affected population, responses from hundreds would be significant; responses from mere dozens of marketers are not such a big event.

Search Results Change Constantly for 4 Reasons

Many years ago I wrote about the Theorem of Four Search Influences. Nothing has changed since that time. Web search continues to work in the same way now as then. There are 4 reasons why your search results change:

  1. You change something on your Website
  2. Someone changes something on another Website
  3. The search engine changes its algorithms
  4. Searchers change what they are searching for

You may include changes in backlink profiles in any of the first three influences. You can change your links, other Websites can change their links, and the search engines may change which links they use and/or how they value those links.

At the end of the day, all anyone has to work with is content and links.

Web Marketers Rarely Think about Lag Times

How long does it take Google to process a Web document once it has been crawled? Google’s URL Inspection Tool is spoiling and misleading Web marketers. You can submit a URL for crawling and it may appear in the index within a few minutes. But what you don’t see is that the document has only been partially indexed. Google will still process that document over time.

How long does it take Google's various internal processes (applications, programs) to parse, index, and analyze the contents of a Web document? It's easy to guess that any 1 program in Google's system needs at most a fraction of a second to do its thing on a single Web document. But each program is part of a complicated assembly line of processes, and each program processes hundreds of thousands or millions of documents every day. The assembly line evaluates the documents with a sort of collective wisdom, sorting them into different sub-processes. Some documents may be flagged for deeper spam analysis. Some documents may be included in more than 1 index.

All this processing takes time. It could be a few days or a few weeks before Google has fully integrated everything it can extract from a single Web document into all the databases and amalgamated scoring systems it uses. Rarely if ever will it all be instantaneous.

Everything has to be added to a queue. Every queue must be processed according to specific rules. Some processes may generate “offline” batches of data that have to be integrated into the scoring and ranking systems later.
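To make the queueing idea concrete, here is a minimal sketch of a staged pipeline in Python. The stage names and delay figures are invented for illustration; they are not a description of Google's actual systems. The point is only that the total lag is the sum of every queue a document must pass through.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical stage names and backlog delays (in hours) -- purely illustrative,
# not a description of any real search engine's pipeline.
STAGES = [
    ("parse", 1),
    ("index_primary", 6),
    ("link_graph_update", 48),
    ("spam_analysis", 24),
    ("scoring_integration", 72),
]

@dataclass
class Document:
    url: str
    hours_elapsed: float = 0.0
    completed: list = field(default_factory=list)

def run_pipeline(doc: Document) -> Document:
    """Push a crawled document through each queued stage in order.

    Each stage adds its backlog delay before the document is done there,
    which is why full integration lags far behind the initial crawl.
    """
    queue = deque(STAGES)
    while queue:
        stage, delay_hours = queue.popleft()
        doc.hours_elapsed += delay_hours
        doc.completed.append(stage)
    return doc

doc = run_pipeline(Document(url="https://example.com/new-article"))
print(f"Fully integrated after ~{doc.hours_elapsed / 24:.1f} days: {doc.completed}")
```

Even with made-up numbers, the lesson is the same: what you see in your referral traffic today reflects processing that finished at different times for different subsystems.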

As a Web marketer you should assume that whatever changes Google makes based on your own backlinks and content began days, weeks, possibly even months before you see the effects in your search referral traffic.

Every conversation you have with your clients or business executives about “how search works” should include a brief disclaimer about lag times. “We don’t know when they began processing what data they have about our Web content, or how long it took. We don’t know how much of what is currently live on our sites is being used by the algorithms.”

There is no SEO tool, method, or analytics report that will tell you how much of your live content is affecting your search results. There is no SEO tool, method, or analytics report that will tell you how much of your search referral traffic is affected by content that is no longer available on the Web.

No matter what happens, when you poke Google today, you have no way of knowing WHY it seems to react to that poke.

Click Data Is the Scariest Thing Web Marketers Believe In

There was a time when most Web marketers thought search engines only cared about links. While that was never true, it was a widely held belief. I called that "waiting for the Link Fairy". That belief in the power of links resulted in many millions of Websites being penalized or delisted.

Today some Web marketers are obsessed with the Click Fairy. Like the Link Fairy, the magical Click Fairy delivers the perfect search results for you all the time, every time, without any consequences.

*=> Search engines collect very little click data

For clicks to fairly influence rankings, a search engine would have to collect click data for every URL in every query. Since searchers do not click on most results for any given query, the search engines cannot collect enough data to use clicks for ranking purposes.

And you’re thinking, “But clicks influence paid search results.” Sure, clicks can be used that way because in paid search – as in advertising – the money is in the clicks. There is no money in organic search results. No one pays Bing or Google a dime to display up to 1,000 results for “how my kids watch tv”. The search engines collect and process a lot of data from the Web just so they can display information no one may click on, and for which no one pays any money.

*=> You can confirm the lack of clicks from your own Google Search Console data

Here is a recent screen capture for one of our Websites. This site receives over half a million visits per year. Google only drives part of the traffic.

A picture of a Google Search Console performance summary chart.

Figure 1: Despite appearing in over 1.5 million queries, this site only earned slightly more than 52,000 clicks in a 3-month period.
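Working the rounded numbers from that caption through a quick calculation (the exact figures in the report differ slightly) shows just how few impressions turn into clicks:

```python
impressions = 1_500_000   # rounded from "over 1.5 million queries" in Figure 1
clicks = 52_000           # rounded from "slightly more than 52,000 clicks"

ctr = clicks / impressions
print(f"Aggregate CTR: {ctr:.1%}")   # roughly 3.5%; about 96-97% of impressions earn no click
```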

You need to drill down into the data from which these kinds of reports are drawn to see just how seldom people click on search results. In this next figure you’ll see that I’ve sorted the data in two ways. Appearing high in the search results for popular queries doesn’t guarantee you’ll receive the clicks you want. (Disclosure: Some of these are “incidental queries” for which we did not create content or intentionally target the queries.)

A picture of two fragments of Google Search Console performance reports shown side-by-side.

Figure 2: This data illustrates how even a popular, well-known, trusted Website is rarely clicked on in search.

That 54% CTR is great to have. Too bad it doesn’t happen on every query where we tend to rank 1st in Google’s search results. Some of our top rankings generate abysmally low click-through rates. The 78% CTR is even better to have, but a much rarer example of high-performance CTR in search.

You don’t have to take my word for it. You should be able to find many queries, even if you must look at page-level data, where your content ranks in the top 5 positions and receives few or no clicks. Ranking well in a query doesn’t guarantee you clicks; being shown at all in a search result offers even less of a promise of referral traffic.
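If you want to check your own site, export the queries report from Search Console and filter it yourself. This is a rough sketch that assumes a CSV export with Query, Clicks, Impressions, and Position columns; the file name and column names are assumptions, so adjust them to match your actual export.

```python
import pandas as pd

# Assumed export file and column names -- change these to match your own download.
df = pd.read_csv("Queries.csv")

# Queries where the site averages a top-5 position yet earns almost no clicks.
well_ranked = df[df["Position"] <= 5]
ignored = well_ranked[well_ranked["Clicks"] <= 2].sort_values("Impressions", ascending=False)

print(f"{len(ignored)} of {len(well_ranked)} top-5 queries earned 2 or fewer clicks")
print(ignored[["Query", "Impressions", "Clicks", "Position"]].head(20))
```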

If almost no one is clicking on a Web page for a popular query, why should Google include that page in the top 10 out of 1,000 listings? Click data isn’t going to provide Google with enough of a signal to do anything useful.

*=> People will click on documents they ignored in previous queries

When you or anyone else changes a query to see different search results while still looking for the same thing, you're more likely to click on something you didn't see before but which appeared lower in the results for a previous query. Research papers obliquely mention the way searchers change their queries over time. Search engineers know to expect these changes.

The changes searchers make to their queries are included in what Information Retrieval specialists call “Situational Context”. Few members of the Web marketing community are familiar with Situational Context. It includes all the information available to the search engine about a query. Search engines have been evaluating Situational Context at least since 2007.

Situational Context may include “time of day”, “geolocation”, “device”, “search history”, “screen resolution”, “browser”, “service provider”, and other incidental data you and I normally don’t think about. When we become frustrated with a search result and type in a new query, the search engine saves a record of that activity as part of the search session. They may go back over this data later and try to figure out what we were looking for.
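To picture what a search engine might bundle into Situational Context, here is a minimal illustrative record built from the items listed above. The field names are my own invention, not a documented Google or Bing schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SituationalContext:
    """Illustrative bundle of per-query context signals (field names are assumptions)."""
    query: str
    timestamp: datetime
    geolocation: Optional[str] = None        # e.g. a city or region
    device: Optional[str] = None             # e.g. "mobile" or "desktop"
    screen_resolution: Optional[str] = None
    browser: Optional[str] = None
    service_provider: Optional[str] = None
    session_queries: list = field(default_factory=list)  # earlier queries in the same session

ctx = SituationalContext(
    query="how my kids watch tv",
    timestamp=datetime(2019, 3, 14, 21, 30),
    geolocation="Seattle, WA",
    device="mobile",
    session_queries=["kids tv habits", "screen time recommendations"],
)
```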

Part of the click-through story is the Abandonment Rate. Searchers sometimes don’t click on anything in the search results. Many Web marketers naively assume that means searchers are not satisfied with the results. The same faulty assumption is made of high bounce rates. Just because people don’t click on search results, or click on multiple listings, doesn’t mean they are dissatisfied with their search experience.

Search engine research papers have cited a range of abandonment rates from about 17% to about 25%. Some SEO studies (of questionable value, in my opinion) also identify similar abandonment rates.
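The arithmetic behind an abandonment rate is trivial: it is simply the share of queries (or sessions) that end with no click at all. A toy example with made-up numbers:

```python
# Toy numbers for illustration only; the 17-25% range above comes from published research.
total_queries = 10_000
queries_with_at_least_one_click = 7_900

abandonment_rate = 1 - queries_with_at_least_one_click / total_queries
print(f"Abandonment rate: {abandonment_rate:.0%}")  # 21%, inside the cited 17-25% range
```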

*=> Click data is highly biased

Researchers have devoted a fair amount of time and resources to analyzing click data across the decades. Here is a paper (PDF) from the mid-2000s titled “Evaluating Web Search Engines Using Clickthrough Data”. The researchers, from Yahoo and the University of Massachusetts-Amherst, tested some algorithms for determining relevance from clicks in a DARPA-funded project. Their introduction included the following paragraph:

The general problem with using clicks as relevance judgments is that clicks are biased. They are biased to the top of the ranking [11], to trusted sites, to attractive abstracts; they are also biased by the type of query and by other things shown on the results page…

[11] T. Joachims, L. A. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of SIGIR, pages 154–161, 2005.

Your click data may also be biased if you try to influence search through clicks. Search engines have known for a very long time not to trust click data. While paid search does take clicks into account, the engines have to devote resources to managing and interpreting that click data. Google removes what it deems to be "invalid clicks" from its data streams so that they don't bias its scoring and ranking algorithms (or cheat advertisers out of their advertising budgets).
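To see how position bias distorts raw click counts, consider a textbook-style correction from the information retrieval literature: weight each click by the inverse of an estimated probability that the result was even examined at its position. The probabilities and documents below are made up for illustration; nothing here is claimed to be how Google or Bing actually process clicks.

```python
# Illustrative position-bias correction (inverse propensity weighting).
# The examination probabilities are invented numbers for the sketch,
# not measured values from any search engine.
examination_prob = {1: 0.70, 2: 0.45, 3: 0.30, 4: 0.20, 5: 0.15}

# (position shown at, clicked?) observations for two hypothetical documents
observations = {
    "doc_a": [(1, True), (1, True), (1, False), (1, True)],   # always shown at #1
    "doc_b": [(4, True), (5, False), (4, True), (5, True)],   # buried lower in the results
}

for doc, events in observations.items():
    raw_clicks = sum(clicked for _, clicked in events)
    debiased = sum(clicked / examination_prob[pos] for pos, clicked in events)
    print(f"{doc}: raw clicks={raw_clicks}, examination-weighted credit={debiased:.1f}")
```

Both documents earn three raw clicks, but the buried one earns them against far longer odds. Even this kind of correction requires reliable examination estimates for every position and result type, which is exactly the data only the search engines themselves possess.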

SEO Click Analysis Studies are Unreliable

All of the SEO industry’s attempts to prove that clicks influence organic search rankings are seriously flawed. The fact they are produced by people who first assumed that CTR was a ranking signal should be a major red flag for everyone. Confirmation bias is only the first problem with these studies. Whereas Bing and Google collect all the click data from their search results, no 3rd-party agency is capable of collecting representative data.

Worse, some of these studies are admittedly biased toward paid click advertising data; and some of them are also admittedly biased toward SEO experiments and high volume queries.

Even if you apply machine learning algorithms to a large quantity of 3rd-party data you still don’t have enough data to ensure you can accurately sample the click graph. And machine learning itself is prone to error simply because sampled data can produce patterns that don’t exist in fully populated data.

Instead of being convinced by these studies that CTR is a ranking signal, you should be asking why these people are blind to the serious flaws in their logic. Worse, why are so many other marketers blinded by the bullshit?

It’s not enough to assume that everyone is failing at critical thinking. It’s never as simple as that. A combination of widespread ignorance (few people are expert enough in these kinds of analyses to do them right), viral propaganda principles at work, and naivete combine to reinforce and perpetuate bad ideas.

Bounce Rate Refuses to Die the Painful Death It Deserves

While you may not have seen any great debates about bounce rate in the past few weeks, people still casually mention it when asking for help in SEO forums. As I explained years ago in "The Late, Great Bounce Rate Debate", your content doesn't have one bounce rate; it has many bounce rates. Some of your bounce rates look better than others. And none of them are reliable indicators of "user intent".
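"Many bounce rates" is easy to demonstrate: compute the metric per segment instead of sitewide. A minimal sketch with invented session data:

```python
from collections import defaultdict

# Invented sessions: (traffic source, pages viewed in the session)
sessions = [
    ("organic", 1), ("organic", 3), ("organic", 1), ("organic", 1),
    ("email", 4), ("email", 2), ("email", 1),
    ("social", 1), ("social", 1), ("social", 1), ("social", 2),
]

totals, bounces = defaultdict(int), defaultdict(int)
for source, pages in sessions:
    totals[source] += 1
    if pages == 1:          # the classic single-page-session definition of a bounce
        bounces[source] += 1

for source in totals:
    print(f"{source}: bounce rate {bounces[source] / totals[source]:.0%}")
```

Same site, same definition of a bounce, three very different numbers.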

In fact, “user intent” is another of those SEO buzz terms that is overused. The Website publisher has precious little information to work with when it comes to identifying and interpreting “user intent”. The search engines are still not very good at it and they have the majority of the data.

Whereas user experience is what the visitor finds on your page (or in the search results), user engagement is merely what the user does. And that engagement doesn’t necessarily reflect their intention. Do they click on a link because they know what they are doing or because they are trying to find out if you have the information they want?

The nature of the query may be sufficient to explain the user’s intention. For example, if you know that every spring thousands of visitors come to your site via “[retailer name] spring shoe sale” their intention is obviously shopping-related. Time to put your best shoe deal forward. But what if they arrive at the same page looking for “reviews of [brand name] shoe”? Does that imply they intend to buy from your company if they like the review? All we know is they are looking for reviews.

The intentions may be clearly interpretable from our side as publishers, but are they actionable as well? Do you want to be the source for the reviews that someone else benefits from? Can you prevent another retailer from capitalizing on the sales your customer reviews led to? Should you even try?

The fact someone leaves a page without taking action isn’t necessarily a bad thing. They may trust your site for information but not for price and after-sales support. Or they may simply not be ready to buy. The bounce rate doesn’t tell you what their state of mind is. Making assumptions about the visitor state of mind is risky at best; doing so on the basis of any kind of bounce rate is just plain bad.

The Bounce Fairy is not your friend. Do yourself a favor and kick it out the door. Let it go sell its dreams to someone else.

As with click-through rates, neither Bing nor Google can collect enough bounce data about Websites to determine anything about their quality or relevance to searcher interests. These are not useful signals and you should pay no attention to them.

How Time Travel Plays a Role in Analytics

Time travel is an important factor in the relationship between visitor and Website, and that includes the visitor-search engine relationship. Time travel occurs when visitors use hidden doors on the Web. A hidden door is just what it sounds like: a page that is hidden from your analytics data. If you have pages on your Website that don't include your analytics tracking code (and you may omit it to comply with GDPR, for example), then when visitors click into those "hidden" pages and come back to a tracked part of your site, they have stepped through a hidden door in your analytics.

Hidden doors break a lot of analytics concepts. They prematurely bring sessions to an end. Of course, arbitrary session windows also prematurely end sessions. Whoever came up with the 10-minute or 30-minute session window understood that some people will remain engaged longer than those times, but subsequent generations of analysts have forgotten to play with the windowing definitions.

A Time Portal is a special kind of hidden door. Imagine you're browsing a Website and have to take a phone call that lasts for an hour. You leave your browser window open. When the call is finished you get up, take a break, and finally come back 15 minutes later to finish browsing the site. To the site's analytics it looks like you just landed again, because it starts collecting new activity data. But all you did was jump through time.
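Here is a minimal sessionization sketch showing how that hour-long interruption splits one real visit into two recorded "sessions". The 30-minute inactivity window is a common analytics default, used here purely for illustration:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)   # common default window; purely illustrative

# Timestamps of tracked pageviews for one visitor: browse, 75-minute interruption, return.
pageviews = [
    datetime(2019, 5, 1, 10, 0),
    datetime(2019, 5, 1, 10, 6),
    datetime(2019, 5, 1, 10, 12),
    datetime(2019, 5, 1, 11, 27),   # back after the phone call and the break
    datetime(2019, 5, 1, 11, 31),
]

sessions = [[pageviews[0]]]
for prev, current in zip(pageviews, pageviews[1:]):
    if current - prev > SESSION_TIMEOUT:
        sessions.append([current])      # gap exceeds the window: a new "session" begins
    else:
        sessions[-1].append(current)

print(f"{len(sessions)} sessions recorded for what was really one visit")
```

Pages missing the tracking code create the same kind of artificial gap, which is why hidden doors and Time Portals break so many session-based metrics.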

Search engines are as vulnerable to time portaling as any other Website. Anything that disrupts the analytical identification of a single search session, inadvertently creating two or more search sessions from the same body of activity, is essentially a Time Portal.

Another aspect of time travel in analytics is the Time Tunnel. That is where the searcher attempts to recreate a past search, hoping to find a result they saw before but didn't bookmark. People use Time Tunnels in ecommerce, possibly more often than they do in general Web search. A Time Tunnel may extend across multiple Websites. You might search for a product on Bing or Google, move on to Amazon, jump over to Walmart, and then go back to the search engines.

Time Tunnels don't always lead us back to where we want to be. If you cannot find what you're looking for in your browser's history, you may never recreate that search path again. The content may have been removed from the Web or changed. Or it could have been dropped from the search index, or the search engine could have changed how it selects and ranks content for queries. Time Tunneling is the ultimate signal of searcher frustration. You, the Website owner, can rarely identify it and probably have no way to fix it.

And Now People are Waiting for the E-A-T Fairy

Expertise, authoritativeness, and trust have become the new SEO myth that leads marketers astray. Introduced in Google’s Quality Rater Guidelines, E-A-T represents the ideal qualities of certain kinds of Websites. They ask their raters to determine who is expert, authoritative, and trustworthy; based on those human determinations, Google wants to know how well their algorithms are choosing and promoting helpful, reliable Websites.

In other words, Google is using human-identified E-A-T to judge the quality of its algorithmic choices. They need judgment from outside the algorithm to grade the algorithm on its performance.

You’ll never see an algorithmic E-A-T signal because that would defeat the purpose of Expertise, Authoritativeness, and Trust.

On the other hand, Googlers have conceded that they want to promote good Websites for their searchers, and those sites should be expert, authoritative, and worthy of trust. Hence, Web marketers are confused.

And now Google has made things even more confusing by trying to explain (in lofty, highly generalized language) how their algorithms approximate their idea of E-A-T. The white paper did not help, Google. Web marketers who believe that Google is scoring sites on E-A-T, or writing algorithms around E-A-T, immediately jumped on the white paper and proclaimed it was proof that Google uses E-A-T as a ranking signal.

And Google does NOT use E-A-T as a ranking signal.

It doesn't matter how often Googlers reject that belief. It doesn't matter how much they debunk it. The same people come back time and again and proclaim their correctness, asserting they finally have proof that the E-A-T Fairy is real and bestowing blessings upon everyone.

Wikipedia alone proves there is no E-A-T ranking signal. Not only is Wikipedia NOT an expert Website, the news media have published many stories about subject-matter experts being driven away from it. Although Wikipedia's community has responded to these criticisms and adjusted its policies over the years to promote better editing, its fundamental principle of letting consensus from the unwashed masses make decisions has led to many potentially good articles being edited into mediocrity. In some cases outright false information persists in Wikipedia articles because the rules against "edit wars" favor the people who are clever enough to revert accurate corrections.

Whereas some Wikipedia articles are very well written, even written by legitimate subject matter experts, they lack authority and trustworthiness because they can be changed. There are mechanisms that allow Wikipedia to protect some articles from being vandalized or hacked. Still, anyone can go in at any time and subtly mangle an influential Wikipedia article, potentially leaving misinformation in place for years to come. It happens more often than people appreciate.

For those reasons, among others, I wrote the article "Why Citations Do Not Make Wikipedia and Other Sites Credible". No amount of rationalization from well-intentioned Googlers and Wikipedians can change the fact that Wikipedia is an unreliable source of information. Every Wiki site is an unreliable source of information. Now, it's simple enough to point to highly politicized and intentionally misrepresentative blogs and say they are not reliable. But Bing and Google try to filter out deliberate misinformation. Their algorithms just don't do a very good job of pushing down questionable content from well-trusted sites like Wikipedia. Wikipedia is the wolf in the fold: a site that should not be trusted, yet has managed to gain the trust of millions of people.

Which is not to say I don’t use Wikipedia myself. But I don’t trust it. I usually check what I find in Wikipedia against other sources of information when I want accuracy. If I just want a quick reference without concern for accuracy, Wikipedia is better than, say, a political blog that shares highly provocative and possibly slanderous memes.

Expertise, Authority, and Trust are easy to spoof. Googlers occasionally have to apologize for their search results. Sometimes people just push some ridiculous nonsense to the top of search results. When that happens, the search engines usually hear about it.

But there are many parody or joke Websites and articles that somehow find their way into the public trust by way of search engines. No one links to these sites as authorities, and yet you'll find them ranking highly in the search results. Is that really a sign that E-A-T is a ranking signal, or does it perhaps reflect the ambiguity of query intentions? Sometimes people WANT to watch Alec Baldwin lampoon Donald Trump.

There is a famous meme, explained in this Snopes article, based on a real picture of 25 timber wolves walking through the snow. The meme deliberately misleads people into believing the wolf pack is led by its old and weak members and followed by a lone alpha. People still search for this meme and ask if it's real. So while you could argue that Snopes is an expert, authoritative, and trusted site, I have many politically conservative friends who distrust Snopes and usually react negatively when the site is mentioned by our liberal associates in whatever political debates are raging. The division in opinion between conservatives and liberals ranges all across the Web. Alex Jones' Infowars Website was highly trusted by conservatives but was finally driven into near obscurity by outrage over its false and misleading, even racist and intentionally hostile, content.

Does liberal thinking really define what is expert, authoritative, or trustworthy on the Web? Academic researchers have increased their scrutiny of how fake news and misinformation campaigns influence people. Their findings are somewhat more critical of conservatives than liberals, but liberals are prone to making errors of judgment, too.

Research shows that people will find ways to support their false beliefs. That is what is happening with E-A-T. No matter how often it is debunked, the E-A-T advocates come back and reinforce their mistaken ideas with new arguments.

E-A-T is threatening to become as big and dangerous a problem as PageRank Sculpting, and people should take that warning very seriously. While redesigning Websites to be more authoritative and trustworthy is a good goal, and creates a better user experience, no one should be doing so on the promise or expectation of improving their site’s relationship with Google.

Final Thoughts

The SEO community has been working together for over 20 years. I am sorely disappointed to see people making the same bad judgments over and over again. Bad information only spreads as far as you’re willing to take it.

Don't LIKE, heart, retweet, or repost an SEO article just because it proclaims a new Google algorithm update. These people see algorithm updates in their own shadows. They never discuss historical data, compare the trends they believe they see to other trends, or accept that their initial ideas about CTR, bounce rates, and E-A-T were wrong. They just plunge into the waters of SEO Bullshit with wild abandon and start spinning new fairy tales.

And at the end of the day we all look bad because of these nonsense theories. All of you need to push back and stop doubting the facts. No matter how excited and exuberant these E-A-T experts are, they don’t know what they are talking about. No matter how much you like them as individuals, you need to stand up and say, “Sorry, that’s been proven wrong time and again. Let’s move on.”
