How to Manage Website Subduction

In geology, subduction is the process by which the edge of one tectonic plate slides beneath another. Subduction is one reason the Earth is considered a geologically active planet. It is part of the plate-tectonic cycle by which continents and islands gradually move across the Earth’s surface.

Website subduction works in a very similar way. Older content is pushed deeper into a Website by newer content. This process unfolds in a number of ways. You’re familiar enough with blog archive pages to realize that they constantly push older Posts down as new Posts are published. But Website subduction occurs in other ways, too.

The most common method of subduction is to remove links to older content from sitewide navigation.

Restructuring site navigation, folding categories into other categories, and the mere act of moving or archiving content all contribute to subduction. When we split SEO Theory across two hosts last year, we changed the direction of subduction for much older content by shifting it over to the s2006 archive. That content has now become fossilized for all intents and purposes. It won’t subduct any further. You could say that archived content is now fully subducted.

Subduction Is Not Destruction

Although removing content from a Website may be a subductive process, the content must go somewhere or it’s not subduction. Whereas in geology subduction is said to destroy lithospheric material, Website subduction only means the content is buried deeper in the site’s structure.

When you destroy content you’re just deleting it from the Web. A geological equivalent would be a disintegrator ray vaporizing vast sections of a crustal plate, leaving nothing but a huge gaping hole in the Earth’s surface.

Even if you resurrect content years later, that’s not subduction because it was simply gone for all that time.

Subducted content is merely buried deep, almost hidden away. In fact, I would consider orphaned content to be subducted unless it achieves (or retains) visibility on its own merits. If old content has many links pointing to it and it ranks well in numerous queries it can take on a life of its own, acting as an island.

Subduction Is a Normal, Healthy Web Process

Subduction may sound scary, but if anyone ever starts selling a Subduction SEO method or tool, it will probably just be another scam like E-A-T optimization, rank checking, and guaranteed first-position rankings.

I can’t imagine a rational way to define subduction SEO but I’m sure someone will think it’s a cool expression. You don’t optimize a Website through subduction. Subduction doesn’t improve or degrade the quality of a Website. Subduction just happens.

Subduction is like link rot, where a percentage of old links go dead every year. The original estimate for link rot put it in the neighborhood of 12-15% per year. I haven’t seen any recent studies on link rot but that still sounds about right.
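For a rough sense of what that decay rate implies over time, here’s a minimal sketch in Python. The 12% and 15% rates are simply the estimates quoted above, not measurements of any real site:

```python
# Rough illustration of compounding link rot. The 12% and 15% annual decay
# rates are the estimates quoted above, not measurements of any real site.
for rate in (0.12, 0.15):
    surviving = 1.0
    for year in range(1, 6):
        surviving *= 1 - rate
        print(f"rate {rate:.0%}, year {year}: {surviving:.1%} of links still alive")
```

At 12% per year, roughly half the links on a page are dead within five to six years.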

Subduction happens more frequently on social media sites. There are two rates of subduction for social media content: account-level and search-level subduction.

In other words, as you publish more Tweets it becomes harder for people to find your old Tweets merely by scanning your timeline. But it also becomes harder for them to find your old Tweets by searching via Twitter’s site search, because so many other people are Tweeting.

Some content is easily found for non-competitive terms but most people Tweet about topics that enjoy at least a brief popularity.

Not All Subduction Is Chronological

Just as Google pushes link-rich content toward the top of its search results, social media platforms like Facebook and Twitter deploy algorithms that push certain posts toward the top of their search results, too.

Rankings are more volatile with Google. Facebook and Twitter still appear to be favoring something closer to the “One Ranking For All” theory of content subduction.

Social subduction is based on some measure of value akin to popularity, but refined by chronological age and possibly by user interests.
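If you wanted to picture that kind of ranking, a toy score might combine popularity with a freshness decay. Everything in this sketch (the function name, the 24-hour half-life) is invented for illustration; no platform publishes its actual formula:

```python
def social_score(engagements: int, age_hours: float, half_life_hours: float = 24.0) -> float:
    # Hypothetical: popularity damped by an exponential freshness decay.
    # The 24-hour half-life is an invented illustration, not any platform's formula.
    freshness = 0.5 ** (age_hours / half_life_hours)
    return engagements * freshness

# A day-old post needs twice the engagement of a brand-new one to score evenly.
print(social_score(200, age_hours=24.0))  # 100.0
print(social_score(100, age_hours=0.0))   # 100.0
```

The example shows a day-old post needing twice the engagement of a brand-new one to score evenly, which is the “refined by chronological age” behavior in miniature.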

Auction-based subduction occurs when rankings are altered because of a bidding process, such as with advertising systems. But not every auction-based prioritization method is about advertising. Unpaid search engine rankings are based on an auction-like process, and some site search tools do something similar.

Any ranking system that integrates multiple scores into a final score uses an auction mechanism. The scores are added or multiplied together to produce a context-final ranking score, which is the “bid” in the auction process.

Search engine optimization seeks to improve the bids for Websites.
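Here’s a minimal sketch of what such a “bid” computation might look like. The signal names and weights are hypothetical; the point is only that several scores collapse into one number that competes against other pages’ numbers:

```python
# Minimal sketch of an auction-like ranking: several per-page scores are
# combined into a single "bid", and pages are ranked by that bid.
# The signal names and weights below are hypothetical.
pages = {
    "/old-guide": {"relevance": 0.9, "link_value": 0.7, "freshness": 0.2},
    "/new-post":  {"relevance": 0.8, "link_value": 0.3, "freshness": 0.9},
}
weights = {"relevance": 0.5, "link_value": 0.3, "freshness": 0.2}

def bid(scores: dict) -> float:
    # Weighted sum of individual signals -> the context-final "bid".
    return sum(weights[k] * v for k, v in scores.items())

ranking = sorted(pages, key=lambda url: bid(pages[url]), reverse=True)
print(ranking)  # ['/old-guide', '/new-post']
```

In this toy model, improving any one signal raises the page’s bid, which is the sense in which SEO “improves the bids.”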

Website Subduction Only Occurs on the Site Itself

If you want to talk about subduction in the search results then please refer to it as SERP subduction, because it is the search results page that is experiencing the subduction, not the Websites moving up and down in the SERPs.

In fact, host subduction would be a more accurate label for the process, but most Websites only publish content on one host. Because SEO Theory is now published on multiple subdomains, this site is experiencing host subduction on the main blog. The s2006 archive isn’t receiving any more content, so it’s not subducting.

But let’s just call it Website subduction because we all know that sooner or later someone is going to misuse the label anyway. Better not to complicate everything too soon.

Now if any of this sounds vaguely familiar, in 2012 I wrote an article titled “Coping with Link Viscosity Inflow and Outflow”. Link viscosity is the tendency of a page to “resist” any change that results in (automatic) removal of an outbound link. I’ll explain how these two concepts (link viscosity and Website subduction) are related below.

Why You Should Manage Website Subduction

If you’re publishing content on a blog-like CMS, where at least some new content is published as Posts that are listed on archive index pages, then you’re dealing with automatic subduction.

If you’re manually adding links to a list and removing them or paginating them, then you’re dealing with manual subduction.

Either way, the subduction can be attenuated by adding more links or listings to the trunk. If you’ve been publishing 10 Posts per index page on your blog, you can attenuate or slow the subduction by doubling the number of Posts (or Post excerpts) on your index pages. In other words, when you increase link viscosity you slow the rate of subduction.
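A quick sketch makes the attenuation concrete. Assume a hypothetical blog with 200 Posts:

```python
import math

def archive_depth(post_position: int, posts_per_page: int) -> int:
    # Which archive index page a post appears on (page 1 holds the newest posts).
    return math.ceil(post_position / posts_per_page)

# The 200th-newest post on a hypothetical blog:
print(archive_depth(200, posts_per_page=10))  # page 20
print(archive_depth(200, posts_per_page=20))  # page 10 -- half the click depth
```

Doubling the Posts per page halves the click depth of the oldest content, which is exactly the attenuation described above.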

I used the word trunk because you can think of a multi-index Website as a kind of information tree. If you have one category that contains more Posts than all the others, you can rebalance the tree by recategorizing some of that content.

All the Posts will still be listed in the main or trunk index.

Long indexes are harder to crawl, take more time to crawl, and spread your site’s PageRank-like value too thinly. And yet the worst thing you can do is add a “noindex” directive to your archive index pages. The search engines will eventually drop those pages from their indexes and may stop crawling the links on them. If they don’t crawl the links, then you lose the PageRank-like value those pages should have been passing through your Website.

While you can create alternative link pathways throughout a Website – and should – removing the index archive pages from search results accelerates subduction.

To put it another way, as Website subduction increases the efficient flow of PageRank-like value decreases.
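Here’s a back-of-the-envelope illustration. The 0.85 damping factor comes from the original PageRank paper; the single-path flow and uniform fan-out are simplifying assumptions, not how any real crawler accounts for value:

```python
def value_at_depth(depth: int, links_per_page: int, damping: float = 0.85) -> float:
    # Value reaching a page buried `depth` clicks below an entry point, if each
    # hop keeps a damping fraction and splits it evenly across the page's links.
    return (damping / links_per_page) ** depth

for depth in (1, 3, 5):
    print(f"depth {depth}: {value_at_depth(depth, links_per_page=10):.2e}")
```

Each extra level of depth multiplies the arriving value by less than a tenth in this model, so burying a page even slightly deeper starves it quickly.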

Once you see how devastating subduction is to the flow of link value throughout a Website, suddenly a lot of slowly decaying rankings and traffic reports begin to make sense. Perfectly good “old articles” don’t necessarily become less relevant or useful to the queries they targeted. They may simply be starved of the PageRank-like value a site once flowed to them.

The search engine algorithms are designed to take your site’s priorities into consideration. The fewer internal (indexable) links you point to a page, the less important that page is going to seem to various algorithms.

How to Offset or Reduce Website Subduction

The first and easiest fix for a site with extensive subduction is to expand the number of links per trunk or category page. Opinions vary on how many links per page is a good number. Everyone has an experiment they’ve run that contradicts someone else’s claims.

Deciding how many links per index page is more challenging on a site with thousands of pages compared to a site with only a few dozen pages.

If you feel like you’ve overloaded your index pages, then splitting them up into multiple sections or categories is your only alternative. I don’t mean paginate the index; I assume you’ve already done that. What I mean is taking some content from a “red shoes” category and moving it to “men’s footwear” or “women’s footwear”.

That will only take you so far. You need to look at how unwieldy page navigation becomes as you add more categories or sections to it. Some people wrongly believe you should flatten the site architecture with a massive sitewide menu. I disagree.

No one is going to click on all those links. You should create a navigational system that takes the context of the visitor’s location in the site into consideration. Just because you can embed 100+ links in the sidebar doesn’t mean you should. Each main section of a site can have its own navigational widgets. On large sites that creates a much better user experience.

But even as you find ways to flatten the site’s architecture across different zones you’re still faced with a dwindling amount of PageRank-like value flowing down to the oldest or deepest content. You can try to compensate by creating a solid grid-like navigational system but then you’re playing with Peanut Butter SEO, where you’re trying to spread PageRank-like value too thinly.
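A toy model shows the trade-off. Using the same assumed damping and fan-out as before, compare how much value reaches the deepest page of a hypothetical 10,000-page site at different index widths:

```python
import math

def deepest_page_value(total_pages: int, links_per_page: int, damping: float = 0.85) -> float:
    # Clicks needed to reach the deepest page through a uniform index tree,
    # then the value surviving that path. All numbers here are toy assumptions.
    depth = math.ceil(math.log(total_pages, links_per_page))
    return (damping / links_per_page) ** depth

for n in (10, 100, 1000):
    print(f"{n:>4} links/page: deepest-page value {deepest_page_value(10_000, n):.2e}")
```

Under these toy numbers there’s a middle ground: narrow indexes bury the deepest pages under too many hops, while enormous ones slice the value too thinly per link.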

Now, one benefit of spreading your peanut butter so far is that different pages (or archive indexes) have different rates of subduction. That is, some pages or archives are more viscous than others. You can buy yourself some time by dividing your content across multiple highly viscous indexes.

You Need Multiple PageRank Entry Points

In a 2007 article about Backlink Theory I described points of entry. On the Web any page or URL acts as a point of entry if a link points to it from another site.

We use points of entry to channel the flow of external PageRank deeper into Websites. And yet, 15 years ago nearly everyone was adamant that you had to get all external links to point to the root page of a Website. People were furiously trying to boost their Toolbar PageRank even though it didn’t matter to the search engine as much as Web marketers believed it would.

In those days it was common for sites to starve their deep pages of PageRank-like value because of their backlink profiles.

When I first built the Xenite.Org domain I brought four previously existing sites together under an umbrella brand. For several months I didn’t even have an index page for the domain’s root URL. You could have just landed there and clicked on a link to one of the four sites.

Each of those sites had already earned links from other sites, and they continued to earn links on their own. Even after I began integrating everything on Xenite into a unified design the sub-sites continued to earn their own links.

When I started learning about search engine optimization the first real challenge for me was to understand why content kept dropping out of Inktomi’s index. They rebuilt it every month and the only pages that seemed to stay in there consistently were those pages that were earning links from other sites. These were the former root URLs for the previous standalone sites, by now just “section roots” or whatever you want to call them.

By the time Google caught my attention around 2000 I understood enough about link value flow to see that Google was rewarding my deeper content better than it was rewarding other people’s deeper content. And the reason was that I encouraged people to link to the deep content, whereas nearly everyone else had a fit if you linked to anything other than the domain’s home page (root URL).

Points of entry have always been important to crawl-based search engines. Every hallway page that AltaVista spammers created served as a point of entry to their sites’ deep content. Hallway pages were really just archive index pages, even though they looked awful.

I would rather that you link to SEO Theory’s Game Theory category root than to the domain root itself if you want to tell your visitors there is a Website that explains how to use game theory for SEO. To me that is as basic a linking strategy as breathing is to living.

Any page that earns inbound links is a point of entry to your site. Whether you buy or build the links yourself doesn’t matter. If there are links pointing to the category root it is a point of entry.

You want the crawlers to find these points of entry via links on the Web. This is why it is so short-sighted to automatically use “noindex” on all your archive pages. It makes sense to blot out the author index or the default category index if you only have 1 author or 1 category. People are not going to link to “admin” or “uncategorized” – at least not by intention or desire.

If you have several writers on your site and one is more popular than the others, that most popular writer is more likely to attract links to his or her profile page. That page should be indexable, and it should point followable links to that person’s articles.

Sometimes all that stands between old content and oblivion is an author’s index archive.

HTML Sitemaps May Help – XML Sitemaps Do Not

There are two differences between HTML Sitemap pages and XML Sitemap files.

First, the HTML Sitemap page should be designed for human use.

Second, only the HTML Sitemap page will flow PageRank-like value to the pages it links to.

Admittedly, modern browsers may render XML Sitemap files as clickable index pages, but they look ugly. And, frankly, you should NOT encourage people to link to XML sitemap files.
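Generating an HTML Sitemap page doesn’t require anything fancy. Here’s a minimal sketch; the URLs, anchor text, and output file name are all placeholders:

```python
from html import escape

# Minimal sketch of generating a human-friendly HTML sitemap page. The URLs,
# anchor text, and output file name are all placeholders.
urls = [
    ("/category/game-theory/", "Game Theory"),
    ("/2012/coping-with-link-viscosity/", "Coping with Link Viscosity"),
]

items = "\n".join(
    f'  <li><a href="{escape(href)}">{escape(text)}</a></li>' for href, text in urls
)
page = (
    "<!DOCTYPE html>\n<html><head><title>Site Map</title></head>\n<body>\n"
    f"<ul>\n{items}\n</ul>\n</body></html>\n"
)

with open("sitemap.html", "w", encoding="utf-8") as f:
    f.write(page)
```

The output is an ordinary page with ordinary anchor tags, which is all it takes for it to flow PageRank-like value.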

If you find nondescript inbound links pointing to XML sitemap files, you should redirect those URLs to crawlable, indexable pages and rebuild the XML sitemaps on new URLs. Now, if someone has deliberately linked to an XML sitemap file as an example of how to create one, accept the link graciously and don’t worry about the PageRank-like value. Try not to think about it. Don’t ruin a user’s landing experience just because you’re afraid you’ve squandered some PageRank-like value.

If someone was happy to link to your XML sitemap file, you must be doing something right. Maybe they’re creating other links for you, too. Look before you redirect.

Search engines may not prefer HTML sitemap pages to XML sitemap files. But then again, they might. You never know.

If a site is so large you’re wondering how the PageRank-like value is flowing through all those pages, that’s a sign the site could benefit from an HTML Sitemap page. People really do visit these pages, and they click on the links they find there.

Conclusion

While everyone is fussing over imaginary E-A-T algorithms and signals, Websites continue to batter themselves in the SERPs by pushing old content away from the points of entry. It’s not that your content needs to be “3 clicks from the home page” or some other amateurish design philosophy. It’s that you need to think about how and why people would or should link to any specific page on a site.

Subduction changes the equation in ways most people don’t realize. By the time they notice the effects of subduction, their search referral traffic patterns have already changed. They often conclude, for the wrong reasons, that their old content has outlived its usefulness.

It’s quite possible that articles you wrote 10 years ago are no longer relevant or useful. If you never link to them again that’s probably a good sign they were not useful enough.

But maybe you should be linking to that old content because it’s still link-worthy. If you’re not going to do that for your site, though, then why should anyone else?

Once you see subduction in action you’ll know that it’s time to do some basic optimization. But the solution is not always as simple as deleting old URLs.
