How and Why We Broke Up SEO Theory Into 2 Sites

SEO Theory is now running on two sites. One is a subdomain. This division of content was necessary for several practical reasons. My decision to archive older content on a subdomain wasn’t based on anything to do with SEO, but there will be some practical SEO benefits from doing so. I don’t want to dwell on the “subdomains versus subfolders” controversy. You can believe whatever you wish. Let’s move on to what we did here and why.

As everyone knows, we’ve been using the same theme for many years. I’ve received more than a few comments about how “old school” the site looked. There is no such thing as a “magic theme” in search engine optimization. When we first moved SEO Theory to its own domain in 2007 we installed a theme called “Just Simple 10”. We didn’t like that, so when we redesigned the site later that year we used the Thesis theme. At the time it was immensely popular with Web marketing specialists and many designers. To be honest, although I liked the way the site looked, I hated the way Thesis did things.

When I left Visible Technologies in late 2010 they allowed me to take SEO Theory with me. In 2011 I redesigned the site and switched the theme to Magazine Basic. Despite a few ups and downs through the years it has proven to be a very reliable, flexible theme. However, when I learned that designer Chris Bavota had sold his theme design business, I realized the theme hadn’t been updated since the new owners took control of the assets. I assume they are leveraging Bavota’s work to sell more of their premium themes (although Magazine Basic has a pro version, too). The lack of updates signified to me that I should begin looking for a new theme.

SEO Theory Has Grown Long in the Tooth

You’d think that swapping themes on “a blog” would be rather simple. My partner Randy Ray and I manage around 200 Websites for ourselves and clients. We swap themes out on 1 or more sites at least once a month. I test new themes every month. Most of them fail to meet my needs for various reasons. But we’ve got a list of 15-20 preferred themes we like to work with.

Out of that list, only 1 theme ever allowed us to do most of what we need to do with our largest Websites (Xenite.Org and SEO Theory). Prior to this weekend’s redesign there were over 1,000 articles on SEO Theory. Xenite has even more content.

I’ve read a lot of Web marketing blogs and presentations about site design and optimization. I’ve never once seen anyone tackle the HERD of elephants in the blog SEO room. That is: what do you do with a lot of content?

For people who manage ecommerce sites 1,000 articles doesn’t sound like much. The largest site I ever consulted on in any capacity had 10 million pages at its peak size. I know how cosmically small 1,000 articles feels when you manage a truly large Website. At one time Xenite.Org had 50,000 pages of content. That was years before WordPress came along and when I finally decided to install WordPress on Xenite in 2011 I deleted 20,000 pages of content.

As Boromir might say, one does not simply publish thousands of pages of content on a WordPress blog.

Yes, it’s been done. I know people who have done it. What they had to do behind the scenes to make it work was awesome, mind-boggling, resource-draining, and a lot of work.

Even so, I occasionally — okay, like 2-3 times per week — see people in online forums complaining about how their older content isn’t working and “is it okay to delete it?”

With 1,000+ articles on SEO Theory, I feel your pain. Actually, most of that old content still receives traffic. I’ve randomly HIDDEN some old posts that were obviously no longer relevant to anything (who needs a review about failed Google-killer “Cuil” in 2019?). I’ve even rewritten a few old articles. But with so much inventory and so little spare time, I’ve had to let all those old posts sit and collect random visits.

Why You Should Not Simply Delete Old Content

I understand where people are coming from when they say, “These old posts don’t get any traffic.” I’m sure I’ve got some posts that no one has looked at in years. Maybe. Okay, maybe I don’t feel your pain. But then I do wonder what you wrote that no one is looking at. It seems like every time I want to find an article from 2019 Google insists on showing me results that are 6-10 years old.

Maybe no one is actually reading these old articles, but are they seeing them in the search results? That’s an important question for anyone who must manage brand visibility. You might be able to leverage that visibility in ways that don’t count clicks as conversions. If nothing else, change the page titles so the branding is more prominent.

Old content may also have attracted once-useful links. While it’s not clear how much those old links help, if people trickle into your site by clicking on old links they will be very disappointed if you delete the content. Even if you redirect to “something relevant” it won’t be whatever they were expecting to find. I’m very reluctant to create a bad user experience just because “no one is reading the page”.

Depending on the topic, once in a blue moon some ancient article becomes highly interesting to the Internet. It attracts new links, visits spike, and the advertising revenue goes up. Well, that happens for us. Your mileage may vary.

There are myriad reasons NOT to delete old content.

Good Reasons to Get Rid of the Cruft

On the other hand, low quality content hurts your SEO. We’ve all learned that lesson the hard way, thanks to Bing and Google’s algorithms. I didn’t rebuild Xenite.Org in 2011 because I had free time and nothing to do. I did it because I wanted to revitalize the site. It did okay. I won’t say no one was looking at the old crap content. Thousands of people were viewing it every month (I have no idea why). It was garbage, randomly aggregated from news feeds (mostly). And keeping the scripts working was a time-consuming pain in the neck. Literally, a pain in the neck. I had just updated 100 Perl scripts when I finally stretched my tired muscles and thought to myself, “Do I REALLY want to do this ever again?” And then I deleted the entire site.

If your company changes business purpose, or you register a previously used domain for a new purpose, there is no real good reason to keep the old content live.

Deleting old stuff makes some people feel good. I consider that to be a valid reason to delete content. If you don’t feel good about your Website then why are you publishing it? If it’s not something you’re proud of, change it.

And some people claim their sites do much better after they delete old content “that no one looks at”. I believe you. Please don’t argue with me in the comments. Stuff happens. But no one should be deleting content “for SEO”. That’s taking a huge gamble. If you weigh all the pros and cons and believe that the aftermath of the mass deletion will be positive, then do it “for SEO”. But don’t do it because some random person in an online forum says it will save your Website from sliding down in Google’s search results.

Basic WordPress Cannot Manage 1,000 Articles Well

The problem with SEO Theory’s inventory was simple: there were too many articles. In order to keep everything in the search indexes (and, technically, I couldn’t) I had to build a lot of internal links to the old content. And I needed sitemap files. And it helped to earn links from other Websites. And yet the search engines continued to “retire” some content.

Years ago I explained that link viscosity is a huge problem for blogs. I opened that article with a simple statement: “Link viscosity is the resistance of a linking Web page to change which results in the removal of an outbound link from the page.” Put another way, link viscosity is a natural effect of pagination: you either take something off a page when you add something new or you make a bigger, longer page. I’ve done it both ways.

Blogs and Web forums use a combination of scrolling and pagination for their basic designs. The blog or forum owner decides how many things (articles, discussions, posts, etc.) are embedded in or linked from an index page. WordPress calls these “Posts pages” (which is really confusing because a blog publishes both “Posts” and “Pages”). I usually refer to “Posts pages” as index archives: main index/archives, author index/archives, category index/archives, day/month/year index/archives, and tag index/archives. That way people know I am talking about the pages that show all the post excerpts (or copies of full posts) in published sequence.

These paginated indexes annoy a lot of marketers. They also annoyed the Core WordPress developers who introduced infinite scroll (a nightmare of a user experience). Infinite scroll screws over your blogs. You really want those paginated indexes.

The thing about index pagination, however, is that older paginated content is pushed deeper and deeper into a Website. Those pages will be crawled less and less frequently. It has nothing to do with “crawl budget” (over which you have no control). It has nothing to do with the age of the content. It’s all about the crawl path leading to the content.

The worst possible crawl path for a paginated index is to use two links: “Older posts” and “Newer Posts”. You’re creating a long, long chain of relatively isolated links. At one point SEO Theory had way over 100 pages in its main index. There was no way I could use a theme that only embedded two index navigation links in every page.
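
To put numbers on that, here is a minimal sketch of the arithmetic, in PHP since this is a WordPress site. The 1,000-post inventory and the old 6-posts-per-page setting come from this article; everything else is illustration.

```php
<?php
// Minimal sketch: how long the "Older posts"/"Newer posts" chain gets when
// prev/next links are the only navigation into the paginated index.
$total_posts    = 1000; // roughly SEO Theory's inventory before the split
$posts_per_page = 6;    // the old Posts-page setting

$index_pages = (int) ceil( $total_posts / $posts_per_page );

// A crawler following only prev/next links needs roughly one hop per index
// page to reach the oldest content.
printf( "Index pages at %d posts/page: %d\n", $posts_per_page, $index_pages );
printf( "Index pages at %d posts/page: %d\n", 12, (int) ceil( $total_posts / 12 ) );
```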

But you don’t want to link to every page in the paginated index from every page. That creates a flat site architecture which also screws sites. Magazine Basic by default linked to the first few and last few URLs in the paginated index. Hence, every page in the index archive received at least 6-8 inbound internal links. The crawlers had something to work with.
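
For what it’s worth, that first-few-plus-last-few pattern is roughly what WordPress’s built-in paginate_links() function produces in a theme template. The arguments below are illustrative, not Magazine Basic’s actual settings.

```php
<?php
// Illustrative sketch for a theme's index template: link the first few pages,
// the last few pages, and the neighbors of the current page, so every page in
// the paginated index receives several inbound internal links.
global $wp_query;
echo paginate_links( array(
    'total'    => $wp_query->max_num_pages,           // number of index pages
    'current'  => max( 1, get_query_var( 'paged' ) ), // page being viewed
    'end_size' => 3, // link the first 3 and last 3 pages
    'mid_size' => 2, // link 2 pages on either side of the current page
) );
```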

But with over 100 paginations in just 1 index archive, you need a lot of crawling to keep things happening. The search engines eventually grow tired (so they say) of digging deeper into a site. Someone once asked Matt Cutts if there was a limit to how deep a crawler would go. He hedged on the answer because it wasn’t a simple “X pages and you’re done” point. If other sites are linking to your deepest, most ancient page that page could receive a lot of crawl without the help of your internal links.

One simple way to reduce the number of pages in an index archive is to increase the number of posts per page. As long as you only publish excerpts this doesn’t create such a bad user experience. But I don’t like creating huge, massive “Posts pages”. I have settled on 12-24 posts per page as a reasonable limit. When SEO Theory had only 6 posts per page I was able to cut the index archives in half by doubling that number to 12 posts per page. (I also implemented redirects from the old paginated URLs to the HTML sitemap page).
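
For the curious, raising the setting can be done from the Reading settings screen or with a small filter like the one below. This is a minimal sketch of the standard pre_get_posts approach, not necessarily how SEO Theory is configured, and the redirects from the retired /page/N/ URLs were handled separately.

```php
<?php
// Minimal sketch: serve 12 posts per paginated index page on the front end,
// leaving admin screens, single posts, and static Pages untouched.
add_action( 'pre_get_posts', function ( $query ) {
    if ( ! is_admin() && $query->is_main_query() && ( $query->is_home() || $query->is_archive() ) ) {
        $query->set( 'posts_per_page', 12 );
    }
} );
```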

But WordPress doesn’t give you many options for managing this much content.

Using Menus to Compensate for Limited Internal Linking is Bad

Some people swear their massive dropdown menus containing dozens or 100+ links are “great for SEO”.  That’s a very hard point to prove. But what is easy to prove is that those kinds of menus create a bad user experience. And it’s even worse if the CSS is so sensitive it activates links before the user has moved the pointer to where they want it to go.

WordPress’ menu system is okay for smaller sites but after you get up to about 100 URLs of content the menus feel very limited. You can install plugins and use shortcodes to create secondary menus, but the WordPress developers really do not get basic large Website design. Every content management system should (by default) provide a native ability to use more than 1 menu per page, to create section-specific menus, and to ensure that menu styling is simple and uniform. We should not have to dig into code or install modules to override default behavior.
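
To be fair, WordPress can expose more than one menu location, but only if the theme (or a child theme) registers them in code, which is exactly the kind of digging I mean. A minimal sketch, with the second location name being purely hypothetical:

```php
<?php
// Minimal sketch: a theme must register extra menu locations before a second,
// section-specific menu can be assigned to them in the admin.
add_action( 'after_setup_theme', function () {
    register_nav_menus( array(
        'primary'     => 'Primary Menu',
        'archive-nav' => 'Archive Section Menu', // hypothetical extra location
    ) );
} );

// A template then prints the extra menu wherever it is wanted:
// wp_nav_menu( array( 'theme_location' => 'archive-nav' ) );
```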

Worse, with Google pushing everyone to a “mobile-friendly” Web (not actually needed for about half of all Web users) and user experience specialists demanding we stop using hamburger menus, people turn to their page sidebars for secondary menus. Your responsive site pushes that sidebar down below the main content. The crawlers may see the links but a lot of people never will even if they read all the way to the end of the article.

Mobile Web design is a disaster in every way imaginable. It’s a horrible user experience in general and all the currently available solutions (especially many pushed by Google via “AMP” and “Stories”) make viewing content on the Web damned near impossible. Perhaps you feel I exaggerate but how many times have you had to flip your phone to a 45-degree angle because the video kept flipping upside down? I stopped counting at a million.

Menus are important to the user experience. They are also important for managing crawl (not “crawl budget” – just plain old unadjectivitizing crawl).

You need a good menu system on a blog to help your visitors and the crawlers “see” there is more content than whatever page they are on. People will click on links that look interesting and useful. Heck, they’ll even click on the ads. Go figure.

I’ve experimented with several alternative menu options in WordPress. None make me happy although Content Aware Sidebars comes close (when it isn’t deactivating its custom/bespoke sidebars after an update). There just aren’t any very good large site navigation solutions for WordPress. If you have the time and budget you can engineer one. I don’t have the time and I’m too cheap to pay someone else to do it for me.

The bottom line here: The more content you publish on a WordPress site, the less useful the menuing and sidebar systems become.

Publishing In-Content Links Helps But Is Inefficient

Many of you install “related posts” plugins in your blogs. I’ve experimented with quite a few myself. They’re awful. They’re all just very bad at picking good, related content.

And so I resorted to publishing hand-picked “See also” or “related” links on this or other sites. But those links must be maintained. Every time I changed Permalink structures I created or added redirects to the site. Up until this weekend SEO Theory was using several hundred redirect chains with at least 3 hops in them.

The good news is that I can safely claim that multi-hop redirect chains don’t impede the crawlers. But I was using those old links as a gauge for how often I’ve updated pages on this site. I never stopped finding links using the 2007 Permalinks. I just couldn’t get to them all even after 12 years of constantly fiddling with the content.

Basic SEO Is All About Crawl

If you forced me to boil every SEO strategy I’ve devised down to 1 critical “can’t succeed without it” element, that would be crawl. Everything on the page must be crawlable. Everything the page links to should be crawlable. Without the crawl you get nothing.

I’ve been writing about crawl management since the 1990s. I’m still writing about basic crawl management for SEO. I laugh and roll my eyes every time I see someone say, “SEO has changed so much after all these years!” No it hasn’t. You just learned more about how to do SEO. The basic stuff is the same.

SEO Theory had become hard to crawl. Well, okay, Bing and Google have enough links that their engineers would probably say, “Oh, we’re doing just fine”. But my point is both search engines were dropping content from their indexes. And that’s not their fault; it’s mine. It’s my job to ensure that good content is found and to encourage the search engines to index it. They weren’t dropping good articles because they were low quality. Well, they shouldn’t be (but I’ll agree the algorithms are not perfect). I knew that SEO Theory had become hard to crawl.

So how do you improve crawl? You do that by breaking up a Website. But that is not as simple as it sounds. You can break up a Website without dividing everything into subfolders or (gasp!) subdomains. A “website” is a theoretical construct – there isn’t a good definition for what constitutes “a website”.

Breaking up a Website consists of creating multiple entry points. The entry points should have, but don’t strictly need, inbound links from other sites. Ideally, if every page earns external links then every page on a site is an entry point. But it’s easier to promote or link to designated entry points on a Website. People rarely think about “how can we get links to all 1,000+ articles”; instead, they tend to think about “how can we get more links to this domain or this linkbait article”. So you naturally designate entry points for your (client) Websites. There’s nothing wrong with that process.

The usual argument against breaking up a Website is that you’ll create more work for yourself. The state stipulates the point, Your Honor.

That said, pruning old content creates an opportunity for you to improve the visibility of that old content if you graft it onto another site. Ideally you want the graft to assume its own brand-quality value. But more importantly, you want the content to be found and crawled by the search engines. Well, I do. Some people have told me they moved old content to archives that couldn’t be crawled to prevent the search engines from finding it.

What We Did On SEO Theory

Randy Ray and I have agreed for a long time that SEO Theory needed a new design. It just needed a theme that could handle the content. Now, if you’ve worked with a theme that easily handles a lot of content, great for you. But every theme I looked at that appeared to be up to the task had one serious flaw: it required a lot of administration. I don’t spend much time configuring any one Website because I have to configure so many of them. So the more work a theme creates for me the less likely I will use it.

I realized earlier this year the only way to get the site back on track would be to archive a lot of the older content. An archive will serve the needs of all that untargeted residual traffic. It gives us an opportunity to promote the SEO Theory Premium Newsletter and our consulting and premium content services. Heck, an archive site promotes the main site, too.

All we needed to do was pick 900 or so old blog posts to move to the archive. That was the easy part. Updating all the old redirects, implementing new redirects, and taking care of other minutiae (all to be detailed in the next newsletter) consumed a lot of my time.

I went with a self-hosted WordPress blog on a subdomain because I wanted to keep the database separate from the main SEO Theory database. Hopefully neither site will ever be hacked but if one is hacked the other may not be. But large databases can also occasionally be corrupted. If we don’t add any content to the archive then that database will require less maintenance. The main database is much smaller than it used to be so maintenance should require less time.

The new theme we chose is called MinimalistBlogger by Superb Themes. We’re using it on at least one other site. Both Randy Ray and I independently thought it would be a good candidate for the new SEO Theory design.

Couldn’t we have done this in a subfolder? Yes. We could have installed WordPress in a dedicated subfolder. We could have created another database. We wouldn’t need another SSL certificate. As far as what the Web browser shows and what the CMS needs, everything you can do in a subdomain you can do in a subfolder, and vice versa.

But the subdomain will be able to take on a life of its own. It will be a living experiment from which I expect to learn interesting things in years to come.

And I have to admit, I love using subdomains to annoy over-confident Web marketers who have convinced themselves subfolders work better than subdomains.

The important thing was to create another Website. We needed to create an entirely new site structure. It’s “new” in the sense that the older, deeper content will be easier for the crawlers to reach in the archive than it was on one site.

Also, by setting up redirects from the main site to the archive we’ll (hopefully) gently nudge the crawlers to find the archive and index it. We haven’t done anything special about removing old URLs. We’re not worried about canonical problems. People who find the old articles in the search results can still click on the links and (assuming I did the 301 redirects correctly) end up on the correct page in the archive.
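
If you’re wondering what such redirects look like, here is a minimal sketch using WordPress’s own hooks. The slug in the map is hypothetical, this isn’t the exact rule set we deployed, and a server-level rewrite would work just as well.

```php
<?php
// Minimal sketch: 301-redirect retired posts on the main site to their copies
// on the archive subdomain. The mapping below is a hypothetical example.
add_action( 'template_redirect', function () {
    $archived = array(
        '/2007/05/example-old-post/' => 'https://a2006.seo-theory.com/2007/05/example-old-post/',
    );
    $path = wp_parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );
    if ( isset( $archived[ $path ] ) ) {
        wp_redirect( $archived[ $path ], 301 ); // permanent redirect
        exit;
    }
} );
```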

We’ll probably do a few more things to assist with crawl in the coming weeks just to ensure the archive is properly indexed in a timely manner.

Meanwhile, we don’t have to struggle so much to get the crawlers to the most important content. The lean SEO Theory now has plenty of room to grow.

About the SEO Theory 2006 Archive

You can search the archive from the main blog. Look in the sidebar and at the bottom you’ll see a search box widget. If you’re reading this article on a mobile phone you’ll have to scroll ALL the way down to the bottom of the page. Sorry.

The archive is located at a2006.seo-theory.com. The subdomain name is meaningful but I admit I chose it because of a recent online discussion where someone asked why some Websites use “www2.domain.tld” host names, and I was sorely tempted to do something like that. People should not agonize over host names. Go with what you’ve got.

For now the archive is still using the Magazine Basic theme. At some point we’ll probably replace it, but I feel we still have time to look for another theme. The archive will always look different from the main site to avoid confusion about where you are. We have no plans to add more content to the archive. When the time comes we’ll create another archive.

And thanks for reading SEO Theory, whether you’re new or coming back after all these years.