How to Analyze Websites Like Michael Martinez

Finding a good marketing forum to help you with the challenges you face isn’t easy. Some forums won’t allow you to post a link to your Website, which makes it virtually impossible to get good advice from the people visiting the forum. Some forums go even further in making life difficult for you: they prevent you from using private messaging and decline to moderate the crude, insulting comments posted by their long-time members. So why bother asking for help in such places?

Because you’re stuck, obviously. And yet, beyond those snobby forums that don’t want your links and heap praise on abusive members, if you look long enough you’ll find forums and Facebook groups where you can share links to your sites. And hopefully the moderators are better at shutting down abuse, although sometimes it’s the admins themselves who are condescending and insulting.

When you find a welcoming group or forum, you should take a few minutes to relax and compose that message asking for help. The first message usually consists of a very vague description of a site, and people have to ask for the URL. I think it would be helpful to draft the message in a text editor first (like WordPad, which I often use to compose lengthy comments for Facebook discussions).

How to Ask for SEO Help in a Truly Supportive Forum or Group

What do people need to know about your site? If you’re concerned about giving away competitive information you’re going to frustrate yourself and the generous people who want to help you. If you’re an SEO provider respecting a client’s privacy, you should state that up front. Everyone understands that non-disclosure exists.

Do you think you just want to ask an SEO question? A lot of requests for help begin with questions like these:

  • Which is better for SEO, subdomains or subfolders?
  • Should I noindex [insert page types or sections]?
  • What is the best way to get links?
  • Was your site affected by the last Google update?
  • Does Google index [insert page types or sections]?

If a question begins with “how do I …” then that may be all the forum members need to know. Someone usually knows how to configure something.

If a question begins with “why does Google …” then that suggests there is an unresolved pain point. Usually when I ask for more information in response to “why does Google …” I don’t need any details about the site in question. Sometimes those details prove helpful, but “why does Google …” questions tend to come after people have concluded they must accept some circumstance beyond their control. They’re ready to move on but they want some closure; they want to understand why things are the way they are.

Some people may genuinely want random opinions, but in most cases, after a follow-up question or two (“what are you trying to do?”), we learn these are really lead-ins for help with specific Websites. I think it would be better if you just explained your situation instead of trying to compress it into one single SEO question.

I think you should provide (or ask the original poster for) a list of details like these:

  • URL (if possible)
  • Niche or industry
  • Whether it’s your site or someone else’s
  • What you do for the site (everything, SEO only, Web design, hosting, etc.)
  • What Content Management System it uses, if any (Blogger, Drupal, Joomla, and WordPress are the most well-known)
  • What kind of hosting you use (dedicated server, shared hosting, scalable cloud hosting account, free hosting)
  • What Web server is running (Apache, Nginx, IIS, other)
  • What specific situation led to the request for help

That’s a short list and it may be more than most questions require.

The First Thing I Do When I Look at Someone’s Site

Regardless of whether I’m on a call with a prospect for SEO services or helping someone in an online discussion group, the first thing I do once I get that URL is load the site in my browser. I want to see how fast it is.

Among the many unnecessary and should-be-avoided anxieties people share in Web marketing today is “how to speed up my Website”. Most Websites are fast enough for Google. I’m quite fed up with people quoting statistics at me that I was among the first to find and share with Web marketers. Yes, I know that reducing page load times by as much as 3 seconds can improve sales. But you’re not Amazon or eBay. You don’t need to shave 0.1 seconds off your page load times after you get them under 3 seconds.

So if I can allay concerns about site speed, I do that as quickly as possible. I’ve had to talk more than a few people off the “my site isn’t fast enough” ledge. Some people just want the fastest sites on the Internet and they are willing to go the extra mile. But for most business owners, once your page loads in 3 seconds or less on a smartphone, you’re about done optimizing for page speed.
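If you want an objective number instead of a gut feel, the browser will give you one. Here is a minimal sketch using the standard Navigation Timing API; paste it into the developer console after a page finishes loading (the numbers vary with connection and cache state):

    // Minimal sketch: read the page's load time from the browser itself.
    // Uses the standard Navigation Timing Level 2 API.
    const [nav] = performance.getEntriesByType("navigation");
    if (nav && nav.loadEventEnd > 0) {
      // loadEventEnd is milliseconds from the start of navigation
      // to the end of the load event.
      console.log("Load time:", (nav.loadEventEnd / 1000).toFixed(2), "seconds");
    }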

*=> What the visitor sees once the page loads is FAR more important.

Unbelievably, most Web design companies today create (and sell) perfectly useless site designs. I am not exaggerating. The more I browse the Web and visit “new” or newly renovated Websites, the less useful these sites become. Just today a Web design firm asked for help in an online discussion with a page speed issue. My first thought, before loading the site in my browser over a fiber optic connection, was that this was probably a waste of time. Sure enough, the page loaded in 2-3 seconds. And then I had to wait for a stupid animated (active) image to start playing.

The active image doesn’t do anything but walk you through a blurb, refreshing the screen several times.

If you scroll down the page you see browser-filling frames that each feature a single pull quote. The pull quotes are the equivalent of “we are the best design company”, “we’re experienced at creating unique experiences”, etc. A couple of the screens looked like they had calls to action but I didn’t click on anything.

Web design in general has slid down into the sewers. If I can’t do anything on your page with what I see as soon as the browser shows me something, your page doesn’t exist to me. Your site is useless. You can expect me to leave – so don’t go asking why your bounce rate is so high in online discussions. I just told you why.

The Other Main Reason I Want to See Your Site

After I have formed an opinion on how fast, slow, or badly designed the site is I use the browser’s “View Source” feature. Some people inspect the DOM (document object model) instead – and I do that too if you pay for my time. When I’m giving free advice I’ll look at your source code first.

Source code often tells me things people don’t realize they should share, including:

  • What CMS the site is published on
  • Whether there are XML/RSS feeds
  • What theme the site is using
  • What SEO plugins or modules the site is using
  • Whether there are “nofollow” attributes on internal navigation
  • Whether there is suspicious code on the page
  • What the site structure looks like (does it include subfolders, subdomains, off-site connections?)

A quick glance through the HEAD section of the HTML code on various pages of a site (usually 3-5 are sufficient, depending on what you click on) tells me enough about whether the site is “noindexing” or “nofollowing” itself, whether the site is using flat architecture, whether it’s publishing “static” HTML content or “dynamic” content via Javascript, and whether it is properly canonicalizing its pages (or using Hreflang and other special markup).
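As an illustration, here is the kind of HEAD markup I’m scanning for; every URL and name below is made up, but the tags themselves are standard:

    <!-- Hypothetical HEAD excerpt; all values are illustrative -->
    <meta name="generator" content="WordPress 6.4">
    <meta name="robots" content="noindex, nofollow">
    <link rel="canonical" href="https://example.com/widgets/">
    <link rel="alternate" hreflang="es" href="https://example.com/es/widgets/">
    <link rel="alternate" type="application/rss+xml" href="https://example.com/feed/">
    <link rel="stylesheet" href="https://example.com/wp-content/themes/acme-theme/style.css">

The generator tag and the theme/plugin paths give away the CMS and theme; the robots meta and canonical link tell me whether the page is deindexing or consolidating itself.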

About half the time I find problems in the code, although many are not very serious. I may not even mention some things I see if I don’t think they are serious.

I also look at “robots.txt” files to see what has been disallowed and/or which user-agents are blocked (if any).
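A hypothetical robots.txt illustrating both kinds of rules, with one directory disallowed for everyone and one crawler blocked from the whole site:

    # Illustrative robots.txt
    User-agent: *
    Disallow: /search/

    User-agent: BadBot
    Disallow: /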

So You Think You Have a Duplicate Content Problem

This topic comes up several times a week. I don’t know what blogs or tutorials these people are reading but “duplicate content” isn’t the SEO problem you think it is. There was a time, 15 years ago, when it caused some issues. But in order to earn a penalty (manual action) for “duplicate content” you’d have to be scraping a LOT of Websites and using their content to spam the index. Even then the search engines are more likely to ignore obvious duplicate content – or rank it below other copies.

The only time “duplicate content” is a real SEO issue for your site is when it’s outranking your own pages.

On the other hand, you may have a user-experience issue. The difference between SEO problems and user-experience problems is simple: the search engines are more forgiving than I (the user) am. They’ll list your duplicate content. I may just give up and go away if I see the same thing over and over again on your site.

If you’re thinking, “but the search engines are scoring our user experience” – yes, I agree. But that scoring determines which pages are shown in a SERP and which pages are shown first.

If you’re publishing a blog and you’re concerned about unnecessary duplicate content my advice is easily summed up:

  • Only publish excerpts for your posts on archive (“Posts”) pages: Author, Category, Date, Tag, and Main indexes
  • Don’t use sub-categories if they are merely folded into their parents
  • Don’t use tags and categories the same way (use tags for keywords and categories for topics)
  • Don’t list every page or tag on your site in your sidebar or footer
  • Don’t list every page and archive on your site in your sitewide menu
  • Write as little boilerplate copy as possible (in main body text, sidebars, and footers)

Follow those 6 points and you shouldn’t have any duplicate content issues.

Technically it’s not necessary to use “canonical” link relations if your intention is to publish unique content. If your site uses faceted navigation then you only need to canonicalize pages that can be indexed by the search engines.
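For example (hypothetical URLs), a faceted URL that search engines can reach and index might declare its base page as canonical:

    <!-- Served on https://example.com/shoes/?color=red&sort=price -->
    <link rel="canonical" href="https://example.com/shoes/">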

On the other hand if you’re defending against scrapers then embedding self-canonical links may help. Older scraper tools didn’t change the canonical relations, so sites built with those tools collapse all their accrued value (if any exists) into their source pages. If you’re really paranoid about scrapers, consider adding a bit of Javascript code to the HEAD section that checks the page’s location and reloads the canonical page into the “_top” context if it’s on the wrong URL.
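Here is a minimal sketch of that defensive snippet, assuming the page declares a self-referencing canonical link; a production version would want to normalize trailing slashes and query strings before comparing:

    <script>
    // Sketch: if this copy of the page is not at its canonical URL,
    // send the top-level browsing context to the canonical page.
    (function () {
      var canonical = document.querySelector('link[rel="canonical"]');
      if (!canonical || !canonical.href) return;
      try {
        if (window.top.location.href !== canonical.href) {
          window.top.location.replace(canonical.href);
        }
      } catch (e) {
        // Reading top.location throws in a cross-origin frame,
        // but navigating it is still permitted.
        window.top.location = canonical.href;
      }
    })();
    </script>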

Having duplicate post excerpts on multiple archives in a blog is not an SEO problem. However, if you only have one author or one category, or if you don’t publish more than one post per month, you create a bad user experience. I often redirect what I feel are superfluous index pages to a primary source.

You Should Not Use “nofollow” and “noindex” On Your Own Pages

Even 10 years after Matt Cutts explained that PageRank Sculpting doesn’t work the way misguided people in the SEO community wanted it to, people are still adding “rel=’nofollow’” attributes to their own internal links. They hope to channel all of whatever PageRank-like value flows to a given page to only their preferred destinations. It doesn’t work that way. It hasn’t worked that way since early 2008.

What Google does is look at all the links on the page, divide the PageRank-like value by that number of links, and then pass value only through links that do NOT use the “nofollow” attribute. You’re not hoarding, preserving, or enhancing anything. You’re literally discarding some of your PageRank-like value when you use “nofollow” attributes.
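A quick worked example with made-up numbers shows the loss. Nofollowing 3 of a page’s 10 links does not concentrate the full value into the other 7; it throws 3 shares away outright:

    // Hypothetical numbers for illustration only.
    var pageValue  = 1.0;  // PageRank-like value the page can pass on
    var totalLinks = 10;   // ALL links count in the division
    var nofollowed = 3;    // links carrying rel="nofollow"

    var perLink = pageValue / totalLinks;              // 0.1 per link
    var passed  = perLink * (totalLinks - nofollowed); // 0.7 passed on
    var lost    = perLink * nofollowed;                // 0.3 simply discarded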

*=> And now people are alarmed by “nofollow” attributes on functional links.

People describe these links with many different phrases; I liked “functional links” when I saw someone use that phrase. These are the links that point to your Login page, Shopping Cart page, your social media accounts, etc. Often you don’t even know these links are using “nofollow”. They are frequently added by theme developers, content management systems, or SEO plugins.
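In source code they look something like this (hypothetical markup):

    <!-- Functional links a theme or plugin may quietly nofollow -->
    <a href="/login/" rel="nofollow">Log in</a>
    <a href="/cart/" rel="nofollow">Shopping Cart</a>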

It’s really not worth worrying about. Search engines like Bing and Google find ways to adjust the value they would or might have assigned to links in the old days. Given a page with 100 links, a modern search engine might value some of those links more than others simply because of the way they are used.

Neither you nor I have any way of knowing when or how that value adjustment happens. Nor do any SEO tools exist that can tell you.

It’s not worth worrying about and I rarely mention these kinds of “nofollow” attributes when I see them (usually I only talk about them when someone else suggests or asks if they are a problem).

*=> Too many SEO plugins are telling you to “noindex” the wrong pages

If you ever write an SEO plugin you’ll run into the same issue the current developers must deal with: you have to choose a default value for most settings. Some, maybe all, SEO plugin developers tend to add “noindex” to archive (or paginated) URLs by default. They’re not telling you this is necessary for SEO, and if that is their intention then they are wrong, plain and simple.

The search engines may not crawl and follow the links on pages using the “noindex” directive. On a blog this could be the kiss of death for your older content, much of which is only linked to by those archive URLs.
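For example, a plugin might emit this on every paginated archive URL (illustrative markup); even with “follow” present, a page that stays noindexed long enough may eventually stop passing its links:

    <!-- Emitted on /category/widgets/page/7/ (hypothetical) -->
    <meta name="robots" content="noindex, follow">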

You need to understand what your blog is doing and make informed decisions about what should be using “noindex”. I apply “noindex” directives to pages throughout the year, but I never allow SEO plugins to make that decision for me.

Some themes also do this. When you first install a theme on a site you should look at its options and view the source code. Understand what it is generating.

Instead of Crawling Websites, Do This

Most people in the SEO industry now run crawlers against Websites. There are many good reasons NOT to do this, especially for an SEO audit. Here are just a few of them:

  • You’re tying up the server
  • SEO crawlers miss orphaned pages
  • It’s an inefficient way to find simple sitewide SEO problems
  • They ruin server-side analytics reports
  • They may not trigger all the Javascript resources they should (if you’re expecting a crawler to do that)

That said, SEO crawlers help you in these ways:

  • They find broken links
  • They check a lot of standard markup
  • They create URL trees

Ideally you want the flexibility to use a crawler, but if I had to choose one way or the other, I would ALWAYS prefer the CMS’ own tools over crawling a site. It takes next to no time to export a list of published URLs from most sites. You’ll get all the orphaned pages that way. And if you review the CMS, theme, and plugin/module settings you’ll see what is “being done for SEO”. You don’t have to wait for a crawler or tie up the client’s server resources.

Working with exported URL lists is more efficient, provides more complete data, and allows you to prioritize what you want to crawl in case you decide you do need to crawl a site.
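As one example of what exporting a URL list can look like, here is a sketch that pulls published post URLs from a WordPress site through its standard REST API. The site URL is hypothetical, and other CMSes have their own equivalents (a database query or the dashboard’s export tool works just as well):

    // Sketch: list published post URLs via the WordPress REST API.
    // Assumes WordPress with the default REST API enabled; Node 18+ for fetch.
    const site = "https://example.com"; // hypothetical site

    async function listPostUrls() {
      const urls = [];
      for (let page = 1; ; page++) {
        const res = await fetch(`${site}/wp-json/wp/v2/posts?per_page=100&page=${page}`);
        if (!res.ok) break; // the API errors out past the last page
        const posts = await res.json();
        if (posts.length === 0) break;
        for (const post of posts) urls.push(post.link);
      }
      return urls;
    }

    listPostUrls().then(urls => urls.forEach(url => console.log(url)));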

Conclusion

We’ve shared many articles in the SEO Theory Premium Newsletter about how to find, diagnose, and solve Website problems. I’m not ashamed to say that I’ve broken a lot of Websites through the years. But sometimes the theme or plugin breaks the site. Sometimes it’s something else entirely. A few years back I kept getting an “ERR_NO_RESPONSE” result when trying to load a page. After hours of rewriting the page and looking for solutions on the Web I stumbled across the problem: I was embedding too much text in a DIV. How do you categorize a problem like that?

When Randy Ray and I begin working on a new client site we go through these steps and others to learn as much as we can about how the site is published. We ask clients for a lot of information. Often they cannot provide everything we ask for. One of the things we hope Reflective Dynamics’ customers take away is a heightened sense of the need to know what their sites are doing.

If you don’t know what is in the HTML code or how the pages are all connected together, your SEO provider should explain those details to you in layman’s terms. Business owners don’t have the time to learn HTML code, but they need to understand what it does for and to their sites.

By the same token, anyone who takes on the role of managing SEO for a site should provide as much information as possible when seeking help. Every site is different and you shouldn’t base your SEO decisions on the most popular responses in Web marketing discussions. Many times the most popular responses are wrong or incomplete.

Want More Articles Like This?

Check these out:

10 Minute SEO: What Everyone Should Do and Almost No One Does

How Your Audits and Backlink Research Fail

Basic Crawl Management for SEO