July 28th, 2011

Google was very explicit about this: having bad pages on your website will damage the reputation of the good stuff. One of the easiest ways to lose the rankings on your good pages is to have them share web space with crap pages.

How Do You Identify Pages That Google Doesn’t Like?

For most people it's common sense. We all have pages on our sites that are very thin on content while at the same time being heavy on links. I know some bloggers are very fond of creating summary posts that link out to dozens of articles on other blogs. These are great for getting people to look at your blog, with the added bonus of maybe a link back. However, unless you are writing a solid 150+ words about each link, Google isn't going to like it. In most cases there is a ratio of unique content to outbound links that needs to be maintained.
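To make that ratio concrete, here's a rough sketch in Python of counting visible words against outbound links on a page. The 150-words-per-link figure comes from the paragraph above; the parsing approach and function names are my own illustration, not anything Google publishes.

```python
from html.parser import HTMLParser


class LinkAndTextCounter(HTMLParser):
    """Counts <a href> links and visible words in an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = 0
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.links += 1

    def handle_data(self, data):
        self.words += len(data.split())


def words_per_link(html):
    """Words of visible text per outbound link; inf if there are no links."""
    parser = LinkAndTextCounter()
    parser.feed(html)
    if parser.links == 0:
        return float("inf")
    return parser.words / parser.links


page = "<p>" + ("word " * 20) + '</p><a href="http://example.com">a link</a>'
print(words_per_link(page))  # 22.0 - far below a 150-words-per-link guideline
```

Treat the number as a smell test, not a rule: a summary post scoring 20 words per link is exactly the kind of page this article is talking about.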

The simplest check I do to see the status of a page is to check its PageRank. If it's been around for a while and still shows as Unranked (not zero), then I take it as given that Google is finding a problem with that page. The next thing is to look at the most recent cache date. If it's more than a week or 10 days old, then again I can see it's not held in much regard by the big G. Finally I look at its index status. If it doesn't appear under the pages listed with a site:www.website.com query, then that's very bad. Even if it appears as indexed, it may not appear under the site:www.website.com/& listing, which strongly suggests it's being treated as a second-class citizen. These techniques, along with a good old-fashioned hunch, are enough to identify the pages that are dragging down your whole website.
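Those manual checks can be written down as a simple checklist. The signals (Toolbar PageRank, cache age, index status) still have to be gathered by hand or by whatever tool you use; this Python sketch just combines the verdicts. The function name, and everything except the 10-day cache threshold from the paragraph above, is my own illustration.

```python
def page_warnings(pagerank, cache_age_days, in_main_index, in_supplemental_check):
    """Combine the manual checks above into a list of warning strings.

    pagerank: toolbar PageRank as an int, or None if it shows Unranked.
    cache_age_days: days since Google's last cached copy, or None if never cached.
    in_main_index: True if the page shows under a site:www.website.com query.
    in_supplemental_check: True if it also shows under site:www.website.com/&.
    """
    warnings = []
    if pagerank is None:
        warnings.append("Unranked (not zero) - Google has a problem with this page")
    if cache_age_days is None or cache_age_days > 10:
        warnings.append("cache older than 10 days - held in low regard")
    if not in_main_index:
        warnings.append("not indexed at all - very bad")
    elif not in_supplemental_check:
        warnings.append("indexed but treated as a second-class citizen")
    return warnings


# An aging page: Unranked, stale cache, indexed but not under site:.../&
print(page_warnings(None, 14, True, False))  # three warnings
```

The more warnings a page racks up, the stronger the hunch that it's one of the pages dragging the site down.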

How To Deal With Problem Pages

The best solution is to add more content to the page. If you can add enough good-quality unique content to a page, then there's every chance Google will treat it with more respect. However, there are times when this just isn't possible or practical.

Can’t I Just Delete The Page?
You could, but there's a chance you could be creating orphan pages in the process. I know it's rare with modern CMSs and sitemaps, but in the good old days it was very easy to leave a page out there on its own: 100% wasted content. Even with WordPress and other CMSs, deleting a page or post from your website isn't a good idea. Not only will you lose any link juice that may be associated with it, but if your website starts returning a 404 error for that page, Google will pick up on it. Bloody hell, it's like treading on eggshells!
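If you have deleted pages in the past, it's worth sweeping your URL list for anything now returning a 404. A minimal Python sketch: the status-fetching function is injected so the example stays self-contained; in real use it would wrap something like `urllib.request.urlopen` with error handling. The URLs and helper names here are purely illustrative.

```python
def find_dead_pages(urls, fetch_status):
    """Return the URLs whose HTTP status code is 404."""
    return [url for url in urls if fetch_status(url) == 404]


# Stubbed example: two live pages and one deleted one.
statuses = {
    "http://www.website.com/": 200,
    "http://www.website.com/good-page": 200,
    "http://www.website.com/deleted-page": 404,
}
print(find_dead_pages(statuses, statuses.get))
# ['http://www.website.com/deleted-page']
```

Anything this turns up is a candidate for a redirect or, as below, for keeping the page alive with a noindex tag instead of deleting it.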

The solution I've used to huge positive effect is the meta robots tag with noindex,follow. This tells the search robots not to index the page (hey Google, I know this ain't the best and I don't want you to rank it) but still instructs them to follow any links from that page. In the last 2 months I've seen it make a massive difference, particularly on my older static sites that were having major problems post-Panda. By not trying to pick up rankings with poorer pages, I've pulled my good-quality pages all the way back up.

How Do I Implement Noindex,Follow?

If you're working with hand-coded web pages then it's simply a case of adding a meta tag similar to this:

<meta name="robots" content="noindex,follow" />
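It's easy to fat-finger that tag, so here's a quick way to confirm it actually made it into a page: parse the HTML and look for a robots meta tag containing "noindex". This Python checker is my own sketch, not part of the article's method.

```python
from html.parser import HTMLParser


class RobotsMetaFinder(HTMLParser):
    """Records the content attribute of the first <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.content = attrs.get("content") or ""


def is_noindexed(html):
    """True if the page carries a robots meta tag with a noindex directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return finder.content is not None and "noindex" in finder.content.lower()


page = '<html><head><meta name="robots" content="noindex,follow" /></head></html>'
print(is_noindexed(page))  # True
```

Run it against the live page after you publish; if it comes back False, the tag isn't where the robots will see it.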

If you're working with WordPress then I'd suggest you get rid of All In One SEO and replace it with Platinum SEO. It's still free, but gives you the option of setting a robots meta tag for each page and post.