You may have read my previous entry on performing a 301 redirect from Blogger to WordPress, where I wrote a script that captures all traffic to your old Blogger site and redirects it to your new WordPress site. Ideally you'd do the 301 redirects from the web server, but unfortunately Blogger doesn't allow that, so you're left with the client-side solution: JavaScript.
First of all, let me briefly clarify my previous post and spell out what the core strengths of my script are.
- A client-side redirect will not work without JavaScript. Pretty obvious, huh? That means search engine crawlers can't follow the redirect. A nasty limitation, to put it simply.
- The strengths of my script are:
  - It works.
  - If you're on Blogger, a client-side solution is the only way. There is no alternative.
  - It captures and redirects all traffic from your old site to your new site.
That being said, let me now expand on these two major points: the limitations of search engine crawlers, and why capturing traffic is important.
Limitations of Search Engine Crawlers
I've explained this in the comments of my earlier post on the 301 redirect from Blogger to WordPress, but I will formalise it a bit here.
As most of you would know, search engine crawlers cannot execute JavaScript, so they can't be redirected to your new website. This sucks SEO-wise for a website migration, since the crawler never learns where your new website is.
There are two ways to do a client-side redirect: the document.location object and the meta refresh tag. Since document.location is purely JavaScript, crawlers will ignore it. The meta refresh tag, however, can actually be followed by crawlers (since it is plain HTML) and seems to be a legitimate way to perform a redirect. I've written a post on how Google and Yahoo see meta refresh redirects.
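To make that concrete, here's the JavaScript flavour as a minimal sketch (example.com is just a placeholder for your new WordPress address):

```html
<script type="text/javascript">
  // A pure JavaScript redirect: crawlers ignore it completely.
  // example.com is a placeholder for your new WordPress site.
  document.location.href = "http://example.com/";
</script>
```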
So what's the problem then, SEO-wise? Well, if you use a static URL in the content value, the redirect itself works fine. The problem is that every page on your old website will redirect to that one static URL, which is neither ideal nor user friendly; in fact, it might be deemed spammy, as this is a technique spammers have traditionally abused!
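For reference, the static version looks like this, and the single hard-coded destination is exactly the problem (example.com is again a placeholder):

```html
<!-- The content value is "seconds; url=destination"; 0 redirects immediately.
     Every page carrying this tag sends visitors to the same single URL. -->
<meta http-equiv="refresh" content="0; url=http://example.com/" />
```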
I have tried using an empty meta placeholder (<meta http-equiv="refresh" content="" />) without JavaScript and then inserting the content value dynamically, but the refresh won't fire: the meta refresh executes as soon as it is rendered, and since it has no value at that point, it won't redirect.
Setting the content value dynamically later on does not trigger the redirect. You have to set the content value before the browser executes the meta refresh, and that is only possible with JavaScript. Therefore, search engine crawlers won't follow the redirect.
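To illustrate, here's a rough sketch of the only workable route: have JavaScript write out the meta tag, complete with its content value, while the page is still being parsed. The domain is a placeholder, and crawlers that can't run JavaScript will of course never see the generated tag at all:

```html
<script type="text/javascript">
  // Build a page-specific destination from the current path,
  // so each old post maps to its counterpart on the new site.
  // example.com is a placeholder for your new WordPress domain.
  var target = "http://example.com" + window.location.pathname;
  // Because the tag is written during parsing, it already has a
  // complete content value by the time the browser acts on it.
  document.write('<meta http-equiv="refresh" content="0; url=' + target + '" />');
</script>
```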
Why is Capturing Traffic Important?
Now I hope you can see why, from a crawler perspective, a client-side redirect sucks. So you're probably wondering: why bother? Why even try to redirect my old site to my new site when crawlers can't follow it, and it therefore contributes nothing to my SEO?
Well, since your old Blogger website has been indexed by search engines and probably holds some decent rankings, you want to capture the traffic driven to your old website and send it to your new one. That way, people won't land on your old, barren website now that it no longer gets updated; they'll end up on your new website, where all your good blog posts still live!
In addition, traffic from your incoming links on other websites is also captured and sent to your new site.
This is a great, effective way to maintain your old site's traffic and readership while you build up and improve your new site. You're not starting from scratch again!
Oh, and by the way, you might want to add <meta name="robots" content="noindex, nofollow"> to your old Blogger site to prevent duplicate content issues.