Sunday, June 7, 2009

Block duplicate content and use rel='canonical' again

I'm posting this to answer a question about blocking duplicate content on Blogger.
OK. So you've tweaked your Blogspot template with some code to try to block duplicate content.

If you want to know whether the code is working or not, just type this into Google search:

inurl:YOUR-SITE "search/label"

for example:

inurl: "search/label"

There you can see that my search/label/ URLs have been crawled by Google, because the code was not used on this blog.

About the code below:

<link expr:href='data:blog.url' rel='canonical'/>

<b:if cond='data:blog.pageType == "item"'>
<link expr:href='data:blog.url' rel='canonical'/>
</b:if>

I have tried using the codes above, but they didn't work; I don't know why.

The following code works well when I use it on my blogs.

<b:if cond='data:blog.pageType == "archive"'>
<meta content='noindex,follow' name='robots'/>
</b:if>
<b:if cond='data:blog.pageType == "index"'>
<b:if cond='data:blog.url != data:blog.homepageUrl'>
<meta content='noindex,follow' name='robots'/>
</b:if>
</b:if>
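If you want to confirm that the conditional actually emits the tag, you can view the source of an archive page and look for the robots meta in its head. As a quick illustration (my own sketch, not part of the template code), here is how a crawler-style check could read it with Python's standard HTML parser, using a hard-coded snippet in place of a live page fetch:

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of every <meta name='robots'> tag it sees."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        # Self-closing tags (<meta .../>) also end up here via handle_startendtag()
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") == "robots":
                self.directives.append(attrs.get("content", ""))

# What an archive page's <head> would contain once the widget code is in place
rendered_head = "<head><meta content='noindex,follow' name='robots'/></head>"

finder = RobotsMetaFinder()
finder.feed(rendered_head)
print(finder.directives)  # ['noindex,follow']
```

If the list comes back empty on a real archive page, the conditional was not triggered and the page is still indexable.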

I don't use the code on this blog, because I don't have any problem with duplicate content; free-7.blogspot is clean, hehe. The exception would be if I take part in an SEO contest again with this blogspot blog.

Do you have problems or questions with "URL restricted by robots.txt"? Below I've copy-pasted a discussion from the Google Webmasters support forum.

- visya
"Google was unable to crawl the URL due to a robots.txt restriction: your robots.txt file might prohibit the Googlebot entirely; it might prohibit access to the directory in which this URL is located; or it might prohibit access to the URL specifically. Often, this is not an error"
Can anybody tell me how I can cancel the robots.txt restriction?

-Top Contributor Webmaster Help
Hey visya,
I guess we are dealing with your blog on blogger*com (blogspot*com)? If so, you don't need to do anything about robots.txt restrictions.

This is your robots.txt:

User-agent: Mediapartners-Google

User-agent: *
Disallow: /search

It only restricts duplicated stuff that not even you want to have indexed. Keep cool -- see that all the restricted URLs have /search/ in their path, as specified in your robots.txt?

They are restricted because they duplicate what is already seen in posts and on the homepage, for example here: seo sadau
You recognize having seen all this before --> on your original posts which are not restricted and thus indexed ;-)

/search/ has to be restricted to avoid duplication in the index, and, as I said, you even want to restrict them for robots for your own sake. This restriction is done by default on blogspot*com (and domains that use blogger like yours, of course) and does not need to be resolved in any way, it's not an error (messages in webmastertools are diagnostic only here).
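You can replay those rules yourself with Python's standard urllib.robotparser and see exactly which crawlers the restriction applies to. This is my own sketch, not from the forum thread; note that I've added the empty Disallow line under the Mediapartners-Google group, which Blogger's actual file carries (an empty Disallow means "allow everything"):

```python
from urllib.robotparser import RobotFileParser

# Blogger's default robots.txt of the era, reproduced inline for the check
ROBOTS_TXT = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Label/search pages are blocked for ordinary crawlers...
print(rp.can_fetch("Googlebot", "http://free-7.blogspot.com/search/label/seo"))      # False
# ...but real post URLs are not...
print(rp.can_fetch("Googlebot", "http://free-7.blogspot.com/2009/06/my-post.html"))  # True
# ...and the AdSense crawler (Mediapartners-Google) is allowed everywhere.
print(rp.can_fetch("Mediapartners-Google", "http://free-7.blogspot.com/search/label/seo"))  # True
```

So the "restricted" messages in Webmaster Tools simply report this intended behavior: only the /search paths are kept out of the index.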

Post a Comment

Please leave your comments or your promotion links, but don't add HTML links in the comment body, because I consider that spam and will delete it.

Thank you for your visit..