Did you know that if Google cannot crawl your robots.txt file, it will stop crawling your whole site?
This doesn't mean you need to have a robots.txt file; you can simply go without one. But if you do have one, and Google knows it exists but cannot access it, then Google will stop crawling your site.
Google's Eric Kuan said this in a Google Webmaster Help thread. He wrote:
If Google is having trouble crawling your robots.txt file, it will stop crawling the rest of your site to prevent it from crawling pages that have been blocked by the robots.txt file. If this isn't happening frequently, then it's probably a one off issue you won't need to worry about. If it's happening frequently or if you're worried, you should consider contacting your hosting or service provider to see if they encountered any issues on the date that you saw the crawl error.
This also doesn't mean you can't block your robots.txt file itself from showing up in the search results; you can. But be careful with that.
In short, if your robots.txt file doesn't return either a 200 or a 404 response code, then you have an issue.
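If you want to spot-check this yourself, here is a minimal sketch of that rule in Python. The function names (`robots_status`, `is_crawl_safe`) are illustrative, not from any Google tool; the logic simply fetches a site's /robots.txt and flags any response code other than 200 (file served) or 404 (no file), since those are the two outcomes described above as safe.

```python
# Hypothetical helper: check whether a site's robots.txt response
# code is one Google can work with (200 = readable, 404 = no file).
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def robots_status(site: str) -> int:
    """Return the HTTP status code for the site's /robots.txt,
    or -1 on a network/DNS failure (which Google would also hit)."""
    url = site.rstrip("/") + "/robots.txt"
    try:
        with urlopen(url, timeout=10) as resp:
            return resp.status
    except HTTPError as e:
        return e.code  # e.g. 403, 500, 503
    except URLError:
        return -1

def is_crawl_safe(status: int) -> bool:
    """200 (file readable) and 404 (no file at all) are both fine;
    anything else, especially 5xx errors, can stall crawling."""
    return status in (200, 404)
```

For example, `is_crawl_safe(robots_status("https://example.com"))` returning False would be a signal to check with your hosting provider, as Kuan suggests.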