August 12, 2010

Robot Exclusion Standard - Robots.txt File Standards

The Robot Exclusion Standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorize and archive web sites, or by webmasters to proofread source code. The standard is unrelated to, but can be used in conjunction with, sitemaps, a robot inclusion standard for websites.

If a site owner wishes to give instructions to web robots, they must place a text file called robots.txt in the root of the web site hierarchy (e.g. www.example.com/robots.txt). This text file should contain the instructions in a specific format (see examples below). Robots that wish to follow the instructions try to fetch this file and read the instructions before fetching any other file from the web site. If this file doesn't exist, web robots assume that the site owner wishes to provide no specific instructions.
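
As a rough illustration of that fetch-then-check behavior, the Python sketch below uses the standard library's urllib.robotparser module. The address www.example.com is the placeholder from the text, and the user agent name "MyCrawler" is hypothetical.

from urllib import robotparser

# Point the parser at the site's robots.txt and fetch it.
rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()  # a missing file is treated as "no restrictions"

# Ask whether a given user agent may fetch a given URL before requesting it.
if rp.can_fetch("MyCrawler", "http://www.example.com/private/page.html"):
    print("robots.txt allows this URL")
else:
    print("robots.txt disallows this URL")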

A robots.txt file on a website functions as a request that specified robots ignore specified files or directories when crawling the site. This might be done, for example, out of a preference for keeping certain pages out of search engine results, a belief that the content of the selected directories is misleading or irrelevant to the categorization of the site as a whole, or a desire that an application operate only on certain data.

For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com had a robots.txt file but a.example.com did not, the rules that apply to example.com would not apply to a.example.com.
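
To illustrate why, the short Python sketch below (using hypothetical URLs) derives the robots.txt location from a page's scheme and host, which is why each subdomain resolves to its own file:

from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    # The robots.txt location depends only on the scheme and host of the page URL.
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("http://example.com/page.html"))    # http://example.com/robots.txt
print(robots_url("http://a.example.com/page.html"))  # http://a.example.com/robots.txt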

Examples
This example allows all robots to visit all files because the wildcard "*" specifies all robots:

User-agent: *
Disallow:

This example keeps all robots out:

User-agent: *
Disallow: /

The next example tells all crawlers not to enter four directories of a website:

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/

Example that tells a specific crawler not to enter one specific directory:

User-agent: BadBot # replace the 'BadBot' with the actual user-agent of the bot
Disallow: /private/

Example that tells all crawlers not to enter one specific file:

User-agent: *
Disallow: /directory/file.html

Note that all other files in the specified directory will be processed.

Example demonstrating how comments can be used:

# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out
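
As a rough way to check how such rules behave, Python's urllib.robotparser can parse the lines of the single-file example above directly; the file names used here are just the ones from that example:

from urllib import robotparser

# Parse the example rules directly from a list of lines.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /directory/file.html",
])

print(rp.can_fetch("*", "http://example.com/directory/file.html"))   # False: the listed file is blocked
print(rp.can_fetch("*", "http://example.com/directory/other.html"))  # True: other files in the directory are allowed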
