The crawler obeys the Robots Exclusion Standard, specifically the 1996 Robots Exclusion Standard (RES).

The crawler obeys the first entry in the robots.txt file that applies to it.

Disallowed documents, including slash “/” (the home page of the site), are not crawled, and links in those documents are not followed. The crawler does read the home page at each site and uses it internally, but if it is disallowed, it is neither indexed nor followed. If robots.txt disallows a page from being crawled, the crawler will not read or use the contents of that page.

Example robots.txt:

User-agent: *
Disallow: /cgi-bin/
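
For illustration, here is a minimal Python sketch that tests URLs against the example file above using the standard urllib.robotparser module. This is not the crawler’s own code, and the example.com URLs are hypothetical:

from urllib import robotparser

# Parse the example robots.txt shown above.
rules = robotparser.RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /cgi-bin/",
])

# Anything under /cgi-bin/ is disallowed; everything else may be crawled.
print(rules.can_fetch("*", "http://example.com/cgi-bin/search.cgi"))  # False
print(rules.can_fetch("*", "http://example.com/index.html"))          # True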

Directives are Case Sensitive
Disallow and Allow robots directives are case sensitive. Use capitalization that matches the paths on your website:

Example of capitalization:

User-agent: *
Disallow: /private
Disallow: /Private
Disallow: /PRIVATE
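
A minimal Python sketch (not the crawler’s actual code) of why all three lines are needed: path matching is a case-sensitive prefix comparison, so each capitalization is a distinct rule. The paths below are hypothetical:

# Case-sensitive prefix matching, as in the example above.
disallowed_prefixes = ["/private", "/Private", "/PRIVATE"]

def is_disallowed(path):
    return any(path.startswith(prefix) for prefix in disallowed_prefixes)

print(is_disallowed("/private/notes.html"))  # True  - matches "/private"
print(is_disallowed("/PriVate/notes.html"))  # False - this capitalization is not listed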

Additional Symbols
Additional symbols allowed in the robots.txt directives include:

‘*’ – matches a sequence of characters
‘$’ – anchors at the end of the URL string

Using Wildcard Match: ‘*’
A ‘*’ in robots directives is used to wildcard match a sequence of characters in your URL. You can use this symbol in any part of the URL string that you provide in the robots directive.

Example of ‘*’:

User-agent: *
Allow: /public*/
Disallow: /*_print*.html
Disallow: /*?sessionid

The robots directives above:

  1. Allow all directories that begin with “public” to be crawled.
    Example: /public_html/ or /public_graphs/
  2. Disallow crawling of files or directories whose paths contain “_print”.
    Example: /card_print.html or /store_print/product.html
  3. Disallow crawling of files with “?sessionid” in their URL string.
    Example: /cart.php?sessionid=342bca31

Note: A trailing ‘*’ is not needed, because the crawler already treats directives as prefix matches.

In the example below, both ‘Disallow’ directives are equivalent:

User-agent: *
Disallow: /private*
Disallow: /private
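
A minimal Python sketch of the ‘*’ behavior described above (one possible implementation, not the crawler’s actual code). Each ‘*’ becomes “.*” in a regular expression, and matching is anchored at the start of the path, which is why a trailing ‘*’ is redundant. The paths below are hypothetical:

import re

def directive_to_regex(directive):
    # Escape regex metacharacters, then let each '*' match any sequence of characters.
    # re.match() anchors at the start, so the directive acts as a prefix.
    return re.compile(".*".join(re.escape(part) for part in directive.split("*")))

print(bool(directive_to_regex("/*_print*.html").match("/store_print/product.html")))  # True
print(bool(directive_to_regex("/private*").match("/private/index.html")))             # True
print(bool(directive_to_regex("/private").match("/private/index.html")))              # True (equivalent)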

Using ‘$’
A ‘$’ in robots directives anchors the match to the end of the URL string. Without this symbol, the crawler treats the directive as a prefix and matches any URL that begins with it.

Example of ‘$’:

User-agent: *
Disallow: /*.gif$
Allow: /*?$

The robots directives above:

  1. Disallow all files ending in ‘.gif’ in your entire site.
    Note: Omitting the ‘$’ would disallow all files containing ‘.gif’ in their file path.
  2. Allow all URLs ending in ‘?’ to be crawled. This does not allow URLs that merely contain ‘?’ somewhere in the string.

Note: The ‘$’ symbol only makes sense at the end of the string. Hence, when the crawler encounters a ‘$’ symbol, it assumes the directive terminates there and any characters after that symbol are ignored.
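
A minimal Python sketch of the ‘$’ behavior described above (one possible implementation, not the crawler’s actual code), extending the wildcard translation: the directive is truncated at the first ‘$’, and the match is anchored to the end of the URL. The paths below are hypothetical:

import re

def directive_to_regex(directive):
    anchored = "$" in directive
    pattern = directive.split("$")[0]  # characters after the first '$' are ignored
    body = ".*".join(re.escape(part) for part in pattern.split("*"))
    # Anchor at the end only when '$' was present; otherwise the directive is a prefix.
    return re.compile(body + ("$" if anchored else ""))

print(bool(directive_to_regex("/*.gif$").match("/images/logo.gif")))     # True  - ends in ".gif"
print(bool(directive_to_regex("/*.gif$").match("/logo.gif?width=100")))  # False - ".gif" is not at the end
print(bool(directive_to_regex("/*?$").match("/cart.php?")))              # True  - ends in "?"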

Using Allow:
The ‘Allow’ directive is supported, as shown in the examples above.
