What are Crawl Directives?

The most common crawl directives are robots meta directives, also known as ‘robots meta tags.’ These tags serve as guidelines for crawlers, telling them how to crawl or index your site.
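For example, a robots meta tag sits inside the <head> section of a page’s HTML. A minimal sketch (the directive values here are purely illustrative):

  <head>
    <!-- Tell crawlers not to index this page or follow its links -->
    <meta name="robots" content="noindex, nofollow">
  </head>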

Another directive is the robots.txt file, a plain-text file that serves a similar purpose to meta tags but applies at the site level. Search engines read these guidelines and, depending on what you want them to do, act accordingly.
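A basic robots.txt file lives at the root of the domain (for example, https://www.example.com/robots.txt) and might look like this; the path and sitemap URL below are placeholders:

  # Apply these rules to all crawlers
  User-agent: *
  # Ask crawlers not to crawl anything under /private/
  Disallow: /private/
  # Point crawlers to the XML sitemap
  Sitemap: https://www.example.com/sitemap.xml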

However, crawlers and bots may not always respond to these instructions. Because the directives are guidelines rather than enforceable rules, some bots may simply ignore them.

Depending on your needs, some of these instructions (in code form) include the following; a combined example appears after the list:

  –  noindex: tells the bots not to index a page.

  –  index: the default instruction; it means the crawler can index the page.

  –  follow (often called ‘dofollow’): tells the crawler to follow the links on the page even if the page itself is not indexed.

  –  nofollow: tells the bot not to follow the links on the page, so they are not considered when indexing.

  –  noimageindex: tells the crawler not to index any images on the page.

  –  none: equivalent to ‘noindex’ and ‘nofollow’ combined.

  –  noarchive: search engines should not show a cached link to the page on a SERP (search engine results page).

  –  nocache: the same as noarchive, but it only works in Internet Explorer and Firefox.

  –  unavailable_after: tells crawlers not to index the page after a specific date.
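As a rough sketch of how these directives combine, a page that should not be indexed, whose links should not be followed, and which should drop out of search results after a given date could carry meta tags like these (the date is illustrative, and the date formats a search engine accepts may vary):

  <head>
    <!-- Combine noindex and nofollow in a single directive -->
    <meta name="robots" content="noindex, nofollow">
    <!-- Stop showing the page in results after this date -->
    <meta name="robots" content="unavailable_after: 2025-12-31">
  </head>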