# robots.txt handling

robots.txt is fetched for each (sub)domain before any of its content is crawled.
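
A minimal sketch of such a pre-crawl fetch, assuming the Gemini protocol (whose default port is 1965). The helper name `fetch_robots_txt` and the trust-on-first-use style certificate handling are illustrative assumptions, not GUS's actual implementation:

```python
# Sketch: fetch gemini://<host>/robots.txt before crawling a (sub)domain.
import socket
import ssl


def fetch_robots_txt(host: str, port: int = 1965, timeout: float = 10.0) -> str | None:
    """Return the body of gemini://<host>/robots.txt, or None if unavailable."""
    context = ssl.create_default_context()
    # Gemini capsules commonly use self-signed certs (TOFU), so skip CA checks.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(f"gemini://{host}/robots.txt\r\n".encode("utf-8"))
            raw = b""
            while chunk := tls.recv(4096):
                raw += chunk

    header, _, body = raw.partition(b"\r\n")
    # A 2x status means success; anything else is treated as "no robots.txt".
    if header.decode("utf-8", "replace").startswith("2"):
        return body.decode("utf-8", "replace")
    return None
```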

GUS honors the following User-agents in robots.txt files (one way to check them is sketched after the list):
* `indexer`
* `*`
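
A hedged sketch of how both groups could be consulted with Python's standard `urllib.robotparser`. Treating a URL as crawlable only when *both* user-agent groups allow it is an assumption about GUS's behavior, not something the parser enforces on its own:

```python
from urllib.robotparser import RobotFileParser

# The two user-agent groups GUS honors.
HONORED_USER_AGENTS = ("indexer", "*")


def is_crawlable(robots_txt: str, url: str) -> bool:
    """Return True only if every honored user-agent group permits the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return all(parser.can_fetch(agent, url) for agent in HONORED_USER_AGENTS)
```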

## robots.txt caching

Every fetched robots.txt is cached for the duration of the current crawl only; the cache is discarded when the crawl ends, so the next crawl fetches fresh copies.
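
A minimal sketch of such per-crawl caching, reusing the hypothetical `fetch_robots_txt` helper from above; the dict-based cache and function names are illustrative assumptions:

```python
# Sketch: cache robots.txt per (sub)domain for the lifetime of one crawl.
robots_cache: dict[str, str | None] = {}


def get_robots_txt(host: str) -> str | None:
    """Return the cached robots.txt for `host`, fetching it at most once per crawl."""
    if host not in robots_cache:
        robots_cache[host] = fetch_robots_txt(host)
    return robots_cache[host]


def start_new_crawl() -> None:
    """Each crawl starts with an empty cache; nothing persists between crawls."""
    robots_cache.clear()
```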