#
# robots.txt
#
# This file is to prevent the crawling and indexing of certain parts
# of your site by web crawlers and spiders run by sites like Yahoo!
# and Google. By telling these "robots" where not to go on your site,
# you save bandwidth and server resources.
#
# This file will be ignored unless it is at the root of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
#
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/robotstxt.html

User-agent: *
Crawl-delay: 10
Sitemap: https://www.archives.gov/sitemap.xml
Sitemap: https://www.archives.gov/files/sitemap.xml
Sitemap: https://www.archives.gov/research/native-americans/bia/photos/sitemap.xml
Disallow: /citizen-archivist/history-hub/hh-test
Disallow: /developer/artificial-intelligence-and-machine-learning-datasets
Disallow: /developer/1940-census
Disallow: /developer/national-archives-catalog-dataset

User-agent: usasearch
Crawl-delay: 2