...
In the case of a private site, you may not want Google or any other search engine to crawl your wiki. Even on a password-protected site, crawlers will still make requests, consuming unnecessary bandwidth and processing power and increasing the amount of memory consumed by Confluence.
Warning |
---|
This content needs to be refined and should in fact link to a detailed article on robots.txt and other ways of preventing crawling. |
If you have been following the steps outlined by the Bonsai Framework, the Confluence configuration will be,
- Fronted by Apache
- The root of the website is located at /home/www.krypton.com/www
- The mapping from Apache to Confluence is /wiki/
As such, robots.txt will be placed in the root of the website and look like this,
Code Block |
---|
User-agent: *
Disallow: /wiki/
|
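To sanity-check these rules before deploying, you can feed the same two lines to Python's standard-library `urllib.robotparser`, which models how a well-behaved crawler interprets robots.txt. This is just a sketch of the rules above; the hostname is the www.krypton.com example from this article.

```python
from urllib.robotparser import RobotFileParser

# The exact rules from the robots.txt above
rules = "User-agent: *\nDisallow: /wiki/\n"

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler may still fetch the site root...
print(parser.can_fetch("*", "http://www.krypton.com/"))       # True
# ...but is told to stay out of everything under /wiki/
print(parser.can_fetch("*", "http://www.krypton.com/wiki/"))  # False
```

Keep in mind robots.txt is advisory only: well-behaved crawlers such as Googlebot honour it, but it is not an access control, which is why the password protection in front of Confluence still matters.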