Method for configuring robots.txt / disabling search engine indexing of the gitlab instance
Summary
Currently, if the GitLab instance deployed with the chart is reachable by the Google bot, the bot can always index the public projects in the instance. I would appreciate a supported method for configuring the chart so that Google (and other crawlers) do not crawl the site.
There is a robots.txt file in the gitlab-org/gitlab project, but that file is not yet configurable through the chart. Alternative methods would be adding X-Robots-Tag headers to responses, or including a <meta name="robots" content="noindex" /> tag in the <head> of each page.
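For illustration, the kind of configuration a chart option might need to emit is small. A robots.txt that blocks all well-behaved crawlers from the whole instance looks like:

```
User-agent: *
Disallow: /
```

Equivalently, the X-Robots-Tag approach could be sketched as an NGINX directive on the ingress or webservice (a hypothetical snippet, not an existing chart setting):

```
# Ask crawlers not to index or follow links on any response
add_header X-Robots-Tag "noindex, nofollow" always;
```

Note that robots.txt only discourages crawling, while X-Robots-Tag / the meta tag explicitly forbid indexing of pages a crawler has already fetched, so the two mechanisms are complementary rather than interchangeable.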
Edited by Matthias van de Meent