Group-based Elasticsearch sharding
Elasticsearch servers can become very large, which reduces performance and makes them harder to maintain. We should support sharding Elasticsearch by group, similar to how we do with Gitaly, but at the group level instead of the project level.
Elasticsearch has native sharding support which may allow this, or we may need to build something additional ourselves.
Each Elasticsearch server indexes the contents of a number of groups, but not necessarily the whole instance. This works like Gitaly: we assign a server to every group (instead of a server to every project, as Gitaly does).
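One way to picture the assignment is a routing table that maps each top-level group to an Elasticsearch server, analogous to Gitaly's per-project storage assignments. A minimal sketch, assuming a static mapping for illustration (the names `GROUP_TO_SERVER`, `DEFAULT_SERVER`, and `server_for` are hypothetical, not an existing API):

```python
# Hypothetical routing table: top-level group -> Elasticsearch server URL.
# In practice this mapping would live in the database, like Gitaly's
# per-project storage assignments, but keyed by group.
GROUP_TO_SERVER = {
    "gitlab-org": "http://es-shard-1:9200",
    "gitlab-com": "http://es-shard-2:9200",
}

DEFAULT_SERVER = "http://es-shard-0:9200"  # fallback for unassigned groups


def server_for(full_path: str) -> str:
    """Return the search server for a project or group path.

    Everything under a top-level group lives on one server, so a search
    scoped to a group (or to any project inside it) hits a single server.
    """
    top_level_group = full_path.split("/", 1)[0]
    return GROUP_TO_SERVER.get(top_level_group, DEFAULT_SERVER)
```

Because the whole group shares one server, a group-scoped search never needs to aggregate results across shards.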
- The size of each Elasticsearch server can be limited, ensuring much better performance and maintainability.
- If an access control bug lets you see more results than you should, the scope of the leak is limited to that server; we can even give some organizations their own search server if needed.
- Because an outage is limited in scope and we can recover quickly (usually by reattaching the network volume, or in rare cases by restoring the index from an object storage backup), we can do without redundancy in the form of a multi-instance cluster, saving complexity and cost.
- A search across all of GitLab.com hits every server that hosts a (public) project, but such searches are rare compared with searches within groups and projects. Initially, instance-wide searches can even keep using the database, as they do today, so we don't need to aggregate results across servers.
- There is a short downtime for the affected groups when a server goes down.
- We have not been able to index GitLab.com because it is too large.
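If we later move instance-wide searches off the database, they would fan out to every shard server and merge the results. A hedged sketch of that aggregation, assuming a per-server `search` callable and a `{"score": ..., "doc": ...}` hit shape (the server list, helper names, and result shape are all illustrative assumptions, not an existing API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical list of per-group shard servers (assumption for illustration).
SERVERS = ["http://es-shard-1:9200", "http://es-shard-2:9200"]


def search_one(server: str, query: str) -> list[dict]:
    """Placeholder for an HTTP search request against one shard server.

    A real implementation would call that server's Elasticsearch _search
    endpoint and return hits as [{"score": float, "doc": ...}, ...].
    """
    raise NotImplementedError


def instance_wide_search(query: str, limit: int = 20,
                         search=search_one) -> list[dict]:
    """Query every shard in parallel and merge the top hits by score."""
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        per_server = pool.map(lambda server: search(server, query), SERVERS)
    merged = [hit for hits in per_server for hit in hits]
    return sorted(merged, key=lambda hit: hit["score"], reverse=True)[:limit]
```

Since these instance-wide searches are rare, the fan-out cost is paid only occasionally, while the common group-scoped search stays on a single server.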
- This idea occurred to me when someone proposed running Elasticsearch for GitLab projects only for now, and I realized that smaller Elasticsearch installations are much easier to run.
Note: I think we should combine everything in a group (and not shard per project) so people can search across a group without hitting multiple servers.