Add a bulk processor for elasticsearch incremental updates
## What does this MR do?

Currently, we store bookkeeping information for the Elasticsearch index in Sidekiq jobs. There are four types of information:
- Backfill indexing for repositories
- Backfill indexing for database records
- Incremental indexing for repositories
- Incremental indexing for database records
The first three use Elasticsearch bulk requests when indexing. The last does not.
This MR introduces a system that uses bulk requests when indexing incremental changes to database records. This is done by adding the bookkeeping information to a Redis ZSET, rather than enqueuing a Sidekiq job for each change. A Sidekiq cron worker takes batches from the ZSET and submits them to Elasticsearch via the bulk API.
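A minimal in-memory sketch of this flow (the class and method names here are illustrative, not the actual implementation; the real version would use Redis `ZADD` and range/removal commands rather than a Ruby `Hash`):

```ruby
# Illustrative model of the ZSET-based bookkeeping. In production this
# would be a real Redis sorted set; a Hash stands in here so the sketch
# is self-contained. Member = a serialized reference to the changed
# record, score = the time the change was enqueued.
class BookkeepingQueue
  def initialize
    @zset = {} # member => score, like a Redis ZSET
  end

  # Equivalent of ZADD: re-adding an existing member only updates its
  # score, so repeated changes to one record collapse into one entry.
  def track(klass, id, score = Time.now.to_f)
    @zset["#{klass}:#{id}"] = score
  end

  # Take the N lowest-scored members (the oldest changes) and remove
  # them, as the cron worker would before issuing a bulk request.
  def pop_batch(limit)
    batch = @zset.sort_by { |_member, score| score }.first(limit).map(&:first)
    batch.each { |member| @zset.delete(member) }
    batch
  end

  def size
    @zset.size
  end
end
```

For example, tracking the same record twice and then draining leaves a single entry for it in the batch:

```ruby
queue = BookkeepingQueue.new
queue.track('Issue', 1, 1.0)
queue.track('Issue', 1, 2.0) # duplicate change, deduplicated
queue.track('Note', 7, 3.0)
queue.pop_batch(10) # => ["Issue:1", "Note:7"]
```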
This slightly increases indexing latency, but reduces the cost of indexing, both in terms of the load on Elasticsearch and the size of the bookkeeping information.
Since we're using a ZSET, we also get deduplication of work for free.
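For reference, each batch drained from the ZSET becomes a single Elasticsearch `_bulk` request, whose body is newline-delimited JSON: an action/metadata line followed by a source line per indexed document. A sketch of building that payload (the index name and document shapes are illustrative only):

```ruby
require 'json'

# Build the newline-delimited body for the Elasticsearch _bulk API.
# Each change contributes two lines: an action/metadata line and the
# document source. The index name here is a placeholder.
def bulk_payload(changes)
  changes.flat_map do |change|
    action = { index: { _index: 'gitlab-production', _id: change[:id] } }
    [action.to_json, change[:doc].to_json]
  end.join("\n") + "\n" # the bulk API requires a trailing newline
end

payload = bulk_payload([
  { id: 'issue_1', doc: { title: 'Fix login' } },
  { id: 'note_7',  doc: { note: 'LGTM' } }
])
```

Submitting many documents in one request like this is what makes the batched approach cheaper than one Sidekiq job (and one indexing request) per change.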
## Does this MR meet the acceptance criteria?

- [ ] Changelog entry
- [ ] Documentation (if required)
- [ ] Code review guidelines
- [ ] Merge request performance guidelines
- [ ] Style guides
- [ ] Database guides
- [ ] Separation of EE specific content
### Availability and Testing

- [ ] Review and add/update tests for this feature/bug. Consider all test levels. See the Test Planning Process.
- [ ] Tested in all supported browsers
- [ ] Informed Infrastructure department of a default or new setting change, if applicable per definition of done
If this MR contains changes to processing or storing of credentials or tokens, authorization and authentication methods and other items described in the security review guidelines:
- [ ] Label as security and @ mention
- [ ] The MR includes necessary changes to maintain consistency between UI, API, email, or other methods
- [ ] Security reports checked/validated by a reviewer from the AppSec team
Closes #34086