Add a bulk processor for elasticsearch incremental updates

What does this MR do?

Currently, we store bookkeeping information for the Elasticsearch index in Sidekiq jobs. There are four types of information:

  • Backfill indexing for repositories
  • Backfill indexing for database records
  • Incremental indexing for repositories
  • Incremental indexing for database records

The first three use Elasticsearch bulk requests when indexing. The last does not.

This MR introduces a system that uses bulk requests when indexing incremental changes to database records. This is done by adding the bookkeeping information to a Redis ZSET, rather than enqueuing a Sidekiq job for each change. A Sidekiq cron worker takes batches from the ZSET and submits them to Elasticsearch via the bulk API.
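The flow above can be sketched in plain Ruby, with a Hash standing in for the Redis ZSET (member → score). This is an illustrative model, not the actual GitLab classes: the names `BulkQueue`, `track!`, and `execute` are made up here, and a real implementation would use Redis `ZADD`, `ZRANGEBYSCORE`, and `ZREM` instead of a Hash.

```ruby
require 'json'

# Minimal model of the incremental-update bookkeeping queue. The Hash maps
# each serialized record reference (the ZSET member) to an insertion score,
# mimicking a Redis sorted set.
class BulkQueue
  BATCH_SIZE = 2 # tiny for illustration; production batches are much larger

  def initialize
    @zset = {}  # member => score
    @score = 0
  end

  # Called instead of enqueuing a Sidekiq job per change (like ZADD).
  # Re-tracking an already-queued record does not grow the set.
  def track!(record)
    member = JSON.generate(type: record[:type], id: record[:id])
    @zset[member] = (@score += 1) unless @zset.key?(member)
  end

  # Called by the cron worker: take the oldest batch (like ZRANGEBYSCORE
  # with a LIMIT), build a single bulk request body, then remove the
  # processed members (like ZREM).
  def execute
    batch = @zset.keys.sort_by { |m| @zset[m] }.first(BATCH_SIZE)
    payload = batch.map { |m| { index: JSON.parse(m) } } # bulk API body sketch
    batch.each { |m| @zset.delete(m) }
    payload
  end

  def size
    @zset.size
  end
end
```

One cron tick then drains up to `BATCH_SIZE` members as one bulk request, rather than one Elasticsearch request per change.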

This makes indexing slightly less responsive, but reduces its cost, both in the load placed on Elasticsearch and in the size of the bookkeeping information.

Since we're using a ZSET, we also get deduplication of work for free.
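The deduplication falls out of sorted-set semantics: adding a member that already exists updates its score instead of creating a second entry. A minimal sketch of that behavior, again modeled with a Hash (the member strings here are hypothetical record identifiers):

```ruby
# Model of Redis ZADD semantics: re-adding an existing member replaces its
# score rather than growing the set, so N changes to one record become one
# unit of indexing work.
zset = {}
zadd = lambda do |score, member|
  added = !zset.key?(member) # ZADD returns the number of *new* members
  zset[member] = score
  added
end

zadd.call(1, 'Issue:42')  # new member
zadd.call(2, 'Issue:42')  # same record touched again: score updated, no new entry
zadd.call(3, 'Note:7')

# Only two distinct pieces of work remain, however many times each
# record changed before the cron worker next runs.
```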

Screenshots

Does this MR meet the acceptance criteria?

Conformity

Availability and Testing

Security

If this MR contains changes to processing or storing of credentials or tokens, authorization and authentication methods and other items described in the security review guidelines:

  • Label as security and @ mention @gitlab-com/gl-security/appsec
  • The MR includes necessary changes to maintain consistency between UI, API, email, or other methods
  • Security reports checked/validated by a reviewer from the AppSec team

Closes #34086 (closed)

Edited by Nick Thomas
