Add concurrency to S3 bulk deletion

Context

This change is part of !24 (merged) and related to #10 (closed).

This change enhances the work done on !31 (merged) by leveraging concurrency.

Rationale

In !31 (merged) we made it possible to delete files in bulk, sending a single delete request to S3 for each batch of up to 1000 files to delete.

This MR proposes sending multiple delete requests concurrently instead of sequentially, wrapping them in goroutines. This should translate into significant performance improvements for large data sets with thousands of files being deleted during garbage collection.
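A minimal sketch of the idea, assuming the AWS SDK for Go (aws-sdk-go) and a hypothetical `deleteBatchesConcurrently` helper; the registry's actual implementation goes through its own storage driver abstraction, so names and error handling here are illustrative only:

```go
package storage

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"golang.org/x/sync/errgroup"
)

// maxKeysPerRequest is the S3 limit for a single DeleteObjects call.
const maxKeysPerRequest = 1000

// deleteBatchesConcurrently splits keys into batches of up to 1000 and issues
// one DeleteObjects request per batch, each in its own goroutine, instead of
// sending the requests sequentially. (Hypothetical helper for illustration.)
func deleteBatchesConcurrently(svc *s3.S3, bucket string, keys []string) error {
	var g errgroup.Group
	// In practice the number of in-flight requests would likely be bounded,
	// e.g. via errgroup's SetLimit or a semaphore channel.

	for start := 0; start < len(keys); start += maxKeysPerRequest {
		end := start + maxKeysPerRequest
		if end > len(keys) {
			end = len(keys)
		}
		batch := keys[start:end]

		g.Go(func() error {
			objects := make([]*s3.ObjectIdentifier, 0, len(batch))
			for _, k := range batch {
				objects = append(objects, &s3.ObjectIdentifier{Key: aws.String(k)})
			}
			_, err := svc.DeleteObjects(&s3.DeleteObjectsInput{
				Bucket: aws.String(bucket),
				Delete: &s3.Delete{Objects: objects, Quiet: aws.Bool(true)},
			})
			return err
		})
	}

	// Wait blocks until every batch has completed and returns the first
	// error encountered, if any.
	return g.Wait()
}
```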
