catfile: Introduce request queues to allow batching reads

Merged Patrick Steinhardt requested to merge pks-catfile-queue into master
1 unresolved thread
3 files changed, +361 −50
  • aa318bf0
    The object reader will, for each requested object, write the revision
    into the catfile process and then wait for the object's header to be
    written to the process's standard output. This pattern is quite
    inefficient given that we're forced to wait for the roundtrip to
    complete, even though a lot of callsites could easily split up writing
    the request and reading the object. Furthermore, we cannot use buffered
    I/O and are forced to create a separate tracing span per requested
    object.
    
    Prepare to fix these shortcomings by reworking the object reader's
    internals to use an object reader queue. This allows us to split up
    requesting and reading objects, opens up a way to use buffered I/O and
    furthermore allows us to use a single tracing span across the lifetime
    of the queue.
    
    No change in behaviour is expected given that the queue is not yet used
    by anything except as an implementation detail of the `Object()`
    function.
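
To make the batching idea concrete, here is a minimal sketch of such a request queue. The type and method names (`requestQueue`, `RequestObject`, `Flush`, `ReadObject`) are illustrative assumptions rather than Gitaly's actual API, and an in-memory pipe stands in for the real `git cat-file --batch` process.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
)

// requestQueue sketches the queue described in the commit message: requests
// are written through a buffered writer and objects are read back in request
// order, so callers can batch several requests before reading any response.
type requestQueue struct {
	stdin  *bufio.Writer
	stdout *bufio.Reader
}

// RequestObject queues up a request for the given revision. Nothing is sent
// until Flush is called, which is what allows batching multiple requests
// into a single roundtrip.
func (q *requestQueue) RequestObject(revision string) error {
	_, err := q.stdin.WriteString(revision + "\n")
	return err
}

// Flush sends all batched requests to the backing process.
func (q *requestQueue) Flush() error {
	return q.stdin.Flush()
}

// ReadObject reads the header of the next requested object. Responses arrive
// in request order, so the caller must read one object per queued request.
func (q *requestQueue) ReadObject() (string, error) {
	header, err := q.stdout.ReadString('\n')
	if err != nil {
		return "", fmt.Errorf("reading object header: %w", err)
	}
	return header[:len(header)-1], nil
}

func main() {
	// Stand in for `git cat-file --batch`: answer every requested revision
	// with a fake object header.
	requests, stdin := io.Pipe()
	stdout, responses := io.Pipe()
	go func() {
		scanner := bufio.NewScanner(requests)
		for scanner.Scan() {
			fmt.Fprintf(responses, "%s commit 123\n", scanner.Text())
		}
		responses.Close()
	}()

	queue := &requestQueue{
		stdin:  bufio.NewWriter(stdin),
		stdout: bufio.NewReader(stdout),
	}

	// Batch two requests, flush them in one go, then read both objects.
	for _, revision := range []string{"refs/heads/main", "refs/heads/feature"} {
		if err := queue.RequestObject(revision); err != nil {
			panic(err)
		}
	}
	if err := queue.Flush(); err != nil {
		panic(err)
	}
	for i := 0; i < 2; i++ {
		header, err := queue.ReadObject()
		if err != nil {
			panic(err)
		}
		fmt.Println(header)
	}
}
```

The key property is that `RequestObject` only appends to a write buffer, so several requests can be flushed in a single roundtrip before any response is read; responses then have to be consumed in request order.
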
@@ -218,7 +218,7 @@ func TestCache_ObjectReader(t *testing.T) {
// We're cheating a bit here to avoid creating a racy test by reaching into the
// process and trying to read from its stdout. If the cancel did kill the process as
// expected, then the stdout should be closed and we'll get an EOF.
-output, err := io.ReadAll(objectReaderImpl.stdout)
+output, err := io.ReadAll(objectReaderImpl.queue.stdout)
if err != nil {
require.True(t, errors.Is(err, os.ErrClosed))
} else {