kafka: stop decompressing once all input is consumed

Summary

  • Stop decompressing once all input is consumed
  • Update comment of expected return value of ZSTD_decompressStream().

Background & Motivation

The fix in !8975 (merged) handles compressed data of length 0, but not length > 0.

Attached is a Kafka packet with length=1 whose dissection results in an infinite loop: kafka-zstd-length1.pcap

This packet was derived from this capture in Discord. Frame 232668 was extracted with editcap:

./build/run/editcap -r .h/ultimate_wireshark_protocols_pcap_220213.pcap .h/ultimate-kafka-zstd.pcap 232668

The length field was then manually changed from 0 to 1.

The proposed fix agrees with the streaming_decompression.c example from zstd: https://github.com/facebook/zstd/blob/a89e6b6812d630a749a691c24bcec58f8d0942b2/examples/streaming_decompression.c#L42-L47

The updated comment is copied from http://facebook.github.io/zstd/zstd_manual.html#Chapter9:

@return : 0 when a frame is completely decoded and fully flushed,
      or an error code, which can be tested using ZSTD_isError(),
      or any other value > 0, which means there is still some decoding or flushing to do to complete current frame :
                              the return value is a suggested next input size (just a hint for better latency)
                              that will never request more than the remaining frame size.
Edited by Kevin Albertson
