Wireshark Foundation / wireshark · Issue #17811 · Closed
Issue created Dec 28, 2021 by Sharon Brizinov (@sean007, Contributor)

KAFKA dissector excessive memory and CPU consumption - denial of service

Summary

It is possible to trigger an (almost) infinite loop (up to the guint64 maximum) in the KAFKA dissector with a specially crafted KAFKA packet that declares an unbounded number of tagged fields while pinning the reading pointer to the same position on every iteration. Such a packet consumes an excessive amount of memory and 100% of a CPU core, which eventually leads to a denial of service via packet injection or a crafted capture file.

I believe the bug was introduced while fixing another issue in the Kafka dissector. I tried to trace it back to the relevant commit/MR, and I think it is this one: !890 (merged)

Technical details

Kafka is a binary protocol carried over TCP port 9092. It can transfer tagged fields, which are encoded as a varint length prefix followed by the data. We were able to craft a huge (max guint64) array of tagged fields in which each "field" has a length and value of zero bytes. This caused tshark to never advance its reading pointer: it kept decoding the next varint from the same offset without moving forward. As a result, even a very small packet can leave tshark stuck in a loop while consuming an excessive amount of memory and CPU.
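For concreteness, here is a minimal sketch of what the crafted tagged-fields section could look like. The exact bytes are my reconstruction from the description in this report (the preceding Kafka request header is omitted), not the actual PoC contents:

```c
/* Hypothetical byte layout of the crafted tagged-fields section
 * (reconstructed from the description; not the actual PoC bytes). */
static const unsigned char crafted_tagged_fields[] = {
    /* A valid unsigned varint encoding a huge element count:
     * nine 0xFF continuation bytes plus a terminating 0x01 decode
     * to the guint64 maximum, 0xFFFFFFFFFFFFFFFF. */
    0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x01,
    /* Field bytes: every byte has the continuation bit (0x80) set,
     * so no complete varint can be decoded at any offset here and
     * each read attempt consumes zero bytes. */
    0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
};
```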

The relevant functions are listed below (a simplified sketch of the pattern follows the list):

dissect_kafka
dissect_kafka_tagged_fields - reads a **count**, a varint-encoded (potentially huge) number.
dissect_kafka_array_elements - loops **count** times and calls **dissect_kafka_tagged_field** each time.
dissect_kafka_tagged_field - tries to read a varint size followed by the field data.
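To make the call chain concrete, here is a simplified sketch of the problematic pattern. Names, signatures, and control flow are condensed for illustration; this is not the actual packet-kafka.c code:

```c
#include <epan/tvbuff.h>  /* builds only inside the Wireshark source tree */
#include <epan/proto.h>

/* Simplified sketch of the vulnerable pattern, not the real dissector. */
static int
sketch_dissect_tagged_fields(tvbuff_t *tvb, int offset)
{
    guint64 count;
    /* Varints are at most 10 bytes; tvb_get_varint returns the number
     * of bytes consumed, or 0 on failure. The crafted packet makes
     * count decode to (almost) the guint64 maximum. */
    guint len = tvb_get_varint(tvb, offset, 10, &count, ENC_VARINT_PROTOBUF);
    offset += len;

    for (guint64 i = 0; i < count; i++) {
        guint64 field_len;
        /* On the crafted bytes this read always fails and returns 0 ... */
        len = tvb_get_varint(tvb, offset, 10, &field_len, ENC_VARINT_PROTOBUF);
        /* ... so the offset never advances, and the loop spins for up
         * to 2^64 iterations, burning CPU and accumulating memory. */
        offset += len;
    }
    return offset;
}
```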

tvb_get_varint tries to read a varint of at most 10 bytes and returns the number of bytes it consumed; if it fails to decode a valid varint, it returns 0. Therefore, we can encode a large count as a valid varint and follow it with 10 bytes that all have the high (continuation) bit set (>= 0x80): every tagged-field read then fails without changing the offset, and the loop keeps reading from the same place over and over again.
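As a self-contained illustration of that contract, here is a minimal protobuf-style varint decoder (a simplified reimplementation for demonstration, not Wireshark's actual tvb_get_varint internals) showing that ten consecutive bytes with the high bit set decode to nothing:

```c
#include <stdio.h>
#include <string.h>

/* Minimal protobuf-style varint decoder mirroring the contract above:
 * returns the number of bytes consumed, or 0 if no terminating byte
 * (high bit clear) appears within the 10-byte maximum. */
static unsigned varint_decode(const unsigned char *buf, size_t buflen,
                              unsigned long long *out)
{
    unsigned long long value = 0;
    for (unsigned i = 0; i < 10 && i < buflen; i++) {
        value |= (unsigned long long)(buf[i] & 0x7f) << (7 * i);
        if ((buf[i] & 0x80) == 0) {  /* terminating byte: success */
            *out = value;
            return i + 1;
        }
    }
    return 0;  /* ran out of bytes or 10 continuation bytes: invalid */
}

int main(void)
{
    unsigned char bad[10];
    memset(bad, 0x80, sizeof(bad));  /* continuation bit set everywhere */
    unsigned long long v;
    /* Prints 0: the decode fails, so a caller that blindly does
     * "offset += returned_len" never moves forward. */
    printf("bytes consumed: %u\n", varint_decode(bad, sizeof(bad), &v));
    return 0;
}
```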

[Image: packet PoC]

Steps to reproduce

Open the provided pcaps with Wireshark:

claroty_poc_kafka_inf_tagged_fields_1_packet.pcap

kafka_inf_tagged_fields_10000_packets.pcap

kafka_inf_tagged_fields_100_packets.pcap

What is the current bug behavior?

Wireshark/tshark will try to parse up to the guint64 maximum number of Kafka tagged fields, reading from the same place each time: the packet is manipulated in such a way that the offset is never advanced.

What is the expected correct behavior?

Once Wireshark/tshark encounters a malformed varint, it should stop processing the current packet, or at least advance the reading pointer so that no infinite loop can occur.
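One possible shape for such a guard, as a sketch of a hardened variant of the loop above (not the patch that was actually applied; pinfo/ti and ei_kafka_bad_varint are assumed context from the surrounding dissector, and ei_kafka_bad_varint is a hypothetical expert-info field):

```c
/* Sketch of a hardened loop: bail out as soon as a varint fails to
 * decode, so a zero-byte read can never leave the offset in place. */
static int
sketch_dissect_tagged_fields_fixed(tvbuff_t *tvb, packet_info *pinfo,
                                   proto_item *ti, int offset)
{
    guint64 count;
    guint len = tvb_get_varint(tvb, offset, 10, &count, ENC_VARINT_PROTOBUF);
    if (len == 0)
        return -1;
    offset += len;

    for (guint64 i = 0; i < count; i++) {
        guint64 field_len;
        len = tvb_get_varint(tvb, offset, 10, &field_len, ENC_VARINT_PROTOBUF);
        if (len == 0) {
            /* Malformed varint: flag it and stop dissecting this
             * packet instead of looping forever at the same offset.
             * ei_kafka_bad_varint is a hypothetical expert-info field. */
            expert_add_info(pinfo, ti, &ei_kafka_bad_varint);
            return -1;
        }
        offset += len;
        /* ... dissect field_len bytes of tagged-field data ... */
        offset += (int)field_len;
    }
    return offset;
}
```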

Sample capture file

claroty_poc_kafka_inf_tagged_fields_1_packet.pcap

kafka_inf_tagged_fields_10000_packets.pcap

kafka_inf_tagged_fields_100_packets.pcap

Relevant logs and/or screenshots

[Image: PoC screenshot]

Build information

Version 3.6.0 (v3.6.0-0-g3a34e44d02c9) 
Edited Dec 28, 2021 by Sharon Brizinov