🚨 [security] [ruby] Update rack 3.1.16 → 3.1.17 (patch)
This dependency update fixes known security vulnerabilities. Please see the details below and assess their impact carefully. We recommend merging and deploying this as soon as possible!
Here is everything you need to know about this update. Please take a good look at what changed and the test results before merging this pull request.
What changed?
↗️ rack (indirect, 3.1.16 → 3.1.17) · Repo · Changelog
Security Advisories 🚨
🚨 Rack's unbounded multipart preamble buffering enables DoS (memory exhaustion)
Summary
`Rack::Multipart::Parser` buffers the entire multipart preamble (the bytes before the first boundary) in memory without any size limit. A client can send a large preamble followed by a valid boundary, causing significant memory use and potential process termination due to out-of-memory (OOM) conditions.
Details
While searching for the first boundary, the parser appends incoming data to a shared buffer (`@sbuf.concat(content)`) and scans for the boundary pattern with `@sbuf.scan_until(@body_regex)`. If the boundary is not yet found, the parser continues buffering data indefinitely. There is no trimming or size cap on the preamble, allowing attackers to send arbitrary amounts of data before the first boundary.
Impact
Remote attackers can trigger large transient memory spikes by including a long preamble in multipart/form-data requests. The impact scales with allowed request sizes and concurrency, potentially causing worker crashes or severe slowdown due to garbage collection.
Mitigation
- Upgrade: Use a patched version of Rack that enforces a preamble size limit (e.g., 16 KiB) or discards preamble data entirely per RFC 2046 § 5.1.1.
- Workarounds:
- Limit total request body size at the proxy or web server level.
- Monitor memory and set per-process limits to prevent OOM conditions.
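The attack shape described above can be sketched in plain Ruby. The boundary value, field name, and sizes below are illustrative, not taken from Rack or the advisory:

```ruby
BOUNDARY = "AaB03x" # arbitrary example boundary

# Build a multipart/form-data body whose preamble (the bytes before the
# first "--boundary" line) has the given size. RFC 2046 § 5.1.1 says the
# preamble should be ignored, but a vulnerable parser buffers all of it
# in memory while scanning for the boundary.
def body_with_preamble(preamble_bytes)
  preamble = "x" * preamble_bytes # attacker-controlled filler
  [
    preamble,
    "--#{BOUNDARY}",
    %(Content-Disposition: form-data; name="field"),
    "",
    "value",
    "--#{BOUNDARY}--",
  ].join("\r\n")
end

body = body_with_preamble(100 * 1024) # 100 KiB preamble; scales up at will
puts body.bytesize
```

The body is otherwise well-formed, so nothing downstream of the parser would flag it; only a size cap on the preamble itself stops the buffering.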
🚨 Rack: Multipart parser buffers large non‑file fields entirely in memory, enabling DoS (memory exhaustion)
Summary
`Rack::Multipart::Parser` stores non-file form fields (parts without a `filename`) entirely in memory as Ruby `String` objects. A single large text field in a multipart/form-data request (hundreds of megabytes or more) can consume an equivalent amount of process memory, potentially leading to out-of-memory (OOM) conditions and denial of service (DoS).
Details
During multipart parsing, file parts are streamed to temporary files, but non-file parts are buffered into memory:
The buffer is created as `body = String.new` (a non-file part goes to an in-RAM buffer) and grown with `@mime_parts[mime_index].body << content`. There is no size limit on these in-memory buffers. As a result, any large text field, while technically valid, will be loaded fully into process memory before being added to `params`.
Impact
Attackers can send large non-file fields to trigger excessive memory usage. Impact scales with request size and concurrency, potentially leading to worker crashes or severe garbage-collection overhead. All Rack applications processing multipart form submissions are affected.
Mitigation
- Upgrade: Use a patched version of Rack that enforces a reasonable size cap for non-file fields (e.g., 2 MiB).
- Workarounds:
- Restrict maximum request body size at the web-server or proxy layer (e.g., Nginx `client_max_body_size`).
- Validate and reject unusually large form fields at the application level.
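The application-level workaround can be sketched as a small Rack middleware. The class name, default limit, and response body below are hypothetical, not part of Rack:

```ruby
# Hypothetical middleware (illustrative name and default limit) that rejects
# multipart requests whose declared Content-Length exceeds a cap, before any
# multipart parsing happens. A proxy-level limit remains the more robust
# defense, since Content-Length can be absent for chunked request bodies.
class MultipartSizeCap
  def initialize(app, limit: 10 * 1024 * 1024) # 10 MiB; tune per app
    @app = app
    @limit = limit
  end

  def call(env)
    type   = env["CONTENT_TYPE"].to_s
    length = env["CONTENT_LENGTH"].to_i
    if type.start_with?("multipart/form-data") && length > @limit
      # 413 Payload Too Large, returned without reading the body
      return [413, { "content-type" => "text/plain" }, ["Payload Too Large"]]
    end
    @app.call(env)
  end
end
```

Mounted with something like `use MultipartSizeCap, limit: 5 * 1024 * 1024` in `config.ru` (values again hypothetical), this runs before the application ever touches `params`.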
🚨 Rack's multipart parser buffers unbounded per-part headers, enabling DoS (memory exhaustion)
Summary
`Rack::Multipart::Parser` can accumulate unbounded data when a multipart part's header block never terminates with the required blank line (`CRLF CRLF`). The parser keeps appending incoming bytes to memory without a size cap, allowing a remote attacker to exhaust memory and cause a denial of service (DoS).
Details
While reading multipart headers, the parser waits for `CRLF CRLF` using `@sbuf.scan_until(/(.*?\r\n)\r\n/m)`. If the terminator never appears, it continues appending data (`@sbuf.concat(content)`) indefinitely. There is no limit on accumulated header bytes, so a single malformed part can consume memory proportional to the request body size.
Impact
Attackers can send incomplete multipart headers to trigger high memory use, leading to process termination (OOM) or severe slowdown. The effect scales with request size limits and concurrency. All applications handling multipart uploads may be affected.
Mitigation
- Upgrade to a patched Rack version that caps per-part header size (e.g., 64 KiB).
- Until then, restrict maximum request sizes at the proxy or web-server layer (e.g., Nginx `client_max_body_size`).
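The malformed input this advisory describes can be sketched in Ruby. The boundary and header name are illustrative; the terminator regex is the one quoted in the advisory:

```ruby
# Sketch (illustrative values) of a part whose header block never ends with
# the blank line (CRLF CRLF) that the parser waits for. Every header line
# is individually terminated, but "\r\n\r\n" never appears, so a vulnerable
# parser keeps buffering bytes while scanning for the terminator.
BOUNDARY = "AaB03x"

malformed = "--#{BOUNDARY}\r\n" + ("X-Filler: #{"a" * 64}\r\n" * 1_000)

# Terminator pattern quoted in the advisory:
TERMINATOR = /(.*?\r\n)\r\n/m

puts malformed.match?(TERMINATOR) # false: no blank line is ever present
```

Because the scan never succeeds, each additional chunk of the request body just grows the buffer, which is why the fix caps per-part header size rather than waiting for the terminator.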
Release Notes
3.1.17 (from changelog)
Security
- CVE-2025-61772 Multipart parser buffers unbounded per-part headers, enabling DoS (memory exhaustion)
- CVE-2025-61771 Multipart parser buffers large non‑file fields entirely in memory, enabling DoS (memory exhaustion)
- CVE-2025-61770 Unbounded multipart preamble buffering enables DoS (memory exhaustion)
Does any of this look wrong? Please let us know.
Commits
See the full diff on Github. The new version differs by 4 commits:
👉 No CI detected
You don't seem to have any Continuous Integration service set up!
Without a service that tests the Depfu branches and pull requests, we can't tell you whether incoming updates actually work with your app. We think this degrades the service we're trying to provide to the point where it is more or less meaningless.
This is fine if you just want to give Depfu a quick try. If you want to really let Depfu help you keep your app up-to-date, we recommend setting up a CI system:
* Circle CI, Semaphore and Github Actions are all excellent options.
* If you use something like Jenkins, make sure that you're using the Github integration correctly so that it reports status data back to Github.
* If you have already set up a CI for this repository, you might need to check your configuration. Make sure it will run on all new branches. If you don't want it to run on every branch, you can whitelist branches starting with `depfu/`.

Depfu will automatically keep this PR conflict-free, as long as you don't add any commits to this branch yourself. You can also trigger a rebase manually by commenting with @depfu rebase.
All Depfu comment commands
- @depfu rebase: Rebases against your default branch and redoes this update
- @depfu recreate: Recreates this PR, overwriting any edits that you've made to it
- @depfu merge: Merges this PR once your tests are passing and conflicts are resolved
- @depfu cancel merge: Cancels automatic merging of this PR
- @depfu close: Closes this PR and deletes the branch
- @depfu reopen: Restores the branch and reopens this PR (if it's closed)
- @depfu pause: Ignores all future updates for this dependency and closes this PR
- @depfu pause [minor|major]: Ignores all future minor/major updates for this dependency and closes this PR
- @depfu resume: Future versions of this dependency will create PRs again (leaves this PR as is)