DoS in Markdown preview caused by a bug in TaskListFilter

⚠️ Please read the process on how to fix security issues before starting to work on the issue. Vulnerabilities must be fixed in a security mirror.

HackerOne report #3517928 by maksyche on 2026-01-20, imported by @greg:

Report | Attachments | How To Reproduce

HackerOne Analyst Summary

Summary of the issue

The researcher found a denial-of-service vulnerability in GitLab's TaskListFilter implementation: the yield_text_nodes_without_descending_into method processes sibling nodes twice, once through stack.concat(it.children.reverse) and again through stack << it.next_sibling. When siblings are nested inside inline wrappers like <strong>, each sibling gets processed multiple times, leading to exponential CPU consumption during markdown rendering.

Steps to reproduce

  1. Create a GitLab instance (tested on AWS c5.2xlarge)
  2. Install GitLab CE and create a public project (e.g., root/dos)
  3. Upload the provided payload file (dos.md).
  4. Access the markdown preview at http://localhost:3000/root/dos/-/blob/main/dos.md?format=json&viewer=rich

Screenshot_2026-01-22_at_12.54.28_PM.png

  5. Execute repeated unauthenticated requests: while true; do curl -s -o /dev/null -w "%{time_total}s\n" "http://localhost:3000/root/dos/-/blob/main/dos.md?format=json&viewer=rich" & sleep 1; done

Screenshot_2026-01-22_at_12.59.08_PM.png

  6. Observe that individual requests take 20+ seconds and continuous requests render the instance unusable.

Screenshot_2026-01-22_at_1.00.27_PM.png

Impact statement

If exploited, this vulnerability will allow unauthenticated attackers to consume excessive CPU resources on GitLab instances through minimal HTTP requests. The attack will cause complete service disruption within seconds when targeting public projects, as each markdown preview request will trigger exponential processing loops. The vulnerability will affect all markdown rendering contexts including wikis, merge requests, and comments, making it possible to sustain denial of service with very low attack traffic (1-5% of expected instance capacity).

Original Report

Hi

Summary

The current implementation of yield_text_nodes_without_descending_into in task_list_filter.rb pushes sibling nodes onto the stack twice:

  • stack.concat(it.children.reverse) - all children at once.
  • stack << it.next_sibling - each child's next sibling.

When siblings are nested inside an inline wrapper (like <strong>), each sibling gets processed multiple times. An attacker can create a small, publicly accessible .md file with a huge number of nested siblings in a task. A very small number of unauthenticated requests to preview this file causes CPU overload, and the instance becomes completely unusable.
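The blow-up can be sketched with a toy model of the traversal. The Node struct and tree builder below are illustrative assumptions, not the actual GitLab source; the point is that a node reachable both as a child and as a next sibling gets pushed twice, and with nesting each level doubles the work below it:

```ruby
# Toy model of the buggy traversal; illustrative only, not the actual
# GitLab source. Each wrapper can reach the stack twice: once via its
# parent's children, once as the next_sibling of the node before it.
Node = Struct.new(:name, :children, :next_sibling)

# Build depth levels of <strong> wrappers, each containing a text node
# whose next_sibling is the wrapper chain for the level below.
def build_nested(depth)
  node = Node.new("leaf", [], nil)
  depth.times do |i|
    text = Node.new("t#{i}", [], nil)
    text.next_sibling = node
    node = Node.new("strong#{i}", [text, node], nil)
  end
  node
end

# Mimics the stack handling described in the report: every pop pushes
# all children plus the next sibling, with no duplicate detection.
def pops(root)
  count = 0
  stack = [root]
  while (it = stack.pop)
    count += 1
    stack.concat(it.children.reverse)            # every child...
    stack << it.next_sibling if it.next_sibling  # ...and the sibling again
  end
  count
end

pops(build_nested(10))  # => 3070 pops for a tree of only 21 nodes
```

In this model the pop count follows f(d) = 3 * 2^d - 2, so a payload a few dozen wrapper levels deep is enough to pin a CPU core for tens of seconds even though the document itself stays tiny.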

Steps to reproduce
  1. Create a virtual machine sized for the 1k-user reference architecture (I used AWS c5.2xlarge).
  2. Install GitLab CE (I used Ubuntu). During the installation, I set EXTERNAL_URL to http://gitlab.example.com (without HTTPS) and updated my local /etc/hosts to map the machine's IP address to this domain.
  3. Log into GitLab and create a public root/dos project with just a README file.
  4. Upload the dos.md file. The payload is small enough to stay under the timeouts of other filters, but large enough to cause a DoS.
  5. Observe that when you try to preview the file, the request either fails or takes 20+ seconds to load.
  6. Since the project is public, you can query this file unauthenticated. Run this command from your machine to request the file every second (that's ~5% of the expected traffic for this instance type; I managed to make the instance unusable even with 1-2% of the traffic):
while true; do curl -s -o /dev/null -w "%{time_total}s\n" "http://gitlab.example.com/root/dos/-/blob/main/dos.md?format=json&viewer=rich" & sleep 1; done  
  7. Observe that GitLab becomes completely unusable within seconds.
Impact

A small number of unauthenticated requests can make a GitLab instance completely unusable within seconds.

What is the current bug behavior?

The server exhausts all CPU resources within seconds, even with less than 5% of the expected traffic (in my tests, even 1-2% was enough).

What is the expected correct behavior?
  • Track visited nodes to prevent duplicate processing.
  • Add a timeout to the filter.
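The first suggestion, tracking visited nodes, could be sketched as follows against a toy Node model (the struct, builder, and method names are illustrative assumptions, not the actual filter code):

```ruby
# Sketch of the suggested visited-node guard; illustrative only,
# not the actual GitLab filter code.
Node = Struct.new(:name, :children, :next_sibling)

# Same nesting shape as the attack: depth levels of wrappers, each with
# a text node whose next_sibling points at the wrapper below it.
def build_nested(depth)
  node = Node.new("leaf", [], nil)
  depth.times do |i|
    text = Node.new("t#{i}", [], nil)
    text.next_sibling = node
    node = Node.new("strong#{i}", [text, node], nil)
  end
  node
end

def pops_with_tracking(root)
  count = 0
  seen = {}.compare_by_identity  # identity, not Struct value equality
  stack = [root]
  while (it = stack.pop)
    next if seen[it]             # skip nodes already processed
    seen[it] = true
    count += 1
    stack.concat(it.children.reverse)
    stack << it.next_sibling if it.next_sibling
  end
  count
end

pops_with_tracking(build_nested(10))  # => 21, each node processed once
```

With duplicates skipped, the work becomes linear in the number of nodes, while the timeout from the second suggestion remains useful as defence in depth against inputs that are expensive for other reasons.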
Relevant logs and/or screenshots

Here's the video PoC:

Results of GitLab environment info
sudo gitlab-rake gitlab:env:info 

System information  
System:		Ubuntu 24.04  
Current User:	git  
Using RVM:	no  
Ruby Version:	3.2.8  
Gem Version:	3.7.1  
Bundler Version:2.7.1  
Rake Version:	13.0.6  
Redis Version:	7.2.11  
Sidekiq Version:7.3.9  
Go Version:	unknown

GitLab information  
Version:	18.8.1  
Revision:	d0311b03573  
Directory:	/opt/gitlab/embedded/service/gitlab-rails  
DB Adapter:	PostgreSQL  
DB Version:	16.11  
URL:		http://gitlab.example.com  
HTTP Clone URL:	http://gitlab.example.com/some-group/some-project.git  
SSH Clone URL:	git@gitlab.example.com:some-group/some-project.git  
Using LDAP:	no  
Using Omniauth:	yes  
Omniauth Providers: 

GitLab Shell  
Version:	14.45.5  
Repository storages:  
- default: 	unix:/var/opt/gitlab/gitaly/gitaly.socket  
GitLab Shell path:		/opt/gitlab/embedded/service/gitlab-shell

Gitaly  
- default Address: 	unix:/var/opt/gitlab/gitaly/gitaly.socket  
- default Version: 	18.8.1  
- default Git Version: 	2.52.GIT  


Attachments

Warning: Attachments received through HackerOne, please exercise caution!

How To Reproduce

Please add reproducibility information to this section: