Relax blobs complexity in favor of limiting data

What does this MR do and why?

Relax blobs complexity in favor of limiting data

In 18.5 we received a security issue describing how to exploit our GraphQL endpoint using large blobs. We fixed this by modifying the complexity of the blob fields that load data.

While this approach protected GitLab from the security exploit, it hurt non-malicious requests by limiting queries to a single blob even when the blobs being requested were small.

This change reverts the complexity changes and adds a new mechanism to ensure the blob data being served stays under a 20 MB limit.

This will allow more requests through while still protecting the GitLab servers.
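A minimal sketch of how such a size guard might work (all names here are illustrative, not the actual classes or constants introduced by this MR; the first-blob-always-served behaviour is inferred from the testing steps below):

require "graphql"

# Hypothetical cumulative size guard for blob-loading fields.
class BlobSizeGuard
  MAX_BLOB_BYTES = 20 * 1024 * 1024 # 20 MB

  def initialize
    @bytes_served = 0
  end

  # Called once per blob before its data is added to the response. The
  # first blob is always served, so a single oversized blob still succeeds;
  # any further blob after the limit is reached raises an execution error.
  def check!(blob)
    if @bytes_served >= MAX_BLOB_BYTES
      raise GraphQL::ExecutionError, "blob data exceeds the 20 MB limit"
    end

    @bytes_served += blob.size
  end
end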

Related to #576497

Changelog: fixed

References

GitLab 18.4.2 introduced GraphQL query complexi... (#576497)

Testing

  1. Generate a large blob (> 20 MB) and upload it to your GDK, for example:
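One quick way to produce such a file (any method that yields more than 20 MB of text works), sketched in Ruby:

# Write a ~21 MB text file to commit and push to your GDK test repository.
File.open("bigfile.txt", "w") do |f|
  21.times { f.write("a" * 1024 * 1024) } # 21 x 1 MiB of filler
end

Commit the file to the repository you plan to query (gitlab-org/gitlab-test below).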
  2. In the GraphQL explorer, request your blob on its own:
{
  project(fullPath: "gitlab-org/gitlab-test") {
    id
    repository {
    	blobs(paths: ["bigfile.txt"], ref: "master") {  
        nodes {  
          rawTextBlob
          plainData
        }  
      }
    }
  }
}
  3. This should return data (if no error is returned, that's a success). The GraphQL explorer may have trouble actually displaying the response as it's quite large; using a proper API client helps. I use RapidAPI, but Postman should also work.
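If the explorer struggles, the same query can be sent from any HTTP client. A minimal Ruby sketch, assuming a GDK instance at http://gdk.test:3000 and a personal access token (both placeholders to adjust for your setup):

require "json"
require "net/http"
require "uri"

uri = URI("http://gdk.test:3000/api/graphql")
query = <<~GRAPHQL
  {
    project(fullPath: "gitlab-org/gitlab-test") {
      repository {
        blobs(paths: ["bigfile.txt"], ref: "master") {
          nodes { rawTextBlob }
        }
      }
    }
  }
GRAPHQL

request = Net::HTTP::Post.new(uri)
request["Content-Type"] = "application/json"
request["Authorization"] = "Bearer <your-personal-access-token>"
request.body = { query: query }.to_json

response = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(request) }
# A body with no "errors" key means the blob was served successfully.
puts(JSON.parse(response.body).key?("errors") ? "error returned" : "success")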
  4. In the GraphQL explorer, request your blob together with another blob:
{
  project(fullPath: "gitlab-org/gitlab-test") {
    id
    repository {
    	blobs(paths: ["bigfile.txt", "another.md"], ref: "master") {  
        nodes {  
          rawTextBlob
          plainData
        }  
      }
    }
  }
}
  5. This should now return an error.
