GitLab GraphQL API returns 'Internal server error' when searching for group vulnerabilities filtered by multiple scanners
Summary
My company has a script that, for a given group, returns information about all Critical/High severity, Detected/Confirmed vulnerabilities reported by the Gemnasium, Semgrep, and OWASP ZAP scanners. This is part of our security effort to switch to GitLab security scanning, and these settings were identified to let us match our existing scanning functionality. We cannot use the filtering in the security dashboard because scanners are not filterable there by name, only by report type, and multiple other scanners are also being run.
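For reference, here is a minimal sketch of the kind of query the script issues (the gql setup, the selected fields, and the scanner external IDs "gemnasium", "semgrep", and "zaproxy" are illustrative assumptions, not the exact contents of our script):

```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

gitlab_token = "..."          # personal access token with read_api scope
group_fullpath = "my-group"   # full path of the group to query

transport = RequestsHTTPTransport(
    url="https://gitlab.com/api/graphql",
    headers={"Authorization": f"Bearer {gitlab_token}"},
)
client = Client(transport=transport, fetch_schema_from_transport=False)

# Critical/High, Detected/Confirmed vulnerabilities from several scanners at once.
query = gql("""
query ($fullPath: ID!, $scanners: [String!]) {
  group(fullPath: $fullPath) {
    vulnerabilities(
      severity: [CRITICAL, HIGH]
      state: [DETECTED, CONFIRMED]
      scanner: $scanners
      first: 100
    ) {
      nodes {
        title
        severity
        state
        scanner { externalId name }
      }
      pageInfo { hasNextPage endCursor }
    }
  }
}
""")

result = client.execute(query, variable_values={
    "fullPath": group_fullpath,
    "scanners": ["gemnasium", "semgrep", "zaproxy"],
})
print(result)
```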
The script fails with an 'Internal server error' response when hitting the GraphQL API. It does sometimes succeed with the same settings on the same group, so this may be a complexity problem that does not appear on smaller projects running fewer scans with fewer findings. Reducing the result page size from 100 (the default) to 10 does not fix the issue.
A temporary workaround is to search for only one scanner at a time, so this may be related to a complexity issue when filtering vulnerabilities by more than one scanner. The problem was discovered on the morning of August 4th (CST), so an update deployed the night before may have changed something.
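A rough sketch of that workaround, assuming the query sketch above (again illustrative, not our exact script):

```python
# Workaround sketch: issue one request per scanner instead of passing all
# three external IDs in a single scanner filter.
all_nodes = []
for scanner_id in ["gemnasium", "semgrep", "zaproxy"]:
    result = client.execute(query, variable_values={
        "fullPath": group_fullpath,
        "scanners": [scanner_id],  # a single scanner per request does not 500
    })
    all_nodes.extend(result["group"]["vulnerabilities"]["nodes"])
print(all_nodes)
```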
The issue was originally discovered when running against larger projects in a private group hosted on gitlab.com. Multiple other scanners run in that environment, so the three scanners being searched for are a subset of those in use.
Steps to reproduce
- Set the group_fullpath and gitlab_token values in the provided script.
- Run the script, which sends a request filtering by multiple scanners to the gitlab.com/api/graphql endpoint.
What is the current bug behavior?
An 'Internal server error' is returned instead of the API response.

What is the expected correct behavior?
In this example script, the JSON response with the resulting vulnerability information should be printed using Python's print() function.
Relevant logs and/or screenshots
Output:
```
Traceback (most recent call last):
  File "/home/vagrant/IdeaProjects/vulnerability-management-scripts/test-report.py", line 80, in <module>
    resultJson = client.execute(query)
  File "/usr/local/lib/python3.9/dist-packages/gql/client.py", line 372, in execute
    data = loop.run_until_complete(
  File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/dist-packages/gql/client.py", line 269, in execute_async
    return await session.execute(
  File "/usr/local/lib/python3.9/dist-packages/gql/client.py", line 1171, in execute
    raise TransportQueryError(
gql.transport.exceptions.TransportQueryError: {'message': 'Internal server error'}
```
Output of checks
This bug happens on GitLab.com
Possible fixes
- Filter out IN clauses that do not belong to the primary queried table when determining which columns should be rewritten for the Cartesian product optimisation. (This means IN clauses on joined tables will not be optimised.)
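Purely as a conceptual illustration of that idea (GitLab's actual optimiser is Ruby; the function and table names below are only examples, not real GitLab code):

```python
# Conceptual sketch only, not GitLab code: when collecting IN clauses to
# rewrite for the Cartesian product optimisation, keep only the clauses
# whose table is the primary queried table; IN clauses coming from joins
# are left untouched (and therefore not optimised).
def in_clauses_to_rewrite(in_clauses, primary_table):
    """in_clauses: iterable of (table, column, values) tuples."""
    return [clause for clause in in_clauses if clause[0] == primary_table]

example_clauses = [
    ("vulnerability_reads", "severity", [6, 7]),             # primary table: rewritten
    ("vulnerability_scanners", "external_id", ["semgrep"]),  # joined table: skipped
]
print(in_clauses_to_rewrite(example_clauses, "vulnerability_reads"))
```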