
Monitor Alert Webhook Endpoint fails with 500 if "severity" is unknown/unmapped

Summary

When integrating an external alert source via the "Generic HTTP Endpoint" integration, the endpoint responds with a 500 (application error) if the "severity" key in the JSON payload is unknown, invalid, or unmapped.

Related to #267376 (closed).


Discovered during the review of https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/12238#note_484344233

Steps to reproduce

  • Send a payload to the alert integration with this format:

```json
{"title": "My Alert", "severity": "something invalid or unknown"}
```

It will return a 500.
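For convenience, here is a minimal repro sketch in Python using `requests`; the endpoint URL and authorization key are placeholders and must be copied from the project's alert integration settings.

```python
# Minimal repro sketch. ENDPOINT and AUTH_KEY are placeholders (assumptions) --
# copy the real URL and authorization key from the project's alert integration settings.
import requests

ENDPOINT = "https://gitlab.example.com/<namespace>/<project>/alerts/notify.json"  # placeholder
AUTH_KEY = "<authorization key from the integration settings>"                    # placeholder

payload = {"title": "My Alert", "severity": "something invalid or unknown"}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {AUTH_KEY}"},
    timeout=10,
)

# Actual behaviour: HTTP 500, instead of a 4xx rejection or a 200 with a mapped severity.
print(response.status_code)
```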

Example Project

https://gitlab.com/gitlab-examples/ops/incident-setup/everyone/tanuki-inc/

What is the current bug behavior?

The endpoint returns 500.

What is the expected correct behavior?

Several options are possible:

  • 1️⃣ Don't create the alert and reject with 422 Unprocessable Entity
  • 2️⃣ Create the alert (200 OK) but map the severity to unknown
  • 3️⃣ Create the alert (200 OK) but map the severity to critical (the current default when severity is missing)

Relevant logs and/or screenshots

Output of checks

Results of GitLab environment info

Expand for output related to GitLab environment info

(For installations with omnibus-gitlab package run and paste the output of:
`sudo gitlab-rake gitlab:env:info`)

(For installations from source run and paste the output of:
`sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production`)

Results of GitLab application Check

Expand for output related to the GitLab application check

(For installations with omnibus-gitlab package run and paste the output of:
`sudo gitlab-rake gitlab:check SANITIZE=true`)

(For installations from source run and paste the output of:
`sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production SANITIZE=true`)

(we will only investigate if the tests are passing)

Possible fixes

Implement option 1️⃣, 2️⃣ or 3️⃣
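Whichever option is chosen, the core change is to normalise the incoming severity before the alert is created instead of letting the lookup fail. A minimal, language-agnostic sketch in Python (the actual fix would live in the Rails alert payload handling; the set of accepted severities and the fallback value below are assumptions illustrating options 2️⃣/3️⃣):

```python
# Sketch only: illustrates normalising an unknown/invalid severity to a fallback
# value rather than raising. The accepted severity names are an assumption.
KNOWN_SEVERITIES = {"critical", "high", "medium", "low", "info", "unknown"}
FALLBACK_SEVERITY = "unknown"  # option 2️⃣; use "critical" for option 3️⃣


def normalize_severity(raw):
    """Return a known severity, falling back instead of erroring on bad input."""
    if isinstance(raw, str) and raw.strip().lower() in KNOWN_SEVERITIES:
        return raw.strip().lower()
    return FALLBACK_SEVERITY


assert normalize_severity("Critical") == "critical"
assert normalize_severity("something invalid or unknown") == FALLBACK_SEVERITY
assert normalize_severity(None) == FALLBACK_SEVERITY
```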
