Monitoring alert degenerate case on initial deploy
For an app with a single deploy to production, and a merge request with zero changes, the alert considers the MR deployed to production, which is technically true but misleading. What's really weird is that it also reports the MR as responsible for driving memory usage from some non-zero value to a larger one. I can't guess how that baseline is calculated, since presumably memory usage was zero before the first deploy. Is there some round-off causing datapoints taken part-way into the deploy to be counted as "before" instead of strictly before/after?
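
As a guess at the mechanism (a minimal sketch, not the actual alerting code; the function names, window size, and sample data below are all made up for illustration): if the before/after comparison buckets timestamps to a scrape interval or uses an inclusive boundary on the "before" window, a sample scraped during the deploy can be rounded back to the deploy minute, land in the "before" window, and produce a non-zero baseline even on the very first deploy.

```python
from datetime import datetime, timedelta

def mean(values):
    return sum(values) / len(values) if values else 0.0

def before_after_memory(datapoints, deploy_time, window=timedelta(minutes=30)):
    """datapoints: list of (timestamp, bytes) samples around a deploy."""
    # Suspected bug: the inclusive upper bound (<= deploy_time) means a
    # sample whose timestamp was bucketed/rounded to the deploy minute
    # counts as "before", even though it was scraped mid-deploy.
    before = [v for t, v in datapoints if deploy_time - window <= t <= deploy_time]
    after = [v for t, v in datapoints if deploy_time < t <= deploy_time + window]
    return mean(before), mean(after)

# First-ever deploy: no samples truly exist before it, yet the first
# sample (rounded to 12:00) is attributed to the "before" window.
deploy = datetime(2024, 1, 1, 12, 0)
samples = [
    (datetime(2024, 1, 1, 12, 0), 180e6),   # scraped mid-deploy, rounded to 12:00
    (datetime(2024, 1, 1, 12, 15), 200e6),
    (datetime(2024, 1, 1, 12, 30), 210e6),
]
print(before_after_memory(samples, deploy))
# -> (180000000.0, 205000000.0): a non-zero "before" value from nowhere
```

If something like this is happening, it would explain the report of memory going from a non-zero number to a larger number on an app that didn't exist before the deploy.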