Staging migrations take over two hours to complete
Summary
Staging migrations take over two hours to complete
The deployment https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/225249 deploys the changes listed at https://gitlab.com/gitlab-org/security/gitlab/compare/160570ab8ca...caa77d48bc0. These changes include a migration that adds the index `index_ci_job_artifacts_id_for_terraform_reports`, introduced in merge request gitlab-org/gitlab!37498 (merged). The merge request notes that creating the index took about 30 minutes using our database lab, but according to https://gitlab.slack.com/archives/C101F3796/p1597241407197400?thread_ts=1597240293.195800&cid=C101F3796 it has so far taken over 1 hour and 40 minutes:
```
01:37:08.64127 | CREATE INDEX CONCURRENTLY "index_ci_job_artifacts_id_for_terraform_reports" ON "ci_job_artifacts"
```
About 126 minutes into the job I cancelled it. We need to revert these changes and find a way to run them on staging without taking this much time. Right now the risk of the migration running for a very long time on production is simply too great.
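Because `CREATE INDEX CONCURRENTLY` does not block writes, it can run for hours without obvious symptoms. A minimal sketch of how a long-running index build like this one could have been spotted earlier, using PostgreSQL's standard `pg_stat_activity` view (the 30-minute threshold is an illustrative choice, not part of the incident):

```sql
-- Hypothetical monitoring query, not taken from the incident itself:
-- list concurrent index builds that have been running longer than 30 minutes.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE query ILIKE 'CREATE INDEX CONCURRENTLY%'
  AND now() - query_start > interval '30 minutes'
ORDER BY runtime DESC;
```

A check along these lines could be run periodically during deployments so that a migration drifting far past its database-lab estimate is flagged before it has consumed two hours.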
Timeline
All times UTC.
2020-08-12
- 14:42 - yorickpeterse declares incident in Slack using the `/incident declare` command.
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Minutes downtime or degradation:
Metrics
Customer Impact
- Who was impacted by this incident? (i.e. external customers, internal customers)
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
- How many customers were affected?
- If a precise customer impact number is unknown, what is the estimated potential impact?
Incident Response Analysis
- How was the event detected?
- How could detection time be improved?
- How did we reach the point where we knew how to mitigate the impact?
- How could time to mitigation be improved?
Post Incident Analysis
- How was the root cause diagnosed?
- How could time to diagnosis be improved?
- Do we have an existing backlog item that would've prevented or greatly reduced the impact of this incident?
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, have you linked the issue which represents the change?