2021-04-20: Build job seems to be broken for Auto DevOps
Current Status
Due to a recent application change that updated a Docker image version, multiple customers are reporting build job failures when using Auto DevOps. The impact is currently limited to users who have Auto DevOps enabled and who are using their own Kubernetes runners.
As a workaround, create a .gitlab-ci.yml with the following override:
```yaml
build:
  stage: build
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.4.0"
  services:
    - docker:20.10.6-dind
```
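This pins the build job back to auto-build-image v0.4.0 with a matching docker-in-docker service, which should restore the pre-change build behaviour; once the fix described below is deployed, the override can be removed so projects pick up the default image again.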
We have reverted the application change that caused the issue, and plan to have a fix deployed by 2021-04-20 23:00 UTC.
Confidential data issue: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4289 (internal)
Summary for CMOC notice / Exec summary:
- Customer Impact: Customers using Auto DevOps
- Customer Impact Duration: 2021-04-20 10:52 UTC - 23:36 UTC ( 12 hours, 44 minutes )
- Current state: IncidentResolved
- Root cause: gitlab-org/gitlab!59525 (merged)
Timeline
View recent production deployment and configuration events (internal only)
All times UTC.
2021-04-20

- 12:52 - auto-build-image v0.6.0 lands in production: gitlab-org/gitlab!59525 (merged)
- 14:32 - @shemgyll declares incident in Slack.
- 14:43 - @igorwwwwwwwwwwwwwwwwwwww suspects the bug was introduced by gitlab-org/gitlab!59525 (merged) - #4288 (comment 556104710)
- 15:13 - @hfyngvason confirms the root cause is gitlab-org/gitlab!59525 (merged) - #4288 (comment 556146068). MR with the revert: gitlab-org/gitlab!59775 (merged)
- 15:56 - gitlab-org/gitlab!59775 (merged) is merged and picked into the current auto-deploy branch.
- 17:14 - Package with the revert MR is tagged: https://dev.gitlab.org/gitlab/omnibus-gitlab/-/tags/13.11.202104201714+05b933325de.1cd49c0388c
- 18:19 - Deployment to staging starts - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566489
- 19:13 - Deployment to staging finishes - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566489
- 19:21 - Deployment to canary starts - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566545
- 20:07 - Deployment to canary finishes - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566545
- 21:17 - Deployment to production starts - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566656
- 23:36 - Deployment to production finishes - https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/566656
Corrective Actions
Corrective actions should be put here as soon as an incident is mitigated; ensure that all corrective actions mentioned in the notes below are included.
- Controlled rollout/rollback of Auto DevOps images: gitlab-org/gitlab#329962 (closed)
- MR to add a Kubernetes runner smoke test (see the sketch after this list): gitlab-org/cluster-integration/auto-build-image!58 (merged)
- Add a rate-limited Auto DevOps pipeline metric (1 pipeline per project per unit of time): gitlab-org/gitlab#329114
- CI/CD Template Blueprint: gitlab-org/gitlab!59833 (closed)
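As a rough illustration of the smoke-test corrective action above, a job along the following lines could exercise a candidate auto-build-image against docker-in-docker on a Kubernetes runner before it is released. This is only a sketch: the job name, image tag, runner tag, and variables are assumptions, not the actual contents of auto-build-image!58.

```yaml
# Hypothetical smoke-test job (assumed names/values, not the auto-build-image!58 implementation):
# run the candidate image against docker-in-docker on a Kubernetes runner and fail fast
# if the Docker daemon is unreachable or the Auto DevOps build script breaks.
smoke-test:kubernetes:
  stage: test
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:$CI_COMMIT_SHA"
  services:
    - docker:20.10.6-dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # assumed non-TLS dind configuration
    DOCKER_TLS_CERTDIR: ""
  tags:
    - kubernetes                     # assumed tag routing the job to a Kubernetes runner
  script:
    - docker info                    # verifies the image can reach the Docker daemon
    - /build/build.sh                # the build entrypoint shipped in auto-build-image
```

Running such a job on every merge request to the auto-build-image project would have surfaced the incompatibility with customer-managed Kubernetes runners before the image reached production.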
Note: In some cases we need to redact information from public view. We only do this in a limited number of documented cases. This might include the summary, timeline or any other bits of information, as laid out in our handbook page. Any of this confidential data will be in a linked issue, only visible internally. By default, all information we can share will be public, in accordance with our transparency value.
Incident Review
Summary
- Service(s) affected:
- Team attribution:
- Time to detection:
- Minutes downtime or degradation:
Metrics
Customer Impact

- Who was impacted by this incident? (i.e. external customers, internal customers)
  - ...
- What was the customer experience during the incident? (i.e. preventing them from doing X, incorrect display of Y, ...)
  - ...
- How many customers were affected?
  - ...
- If a precise customer impact number is unknown, what is the estimated impact (number and ratio of failed requests, amount of traffic drop, ...)?
  - ...
- What were the root causes?
  - ...

Incident Response Analysis

- How was the incident detected?
  - ...
- How could detection time be improved?
  - ...
- How was the root cause diagnosed?
  - ...
- How could time to diagnosis be improved?
  - ...
- How did we reach the point where we knew how to mitigate the impact?
  - ...
- How could time to mitigation be improved?
  - ...
- What went well?
  - ...

Post Incident Analysis

- Did we have other events in the past with the same root cause?
  - ...
- Do we have existing backlog items that would've prevented or greatly reduced the impact of this incident?
  - ...
- Was this incident triggered by a change (deployment of code or change to infrastructure)? If yes, link the issue.
  - ...

Lessons Learned

- ...
Guidelines
Resources
- If the Situation Zoom room was utilised, the recording will be automatically uploaded to the Incident room Google Drive folder (private)