CI_JOB_STATUS=running in after_script when FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: "false"
Summary
We are using the GitLab SaaS server with gitlab-runner version 0.13.9. When we set FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: "false" to use the attach strategy instead of exec, CI_JOB_STATUS is always "running" in after_script, regardless of whether the job finished with success or failure. If we remove this flag, everything works as expected.
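To illustrate why this matters: a typical after_script branches on CI_JOB_STATUS to pick its cleanup path. The sketch below uses a hypothetical report_status helper (not part of our pipeline); with the attach strategy enabled, the "running" branch is the one that always fires:

```shell
# Hypothetical helper showing how after_script commonly consumes CI_JOB_STATUS.
report_status() {
  case "$1" in
    success) echo "cleanup for a successful job" ;;
    failed)  echo "cleanup for a failed job" ;;
    running) echo "status not finalized yet - the behavior reported here" ;;
    *)       echo "unknown status: $1" ;;
  esac
}

# In a real job GitLab exports CI_JOB_STATUS; default to "running" when unset.
report_status "${CI_JOB_STATUS:-running}"
```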
Steps to reproduce
Use the following pipeline to reproduce:
.gitlab-ci.yml
stages:
  - prepare

variables:
  FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: "false"

dummy_job:
  stage: prepare
  script:
    - echo "Hello World"
  after_script:
    - echo ${CI_JOB_STATUS}

dummy1_job:
  stage: prepare
  script:
    - exit 1
  after_script:
    - echo ${CI_JOB_STATUS}
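As a possible interim workaround (an untested sketch on our side, with hypothetical job names), the feature flag can be scoped per job via its own `variables:` block instead of being set globally, so that jobs which rely on CI_JOB_STATUS in after_script keep the legacy exec strategy:

```yaml
# Hypothetical workaround: enable the attach strategy only for jobs that
# do not inspect CI_JOB_STATUS in after_script.
attach_ok_job:
  stage: prepare
  variables:
    FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: "false"
  script:
    - echo "Hello World"

needs_status_job:
  stage: prepare  # no flag here, so the legacy exec strategy still applies
  script:
    - echo "Hello World"
  after_script:
    - echo ${CI_JOB_STATUS}
```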
Actual behavior
Executing "step_script" stage of the job script
00:00
$ exit 1
Running after_script
00:01
Running after script...
$ echo ${CI_JOB_STATUS}
running
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
Executing "step_script" stage of the job script
00:01
$ echo "Hello World"
Hello World
Running after_script
00:00
Running after script...
$ echo ${CI_JOB_STATUS}
running
Cleaning up file based variables
00:01
Job succeeded
Expected behavior
Executing "step_script" stage of the job script
00:00
$ exit 1
Running after_script
00:01
Running after script...
$ echo ${CI_JOB_STATUS}
failed
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
Executing "step_script" stage of the job script
00:01
$ echo "Hello World"
Hello World
Running after_script
00:00
Running after script...
$ echo ${CI_JOB_STATUS}
success
Cleaning up file based variables
00:01
Job succeeded
Relevant logs and/or screenshots
job log: identical to the output shown under "Actual behavior" above.
Environment description
We are using gitlab.com SaaS solution with custom gitlab-runner v0.13.9 running on GKE 1.18 cluster with preemptible nodes. Following flag set FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY: "false"
.
config.toml contents
[[runners]]
  output_limit = 16384
  environment = ["FF_GITLAB_REGISTRY_HELPER_IMAGE=1"]
  pre_clone_script = "echo '172.65.251.78 gitlab.com' >> /etc/hosts"
  [runners.kubernetes]
    image = "ubuntu:18.04"
    privileged = true
    service_account = "gitlab-runner-admin"
    poll_timeout = 1200
    cpu_limit = "2"
    cpu_limit_overwrite_max_allowed = "4"
    memory_limit = "2Gi"
    memory_limit_overwrite_max_allowed = "8Gi"
    cpu_request = "700m"
    cpu_request_overwrite_max_allowed = "3"
    memory_request = "1Gi"
    memory_request_overwrite_max_allowed = "5Gi"
    service_cpu_limit = "3"
    service_cpu_limit_overwrite_max_allowed = "4"
    service_memory_limit = "3Gi"
    service_memory_limit_overwrite_max_allowed = "8Gi"
    service_cpu_request = "700m"
    service_cpu_request_overwrite_max_allowed = "3"
    service_memory_request = "2Gi"
    service_memory_request_overwrite_max_allowed = "4Gi"
    helper_cpu_limit = "1"
    helper_cpu_limit_overwrite_max_allowed = "2"
    helper_memory_limit = "256Mi"
    helper_memory_limit_overwrite_max_allowed = "2Gi"
    helper_cpu_request = "100m"
    helper_cpu_request_overwrite_max_allowed = "1"
    helper_memory_request = "128Mi"
    helper_memory_request_overwrite_max_allowed = "1Gi"
    [runners.kubernetes.node_selector]
      node_pool = "gitlab-main"
    [runners.kubernetes.pod_annotations]
      "cluster-autoscaler.kubernetes.io/safe-to-evict" = "false"
    [[runners.kubernetes.dns_config.options]]
      name = "single-request-reopen"
    [[runners.kubernetes.dns_config.options]]
      name = "ndots"
      value = "2"
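For completeness: the same feature flag can also be set runner-wide through the existing `environment` list in config.toml rather than via pipeline variables. The second entry below is an illustrative assumption, not part of our current config:

```toml
[[runners]]
  environment = [
    "FF_GITLAB_REGISTRY_HELPER_IMAGE=1",
    # assumed addition: toggles the same flag for every job on this runner
    "FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=false"
  ]
```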
Used GitLab Runner version
v0.13.9
Possible fixes
TBD