after_script not working with self-hosted runners v15.9.1

Summary

With self-hosted runners v15.9.1, `after_script` appears not to run at all.

Steps to reproduce

We have a project that uses dotenv reports to save the `CI_JOB_ID` variable for use in a later job. In this project, the `after_script` section is not running:
```yaml
## Build anchors - arguments common to all build jobs
.common_build_args: &common_build_args
  artifacts:
    expose_as: null
  extends: .build candidate
  needs:
  - job: conftest
    artifacts: false
  - job: save docker artifacts
  tags: []

## Before script common to all build jobs
.common_build_before_script: &common_build_before_script
- !reference [.kaniko_config]
- export CANDIDATE_IMAGE="${CANDIDATE_IMAGE}-${CI_MERGE_REQUEST_IID:-${CI_COMMIT_SHORT_SHA}}-${CI_JOB_ID}"
- sed -i "s/UPSTREAM_VERSION/${CANDIDATE_VERSION}/g" "${BUILD_CONTEXT}"/Dockerfile
- sed -i "s/BC_FIPS_PROVIDER_VERSION/${BC_FIPS_PROVIDER_VERSION}/g" "${BUILD_CONTEXT}"/Dockerfile "${BUILD_CONTEXT}"/tools/cli/fips.cli
- sed -i "s/BC_FIPS_TLS_VERSION/${BC_FIPS_TLS_VERSION}/g" "${BUILD_CONTEXT}"/Dockerfile "${BUILD_CONTEXT}"/tools/cli/fips.cli

## Builds the new image for the test environment
build image test:
  <<: *common_build_args
  after_script:
  # This previously used "> ${build_id_file}" instead of "| tee test_job_id.env".
  # It was changed to confirm whether after_script runs at all; no output appeared,
  # so the problem is not "${build_id_file}" but the after_script section never running.
  - echo "TEST_BUILD_ID=${CI_JOB_ID}" | tee "test_job_id.env"
  artifacts:
    paths:
    - image/
    - test_job_id.env
  before_script:
  - *common_build_before_script
  tags:
  - test-runners
  variables:
    BUILD_CONTEXT: "${CI_PROJECT_DIR}/docker"
    CANDIDATE_ARTIF: test_image.tar
    KUBERNETES_MEMORY_LIMIT: "8Gi"
    build_id_file: test_job_id.env
    ADDITIONAL_KANIKO_ARGS: |
      --build-arg CACHEBUST="$(date)"
      --build-arg ARM_CLIENT_ID=${DEV_ARM_CLIENT_ID}
      --build-arg ARM_CLIENT_SECRET=${DEV_ARM_CLIENT_SECRET}
      --build-arg ARM_TENANT_ID=${DEV_ARM_TENANT_ID}
      --build-arg KEYVAULT_VAULT_NAME=secret-vault
      --build-arg KEYVAULT_KEYSTORE_PASSWORD_NAME=keystore-secret
      --build-arg KEYVAULT_TRUSTSTORE_PASSWORD_NAME=truststore-secret
```
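For triage, the symptom can be isolated with a minimal two-job pipeline, stripped of the project-specific anchors (job, stage, and file names below are illustrative, not from our project). If `after_script` runs, `repro.env` should be uploaded and the second job should print the variable:

```yaml
## Minimal reproduction sketch (illustrative names).
repro build:
  stage: build
  script:
  - echo "script ran"
  after_script:
  - echo "REPRO_ID=${CI_JOB_ID}" | tee repro.env
  artifacts:
    reports:
      dotenv: repro.env

repro consume:
  stage: deploy
  needs:
  - job: repro build
  script:
  - echo "REPRO_ID is ${REPRO_ID}"
```

On v15.9.1 runners the expectation is that `repro build` fails to upload `repro.env` because the `after_script` line never executes.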

```yaml
## The above jobs save a report and the build artifact together.
## We don't need nor want the build artifacts downloaded during the later jobs if only the report
## is needed to make a variable, such as during the deployment itself, so we have to separate them.
## The following jobs run after the build completes and save the artifacts and reports separately
## for use by only the jobs which need the specific artifact or report.

## Retrieves the artifacts from the build jobs and exports only the build IDs as dotenv reports
save build id reports:
  artifacts:
    reports:
      dotenv: job.env
  image: ${CI_REGISTRY}/${CI_PROJECT_ROOT_NAMESPACE}/<internal-paths>/alpine-image/alpine:stable
  needs:
  - job: build image test
  - job: build image dev
  - job: build image stg
  - job: build image prod
  script:
  - |
    cat *_job_id.env > job.env
  stage: build

## Anchor with common arguments for all save build artifact jobs
.save build artifacts common:
  artifacts:
    paths:
    - image/
  image: ${CI_REGISTRY}/${CI_PROJECT_ROOT_NAMESPACE}/<internal-paths>/alpine-image/alpine:stable
  script:
  - ':'
  stage: build

save test build artifact:
  extends: .save build artifacts common
  needs:
  - job: build image test
```
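For clarity, the merge that `save build id reports` performs can be sketched in plain shell: each build job writes one `*_job_id.env` dotenv file, and the job concatenates them into the single `job.env` report that later jobs consume. The job IDs below are hypothetical.

```shell
#!/bin/sh
# Sketch of the `save build id reports` script step. Each build job is
# expected to have written one *_job_id.env file as an artifact; we merge
# them into a single dotenv report. Hypothetical job IDs for illustration.
set -eu
workdir=$(mktemp -d)
cd "$workdir"
echo "TEST_BUILD_ID=1001" > test_job_id.env   # written by `build image test`
echo "DEV_BUILD_ID=1002" > dev_job_id.env     # written by `build image dev`
cat ./*_job_id.env > job.env                  # the job's actual script line
cat job.env
```

This is why the missing `test_job_id.env` is fatal: with no `*_job_id.env` files present, the glob does not expand to existing files and the `cat` fails, failing the job.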

Actual behavior

The `build image test` job uploads the artifacts from the `image/` path, but fails to find `test_job_id.env` to upload as an artifact. As a result, `save build id reports` has nothing to download and fails, which blocks pipeline deployment.

Expected behavior

`test_job_id.env` is uploaded as an artifact for the `save build id reports` job to download and use.

Relevant logs and/or screenshots

job log (screenshot attached; it shows no output from the `after_script` section)

Environment description

config.toml contents
```yaml
  config.toml: |
    ## Configure the maximum number of concurrent jobs
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    concurrent = 10
    ## Defines in seconds how often to check GitLab for new builds
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    check_interval = 30
    ## Configure GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    log_level = "info"

    ## Configure GitLab Runner's logging format. Available values are: runner, text, json
    ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
    ##
    log_format = "json"

    ## Configure integrated Prometheus metrics exporter
    ## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
    listen_address = "0.0.0.0:9252"

  config.template.toml: |
    ## Conquest custom config
    [[runners]]
      [runners.kubernetes]
        ## We need to setup a storage account and use that as the cache for use by the `cache` parameter in jobs:
        ## https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerscache-section
        cpu_request = "1"
        helper_cpu_request = "20m"
        helper_memory_limit = "450Mi"
        helper_memory_request = "100Mi"
        image = "redacted"
        memory_limit = "1Gi"
        memory_limit_overwrite_max_allowed = "redacted"
        memory_request = "200Mi"
        privileged = redacted
        pull_policy = "always"
        service_cpu_request = "200m"
        service_memory_limit = "1150Mi"
        service_memory_request = "400Mi"
        [[runners.kubernetes.volumes.empty_dir]]
          mount_path = "/builds"
          name = "builds"
        ## This is NOT related to the cache mentioned above - do not modify when setting up storage account
        [[runners.kubernetes.volumes.empty_dir]]
          mount_path = "/cache"
          name = "cache"
        [[runners.kubernetes.volumes.empty_dir]]
          mount_path = "/home/gitlab-runner"
          name = "home"
        [[runners.kubernetes.volumes.empty_dir]]
          mount_path = "/kaniko/.docker"
          name = "kaniko"
        [runners.kubernetes.affinity]
          [runners.kubernetes.affinity.node_affinity]
            [runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution]
              [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms]]
                [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms.match_expressions]]
                  key = "scope"
                  operator = "In"
                  values = ["infra"]
          [runners.kubernetes.affinity.pod_anti_affinity]
            [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution]]
            weight = 100
            [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term]
              topology_key = "kubernetes.io/hostname"
              [runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector]
                [[runners.kubernetes.affinity.pod_anti_affinity.preferred_during_scheduling_ignored_during_execution.pod_affinity_term.label_selector.match_expressions]]
                  key = "app"
                  operator = "In"
                  values = ["gitlab-runner-worker"]
        [runners.kubernetes.node_tolerations]
          "CriticalInfra=true" = "NoSchedule"
        [runners.kubernetes.pod_labels]
          "app" = "gitlab-runner-worker"
        [runners.kubernetes.pod_security_context]
          fs_group = redacted
        [runners.kubernetes.build_container_security_context]
          run_as_group = redacted
          [runners.kubernetes.build_container_security_context.capabilities]
            add = ["redacted"]
            drop = ["ALL"]
        [runners.kubernetes.helper_container_security_context]
          run_as_group = redacted
          [runners.kubernetes.helper_container_security_context.capabilities]
            add = ["redacted"]
            drop = ["ALL"]
        [runners.kubernetes.service_container_security_context]
          run_as_group = redacted
          [runners.kubernetes.service_container_security_context.capabilities]
            add = ["redacted"]
            drop = ["ALL"]
```

Used GitLab Runner version

v15.9.1 (self-hosted, Kubernetes executor)

Possible fixes

Worked around by removing the `after_script` section, since it turned out not to be a requirement for this use case.
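For pipelines that do still need the variable, one hedged alternative (not what we ultimately did) is to write the dotenv line at the end of `script:`, which is unaffected by the `after_script` behavior. Sketch, reusing the names from the job above:

```yaml
## Hypothetical workaround sketch: emit the dotenv line from script:
## instead of after_script:. The kaniko step stands in for the real
## build commands, which are elided in this report.
build image test:
  <<: *common_build_args
  script:
  - !reference [.kaniko_build]   # hypothetical anchor for the build steps
  - echo "TEST_BUILD_ID=${CI_JOB_ID}" | tee "test_job_id.env"
```

The trade-off is that the line only runs when the build steps succeed, whereas `after_script` would also run on failure.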

Edited by Thomas Spear