Environment variable CI_REGISTRY_PASSWORD not available in container ENTRYPOINT for Kubernetes runners
Summary
I've observed this behavior on our Alpine-based image, which runs the following as part of its docker-entrypoint.sh:
docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
This step succeeds on our docker-machine-based runners but fails on our k8s setup. While CI_REGISTRY_USER and CI_REGISTRY resolve to their respective values on the k8s runners, CI_REGISTRY_PASSWORD resolves to an empty string (instead of [MASKED]). This was verified by echoing the values of all three environment variables from the docker-entrypoint.sh script.
The value of CI_REGISTRY_PASSWORD can only be read correctly from the CI script itself, where it is shown as [MASKED].
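The echo test described above can be sketched like this (a diagnostic helper, not our exact script; `check_var` is a hypothetical name, and it prints only the length of each value to avoid leaking the secret):

```shell
#!/bin/sh
# Diagnostic sketch for docker-entrypoint.sh: report whether each CI
# variable is populated, without printing the secret value itself.
check_var() {
  # indirect expansion via eval, since POSIX sh has no ${!name}
  eval "value=\${$1:-}"
  if [ -n "$value" ]; then
    echo "$1 is set (length ${#value})"
  else
    echo "$1 is EMPTY"
  fi
}

for v in CI_REGISTRY_USER CI_REGISTRY CI_REGISTRY_PASSWORD; do
  check_var "$v"
done
```

On the Kubernetes runners this reports `CI_REGISTRY_PASSWORD is EMPTY` while the other two variables are set.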
Steps to reproduce
Run jobs with a customized image as described in the summary, set FF_KUBERNETES_HONOR_ENTRYPOINT to true, and access the environment variable CI_REGISTRY_PASSWORD in your Docker ENTRYPOINT script.
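A minimal image along these lines should reproduce it (base image and script path are illustrative; the entrypoint script is the one from the summary):

```dockerfile
# Illustrative reproduction image: the entrypoint echoes the CI
# variables and runs the docker login shown in the summary.
FROM alpine:3.18
RUN apk add --no-cache docker-cli
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
```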
Actual behavior
Environment variable $CI_REGISTRY_PASSWORD is empty in the entrypoint of a job run on a Kubernetes runner.
Expected behavior
$CI_REGISTRY_PASSWORD contains its per-job value.
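Until this is resolved, an entrypoint can at least fail fast instead of attempting a login with an empty password. A sketch, using a hypothetical `require_env` helper:

```shell
#!/bin/sh
# Fail-fast guard sketch: verify a variable is non-empty before using
# it, so an empty CI_REGISTRY_PASSWORD surfaces as a clear error
# instead of a cryptic docker login failure.
require_env() {
  eval "_val=\${$1:-}"
  if [ -z "$_val" ]; then
    echo "ERROR: required variable $1 is empty" >&2
    return 1
  fi
}

# usage in docker-entrypoint.sh (login line as in the summary):
#   require_env CI_REGISTRY_PASSWORD || exit 1
#   docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
```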
Relevant logs and/or screenshots
Output from the ECS setup: the file /tmp/out1 is written during execution of docker-entrypoint.sh and contains the docker login command mentioned in the summary.
Output from the k8s-based setup:
Environment description
Kubernetes Executor
config.toml contents
[[runners]]
  builds_dir = "/build"
  environment = [
    "FF_USE_ADVANCED_POD_SPEC_CONFIGURATION=true",
    "FF_KUBERNETES_HONOR_ENTRYPOINT=true",
    "REGISTRY_HTTP_HOST=http://a-gitlab-runner-image-pull-through-cache:5000",
    "DOCKER_HOST=tcp://127.0.0.1:3001",
    "TESTCONTAINERS_RYUK_DISABLED=true",
    "DOCKER_TLS_CERTDIR=",
    "DOCKER_TLS=",
    "ARTIFACTORY_USER=gitlab-ci",
    "A_REGISTRY_MIRROR=nexxiot-registry-oci.jfrog.io",
  ]
  executor = "kubernetes"
  [runners.cache]
    Type = "s3"
    Path = "gitlab-runner-ops-ci-amd"
    Shared = false
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "a-gitlab-runner-cache-cd-prd-a.eu-central-1"
      BucketLocation = "eu-central-1"
      Insecure = false
  [runners.kubernetes]
    image = "a-registry-oci.jfrog.io/alpine:3.18"
    privileged = true
    # running rootless buildah/podman proved unsuccessful so far
    cap_add = ["SYS_ADMIN"]
    # performance impact, might change
    pull_policy = "always"
    allowed_pull_policies = ["always", "if-not-present"]
    [runners.kubernetes.init_permissions_container_security_context]
    [runners.kubernetes.helper_container_security_context]
    [runners.kubernetes.build_container_security_context]
    [runners.kubernetes.pod_security_context]
    cpu_limit_overwrite_max_allowed = "true"
    cpu_request_overwrite_max_allowed = "true"
    cpu_limit = "32"
    cpu_request = "8"
    memory_limit_overwrite_max_allowed = "true"
    memory_request_overwrite_max_allowed = "true"
    memory_limit = "128Gi"
    memory_request = "16Gi"
    ephemeral_storage_limit_overwrite_max_allowed = "true"
    ephemeral_storage_request_overwrite_max_allowed = "true"
    ephemeral_storage_limit = "32Gi"
    ephemeral_storage_request = "4Gi"
    [[runners.kubernetes.services]]
      name = "registry.gitlab.com/a-ag/sre/cd-k8s/a-docker:dind"
      alias = "docker"
    [[runners.kubernetes.pod_spec]]
      name = "ephemeral-build-pvc"
      patch = '''
        containers:
        - name: build
          volumeMounts:
          - name: build
            mountPath: /build
        - name: helper
          volumeMounts:
          - name: build
            mountPath: /build
        volumes:
        - name: build
          ephemeral:
            volumeClaimTemplate:
              spec:
                storageClassName: "gp"
                accessModes: [ ReadWriteOnce ]
                resources:
                  requests:
                    storage: "512Gi"
      '''
    [[runners.kubernetes.pod_spec]]
      name = "environment-secrets"
      patch = '''
        containers:
        - name: build
          envFrom:
          - secretRef:
              name: gitlab-runner-ops-ci-amd-sealed-environment
          env:
          - name: A_REGISTRY_MIRROR_USER
            valueFrom:
              secretKeyRef:
                name: a-jfrog-io-v1
                key: username
          - name: A_REGISTRY_MIRROR_PASSWORD
            valueFrom:
              secretKeyRef:
                name: a-jfrog-io-v1
                key: password
      '''
Used GitLab Runner version
Based on Helm chart v0.55, running with gitlab-runner 16.2.0 (782e15da).
Possible fixes
No ideas so far...
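One possible stopgap rather than a fix: since the variable does resolve in the CI script context, the login could be moved out of the ENTRYPOINT into before_script (sketch, untested on our setup):

```yaml
# .gitlab-ci.yml workaround sketch: perform the registry login in the
# job script context, where CI_REGISTRY_PASSWORD is populated.
default:
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
```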