Running on runner-db5fe54a-project-114-concurrent-0 via clerico2...
Fetching changes...
HEAD is now at 0baad61 lala
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab.siu.edu.ar/lmartini/unidad-venta.git/': SSL certificate problem: unable to get issuer certificate
ERROR: Job failed: exit code 1
The workaround I found was to disable SSL verification in config.toml, but it feels dirty:
environment = ["GIT_SSL_NO_VERIFY=true"]
Was there any change to the SSL or CA handling inside the docker runner?
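For reference, a rough sketch of the certificate-based setup that later comments in this thread mention (a tls-ca-file on the runner) rather than disabling verification; the runner name, executor, and certificate path are assumptions for illustration only:

[[runners]]
  name = "example-docker-runner"        # hypothetical name
  url = "https://gitlab.siu.edu.ar/"
  executor = "docker"
  # point the runner at the CA chain that issued the GitLab certificate (assumed path)
  tls-ca-file = "/etc/gitlab-runner/certs/gitlab.siu.edu.ar.crt"
  [runners.docker]
    image = "docker:stable"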
First of all, thank you for raising an issue to help improve the GitLab Runner product. We're sorry about this, but this particular issue has gone unnoticed for quite some time. To establish order in the GitLab-Runner Issue Tracker, we must ensure that every issue is correctly labelled and triaged, to get the proper attention.
We are automatically labelling this issue for attention as it meets the following criteria:
No activity in the past 3 months
Unlabelled
We'd like to ask you to help us out and determine whether this issue should stay open or be closed.
If this issue is reporting a bug, could you please attempt to reproduce it on the latest version of the runner, to help us understand whether the bug still needs our attention.
If this issue is proposing a new feature, could you please verify whether the feature proposal is still relevant.
Same issue, but bear in mind that I do use a legitimate certificate from Let's Encrypt, so I shouldn't need to specify that environment variable. Also, it happens not only for docker runners, but also for shell runners.
Please note that I tried cloning manually with the gitlab-ci-token and the project token as well as the runner token; both failed unless I changed the project permissions to public. The error in this case is:
Cloning into 'project'...
Password for 'https://gitlab-ci-token@example.com':
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://gitlab-ci-token@example.com/group/project/'
Also, please note that a similar issue has been raised in the GitLab community project: #39469
Same issue here. Could we get some attention for this, please?
In my case, I used the official Helm chart and followed the instructions for custom TLS certs. It definitely supplies the cert to the gitlab-runner pod, as it successfully registers - and then git clone fails during builds with fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@REDACTED.com/infrastructure/prometheus-kubernetes.git/': SSL certificate problem: unable to get local issuer certificate.
It appears as if the certificate isn't being provided to the git executable in the pods created by gitlab-runner.
Tested with the build images lwolf/helm-kubectl-docker:v152_213, busybox and docker:dind with the same results.
The issue is still present in GitLab 10.3.3 and gitlab-runner 10.3.0. The only workaround for this problem is to switch off SSL verification on the build machine (please note that I'm using production Let's Encrypt certificates even on my internal network):
Open /var/lib/gitlab-runner/.gitconfig (or whatever the home directory for your gitlab-runner user is)
Add this:
[http]
    sslVerify = false
Open /etc/gitlab-runner/config.toml and add the following to each runner:
Add tls_verify = false to each [[runners]] or [runners.*] section
Add environment = ['GIT_SSL_NO_VERIFY=true'] to the [[runners]] or [runners.docker] section (a consolidated sketch follows below)
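A minimal sketch of what the two files look like with this workaround applied; the runner name, URL, and image are placeholders:

# /var/lib/gitlab-runner/.gitconfig
[http]
    sslVerify = false

# /etc/gitlab-runner/config.toml
[[runners]]
  name = "example-runner"                    # placeholder
  url = "https://gitlab.example.com/"        # placeholder
  executor = "docker"
  environment = ["GIT_SSL_NO_VERIFY=true"]
  [runners.docker]
    tls_verify = false                       # as per the steps above
    image = "docker:stable"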
SSL issues are very serious ones, and disabling SSL verification is not a production option; however, this has been the only solution for the past month or so.
I'm seeing this with gitlab-runner 10.3.0 on Windows when attempting to pull a submodule (which uses relative URLs and thus gets translated to https://gitlab-ci-token:REDACTED@gitlab.aedev.com/group/project.git). The .git/config looks correct, with
I was able to work around this issue by adding the troublesome certificate to my trusted store on CentOS 7. If you are using a non-RedHat-based OS, your process will be different. I would discourage this approach unless you fully trust the server host and owner.
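For reference, a sketch of that CentOS 7 procedure, assuming the GitLab server (or CA) certificate has already been saved locally; the filename is a placeholder:

# copy the certificate into the system trust anchors
sudo cp gitlab.example.com.crt /etc/pki/ca-trust/source/anchors/
# rebuild the consolidated trust store used by openssl, curl, and git
sudo update-ca-trust extract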
First of all, thank you for raising an issue to help improve the GitLab Runner product. We're sorry about this, but this particular issue has gone unnoticed for quite some time. To establish order in the GitLab-Runner Issue Tracker, we must ensure that every issue is correctly labelled and triaged, to get the proper attention.
We are automatically labelling this issue for attention as it meets the following criteria:
No activity in the past 3 months
Not labeled as ~bug or ~"feature proposal"
We'd like to ask you to help us out and determine whether this issue should stay open or be closed.
If this issue is reporting a bug, could you please attempt to reproduce it on the latest version of the runner, to help us understand whether the bug still needs our attention.
If this issue is proposing a new feature, could you please verify whether the feature proposal is still relevant.
I'm getting this error as well. I noticed that $CI_SERVER_TLS_CA_FILE has the same content as $CI_SERVER_TLS_CERT_FILE. My assumption is that CI_SERVER_TLS_CA_FILE should be the root CA trust cert. If git expects CI_SERVER_TLS_CA_FILE to be the root cert, then I understand why it is failing.
Running with gitlab-runner 11.3.1 (0aa5179e)...
Cloning repository...
Cloning into '/builds/root/test'...
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab.local/root/test.git/': SSL certificate problem: unable to get local issuer certificate
The gitlab-runner behavior of injecting sslCAInfo into the git configuration is broken and does not work for submodules, as I've explained in #3497. If you're seeing this issue when attempting to clone submodules on non-docker runners, it's almost certainly because of #3497.
Please, when commenting, be sure to explain what type of executor you're using (e.g. shell, docker).
My issue is not with submodules, but with any git clone. I am using an internal CA and client certificates.
Running with gitlab-runner 11.3.1 (0aa5179e) on d021a88cd072 a1e262dc
Using Docker executor with image docker:stable ...
Pulling docker image docker:stable ...
Using docker image sha256:321f2cfcc3432bf7c18ee541c4cc4402d48156c1c7150f76026a2d3772369e89 for docker:stable ...
...
Cloning repository...
Cloning into '/builds/root/test'...
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab.local/root/test.git/': SSL certificate problem: unable to get local issuer certificate
I noticed $CI_SERVER_TLS_CA_FILE has the same content as $CI_SERVER_TLS_CERT_FILE. My assumption is that CI_SERVER_TLS_CA_FILE should be the root CA trust cert; if git expects it to be the root cert, then I understand why it is failing. From #3497 it does look like CI_SERVER_TLS_CA_FILE should be the CA root cert, but in my testing it is the client cert of the runner. (A quick way to check this is sketched below.)
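Run from a job script on the affected runner, assuming an image that has diff and openssl available:

# compare the two files the runner writes into the build directory
diff "$CI_SERVER_TLS_CA_FILE" "$CI_SERVER_TLS_CERT_FILE" && echo "CA file and cert file are identical"
# inspect which certificate the CA file actually contains
openssl x509 -in "$CI_SERVER_TLS_CA_FILE" -noout -subject -issuer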
This is also happening for us with an internal root and intermediate CA
We do not wish to resort to GIT_SSL_NO_VERIFY to work around this
Our specified tls-ca-file has been tested with openssl s_client to verify it is correct (a verification sketch follows the log output below)
We are using image docker:dind
Running with gitlab-runner 11.3.1 (0aa5179e) on C0-AJWSLDEV01-GR-10.1.0 b7911857
Using Docker executor with image microsoft/dotnet:2.1-sdk ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:943cc2194c118472a134b2fee0bb7144c1c62ca415ff030d0cc00d43b81e29f7 for docker:dind ...
Waiting for services to be up and running...
Pulling docker image microsoft/dotnet:2.1-sdk ...
Using docker image sha256:6a9b6788c55f953e297e5c682378381398700732f376ba2b881edf66db93c344 for microsoft/dotnet:2.1-sdk ...
Running on runner-b7911857-project-422-concurrent-0 via ajwsldev01...
Fetching changes...
HEAD is now at ea720fc Update .gitlab-ci.yml
fatal: unable to access 'http://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@src.europe.ajw/ajw-microservices/ADSProcessor.git/': SSL certificate problem: unable to get local issuer certificate
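For reference, the kind of openssl s_client check mentioned above, with placeholder hostname and CA file path:

# verify the chain the GitLab server presents against the CA bundle handed to the runner
openssl s_client -connect gitlab.example.com:443 -servername gitlab.example.com \
  -CAfile /etc/gitlab-runner/certs/gitlab.example.com.crt </dev/null
# a correct chain ends with: Verify return code: 0 (ok)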
So it turns out the issue was caused by changing the GitLab external_url to https and running a reconfigure: the change did not show up in the UI when we looked at the clone URLs (they still showed http), and a full restart of the server was required.
Therefore the URLs being fetched by the runner's git were http:// and the certificate check was failing; the clue was in the error line.
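For anyone hitting the same thing, a sketch of that fix on an Omnibus install, with a placeholder hostname:

# /etc/gitlab/gitlab.rb
external_url 'https://gitlab.example.com'

# apply the change, then fully restart so the clone URLs actually switch to https
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart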
Same here. Had to inject our root certificate for the runner and make git aware of it; I believe the next step that failed was pushing to the registry - the registry didn't manage to communicate with the storage backend (whatever that is by default in the GitLab Helm chart), so we had to also tell the registry to please trust our root CA.
Does that sound like a rabbit hole yet? Personally, I expect that we have to go deeper...
@jeffcook @mikebolland It looks like you guys have a bug that is specific to client certificates. That might warrant opening a new issue that specifically mentions client certificates in the title.
Also getting this error on the Kubernetes executor, with the gitlab-runner Helm chart:
Using a self-signed certificate;
Created a k8s secret with the certificate;
Referenced the secret in the Helm values.yml (see the sketch below)
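Roughly, the last two steps, assuming the chart's certsSecretName value; the namespace, secret name, hostname, and path are placeholders:

# create a secret whose key matches the GitLab hostname
kubectl create secret generic gitlab-runner-certs \
  --namespace gitlab-runner \
  --from-file=gitlab.example.com.crt=/path/to/gitlab.example.com.crt

# values.yml for the gitlab-runner chart
certsSecretName: gitlab-runner-certs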
I'm having the same issue. Just set up a new cluster (1.14.1) with GitLab 11.11.
The cluster is protected by Cloudflare which seems to break the built-in kubernetes runner (can't finish pushing Docker image), so I registered a privileged Docker runner on the local machine with some /etc/hosts entries to try to get around it, and that's when I ran into this. The certificate seems to be valid. (It's issued through the included cert manager)
edit: oops, my /etc/hosts was pointing gitlab.mydomain.com to 127.0.0.1. Needed to point it to the node's externalIP, in other words, the same IP as I put for my wildcard DNS.
Also seeing the same issue after upgrading to 12.4; rolled back the client gitlab-runner to 12.20 in order to get it working again. Any info on what is causing this and/or whether an update is in the works?
I got the problem below with runners.docker: fatal: unable to access 'https://gitlabs.private-repo.com/my-project/app.git/': SSL certificate problem: unable to get local issuer certificate
Solved by adding the pre_clone_script line shown further below to [[runners]] in /etc/gitlab-runner/config.toml. The debugging steps that led there:
Added the environment setting below to config.toml and restarted gitlab-runner: environment = ["GIT_CURL_VERBOSE=1"]
Ran the pipeline; it failed with the output below:
Running with gitlab-runner 13.1.0 (6214287e) on My Docker Runner GNWEPzV1
Preparing the "docker" executor
Using Docker executor with image myDockerHub/myImage:latest ...
Authenticating with credentials from /root/.docker/config.json
Pulling docker image myDockerHub/myImage:latest ...
Using docker image sha256:b3c3748c9844567d6584de6d957e249013effd7536fbdd848f73789bd2b09020 for myDockerHub/myImage:latest ...
Preparing environment
Running on runner-gnwepzv1-project-64-concurrent-0 via myLinuxHostname.localdomain...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/my-project/app/.git/
* Trying 192.168.1.1:443...
* TCP_NODELAY set
* Connected to gitlabs.private-repo.com (192.168.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /builds/my-project/app.tmp/CI_SERVER_TLS_CA_FILE
    CApath: none
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
fatal: unable to access 'https://gitlabs.private-repo.com/my-project/app.git/': SSL certificate problem: unable to get local issuer certificate
ERROR: Job failed: exit code 1
Started a container with alpine:latest and added git with apk add git
Added the SSL CA certificate path to the git config with the command below: git config --global http.https://gitlabs.private-repo.com/.sslcainfo /path/to/private-repo.ca.crt
Tried to clone the same private GitLab repo: GIT_CURL_VERBOSE=1 git clone https://gitlabs.private-repo.com/my-project/app.git. The clone succeeded on alpine 3.12.
So I added the line below to the [[runners]] section of config.toml on the gitlab-runner machine: pre_clone_script = "sed -i 's/3.10/3.12/g' /etc/apk/repositories && apk update && apk upgrade"
Restarted the gitlab-runner service: systemctl restart gitlab-runner
Ran the pipeline again; it passed.
Suspecting this may be due to the build configuration of libcurl, ssl_client, or libTLS in the Alpine 3.10 base of the gitlab-runner-helper image.
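For context, a sketch of where that workaround sits in config.toml; the runner name, URL, and image are taken from the log above, and the rest of the section is assumed:

[[runners]]
  name = "My Docker Runner"
  url = "https://gitlabs.private-repo.com/"
  executor = "docker"
  # runs before the clone, upgrading the Alpine packages in the container that performs the clone
  pre_clone_script = "sed -i 's/3.10/3.12/g' /etc/apk/repositories && apk update && apk upgrade"
  [runners.docker]
    image = "myDockerHub/myImage:latest"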
Kind of strange stuff happened here as well, after upgrading from an insecure http instance to an https one...
What I saw: everything worked more or less fine on shell runners, while all docker runners behaved strangely.
Started using the environment = ['GIT_SSL_NO_VERIFY=true'] approach.
This solved the clone issue, but made me really mad.
Additionally, no artefact upload was possible (X509, self-signed).
I checked my certs, all went well. Communication to the server was fine.
I spent "hours" of debugging and going nuts...
I finally made an observation: it seems the clone and artefact steps are performed by a docker container that does not have the SSL certificates.
My solution was to replace the gitlab-runner installed on the machine itself with the docker image, mapping the host's /etc/ssl/certs into that container (see the sketch below).
This solved the issue completely.
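A sketch of running the runner that way, based on the usual docker invocation; the config and socket mounts are the common defaults and may differ in your setup:

docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/ssl/certs:/etc/ssl/certs:ro \
  gitlab/gitlab-runner:latest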
My 2 cents...
The docker executor does all of its work using docker containers. Could this lead to missing certs in the running instance?
Shouldn't gitlab-runner map the SSL folder (at least the Linux default) into the container?
I encountered this as well. The thing is, it worked for me for ages, since day one, but suddenly starting yesterday, it no longer works. My certificate is from Let's Encrypt and it is valid until Dec 7 2021. If I run the same git clone command on my local linux box, it works, just not in the runner. What is wrong?
This issue has shown up in our system as well. We have an Ultimate Omnibus installation running the latest 14.3.2 release. The SSL cert chain has been verified and is up to date. No problems pulling down the repo locally, but within the runners the repo cannot be cloned: server certificate verification failed. CAfile: none CRLfile: none. We have used the following runner implementations, all ending with the same result: Fargate with version 14.3.0, Kubernetes 14.2.0 (Helm install), and Ubuntu 14.3.2.
Please raise a support ticket. I'm not sure what's going on there, as I can't believe none of those environments support the new Let's Encrypt root, and you're not getting an error about an expired certificate.
If it's just secrets detection, and no other jobs, then it seems unlikely to be the runner, unless you run all your secrets detection (and nothing else) on a specific runner server.
Seems more like a problem with a certificate store in the container image for secrets-detection.