We tell our users to install custom SSL certificates into /opt/gitlab/embedded/ssl/certs/. These certificates then get picked up by everything in omnibus-gitlab that uses OpenSSL.
However, we also have some Go programs in omnibus that use Go's own crypto/tls library instead of OpenSSL (e.g. gitlab-workhorse, see gitlab-workhorse#177 (closed)). These programs will ignore /opt/gitlab/embedded/ssl/certs/.
It turns out we can tell crypto/tls about /opt/gitlab/embedded/ssl/certs/ by setting SSL_CERT_DIR=/opt/gitlab/embedded/ssl/certs/. I suggest that we go through all our Runit services that spawn Go programs and add this setting to the default env hash. It has been reported in gitlab-workhorse#177 (closed) that this works.
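For services that expose (or would expose) an env hash in gitlab.rb, the end result would look something like the sketch below; the attribute names (gitlab_workhorse['env'], gitlab_pages['env']) are my assumption of how it would be spelled, not confirmed settings:

# /etc/gitlab/gitlab.rb -- sketch only; attribute names assumed, not confirmed
# Point the Go services at the same certificate directory that OpenSSL already uses.
gitlab_workhorse['env'] = { 'SSL_CERT_DIR' => '/opt/gitlab/embedded/ssl/certs/' }
gitlab_pages['env'] = { 'SSL_CERT_DIR' => '/opt/gitlab/embedded/ssl/certs/' }

The intent is that gitlab-ctl reconfigure would then render this setting into each runit service's environment, and that we would ship it as a default so users don't have to set it by hand.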
I did notice that the Pages service template was written so that it explicitly sets SSL_CERT_FILE in the exec line of the generated runit run script. We ran into this at some point in the past and didn't address it in a more general way.
I found the following binaries to be Golang based, based on:
for X in embedded/bin/* bin/* ; do strings "$X" | grep -q GOROOT && echo "$X" ; done
@gitlab-org/distribution Is there a central location to populate xx['env'] values? I'll have all the individual attributes/default.rb populated with an MR shortly, but this thought crossed my mind.
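For context, the per-service entries I've been adding look roughly like this; the attribute paths are illustrative of the shape, not copied verbatim from the cookbook:

# attributes/default.rb -- illustrative attribute paths
default['gitlab']['gitlab-workhorse']['env'] = { 'SSL_CERT_DIR' => '/opt/gitlab/embedded/ssl/certs/' }
default['gitlab']['gitlab-pages']['env'] = { 'SSL_CERT_DIR' => '/opt/gitlab/embedded/ssl/certs/' }

It would be nicer to derive the install prefix and the cert path from a single place instead of repeating them per service, hence the question about a central location.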
Status update: I had originally made the appropriate changes to the default attributes. I then realized that a portion of the cookbooks had no support for actually configuring the environment variables. I am in the process of making and propagating the necessary changes throughout.
I think we currently have this issue - we updated our SSL certificate yesterday.
As always, we placed the private key my.domain.com.key and the public certificate (including its full chain of trust) as my.domain.com.crt.bundle into /etc/gitlab/ssl.
First of all: is /etc/gitlab/ssl (still?) the right directory? The documentation says so, but here I'm reading about /etc/gitlab/trusted_certs. That's really confusing.
Now to the actual issue: even though SSL works for web access and regular git access, git lfs doesn't work anymore. Instead we get this message:
Downloading MyFile.bin (248 KB) Error downloading object: MyFile.bin (e0fa9d5): Smudge error: Error downloading MyFile.bin (e0fa9d50aced312283bfc24b9f6f98c6b243b7e169e7951fa4ab19b16c207e78): batch response: Post https://my.domain.com/myGroup/myProject.git/info/lfs/objects/batch: x509: certificate signed by unknown authority
We then made the corresponding changes to our gitlab.rb and reconfigured everything, but still get the error above.
Right now we can't use git lfs anymore, which is a real problem for us. I'm really grateful for all the work put into GitLab, but the documentation with regard to SSL isn't great. It seems extensive, but at least from our perspective it's very hard to understand, and it's unclear where we need to put our files.
Would you mind helping us with this issue? I think we stuck to the documentation and the information provided in this issue, but git lfs is still broken for now.
@ChristianSteffens This error you're seeing, is it on the client side or the server side? The addition of SSL_CERT_DIR to the server processes relates to the configuration of trusted certificates on the server, and will not have any impact on the clients.
Your clients will need appropriate configuration of their trusted certificate location(s), according to their operating system.
Well that's embarrassing - somewhere in the process of installing the certificates I removed the cacert.pem store from GitLab. Fixing that, everything works perfectly. Sorry for the false alarm.
Sorry that this note isn't completely detailed, but I was just setting up a GitLab instance with on-prem MinIO storage for uploads and ran into this error.
==> /var/log/gitlab/gitlab-workhorse/current <==
{"error":"RequestError: send request failed\ncaused by: Put \"https://minio.example.com:9000/gitlab-uploads/tmp/uploads/1638135884-359806-0001-2905-5f2da4c5d7928704c32b7e2b36b8a189\": x509: certificate signed by unknown authority","level":"error","msg":"error uploading S3 session","time":"2021-11-28T15:44:45-06:00"}
{"correlation_id":"01FNM83V2W3MBVXVKGCMXGD93K","error":"handleFileUploads: extract files from multipart: persisting multipart file: Put \"https://minio.example.com:9000/gitlab-uploads/tmp/uploads/1638135884-359806-0001-2905-5f2da4c5d7928704c32b7e2b36b8a189\": x509: certificate signed by unknown authority","level":"error","method":"POST","msg":"","time":"2021-11-28T15:44:45-06:00","uri":"/root/test-project-1001/uploads"}
{"content_type":"text/plain; charset=utf-8","correlation_id":"01FNM83V2W3MBVXVKGCMXGD93K","duration_ms":609,"host":"gitlab.example.com","level":"info","method":"POST","msg":"access","proto":"HTTP/1.1","referrer":"https://gitlab.example.com/root/test-project-1001/-/merge_requests/1","remote_addr":"","remote_ip":"","route":"^/([^/]+/){1,}[^/]+/uploads\\z","status":500,"system":"http","time":"2021-11-28T15:44:45-06:00","ttfb_ms":608,"uri":"/root/test-project-1001/uploads","user_agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36","written_bytes":22}
I did a little head scratching and deduced that this issue might be related. Well, it turns out I had to stop and start GitLab to make it work.
Perhaps this is such an edge case that it should just be added to the manual here? Or maybe it is worth reopening this bug.
I am not in a position to reproduce this bug today, and I am by no means yet a GitLab expert or developer, but I think in this case the reconfigure command needs to send a signal to the workhorse to reload the certs. It would appear that the certificates for workhorse remain stuck in memory through a reconfigure. Or maybe I'm losing my mind.
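For illustration only, the kind of wiring I have in mind would be something along these lines in whatever recipe installs the trusted certificates; the resource names are made up to show the idea, not taken from the actual omnibus-gitlab cookbook:

# Sketch: rehash the trusted certificates and notify the workhorse runit
# service, so a reconfigure that changes certificates also restarts
# workhorse instead of leaving the old certs loaded in memory.
# A real implementation would guard this so the restart only happens
# when the certificates actually changed.
execute 'c_rehash trusted certs' do
  command '/opt/gitlab/embedded/bin/c_rehash /opt/gitlab/embedded/ssl/certs/'
  notifies :restart, 'runit_service[gitlab-workhorse]', :delayed
end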