SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)

First of all, I don't think this is a bug or a problem in GitLab. It is most likely a misconfiguration in my system or something I am doing wrong. I would just like some guidance on what to look for.

My GitLab is running in Proxmox:LXC:Ubuntu 18.04 LTS:GitLab-ce 11.11.3 Omnibus. I set up Kubernetes in Windows 10:VMware Player:Linux Mint 18.2 using MicroK8s, as it is recommended for systems with less than 32GB of RAM. I assigned my Mint VM 8GB of RAM.

I am using two systems, both resolvable on my LAN by their hostnames: GitLab.BigHouse and Asus-ROG-VM.BigHouse.

sudo -u git -H gitlab-rake gitlab:env:info

System information
System:		Ubuntu 18.04
Current User:	git
Using RVM:	no
Ruby Version:	2.5.3p105
Gem Version:	2.7.9
Bundler Version:1.17.3
Rake Version:	12.3.2
Redis Version:	3.2.12
Git Version:	2.21.0
Sidekiq Version:5.2.7
Go Version:	unknown

GitLab information
Version:	11.11.3
Revision:	e3eeb779d72
Directory:	/opt/gitlab/embedded/service/gitlab-rails
DB Adapter:	PostgreSQL
DB Version:	9.6.11
URL:		http://GitLab.BigHouse
HTTP Clone URL:	http://GitLab.BigHouse/some-group/some-project.git
SSH Clone URL:	git@GitLab.BigHouse:some-group/some-project.git
Using LDAP:	no
Using Omniauth:	yes
Omniauth Providers: 

GitLab Shell
Version:	9.1.0
Repository storage paths:
- default: 	/home/git/repositories
GitLab Shell path:		/opt/gitlab/embedded/service/gitlab-shell
Git:		/opt/gitlab/embedded/bin/git

What I am trying to do is connect GitLab CI/CD to the Kubernetes cluster in the VMware Mint VM. I have been using these pages to guide me.

The MicroK8s kube-apiserver is set up by default with a self-signed certificate with CN=127.0.0.1, so it is only valid for local API calls. I used curl from GitLab.BigHouse to connect to the API at https://Asus-ROG-VM.BigHouse:16443 and got an SSL error, as expected.
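
To confirm which certificate the API server is presenting, a check along these lines can be used (purely illustrative):

# Show the subject and issuer of the certificate served on the API port
echo | openssl s_client -connect Asus-ROG-VM.BigHouse:16443 2>/dev/null | openssl x509 -noout -subject -issuer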

I used this guide to create a self-signed certificate with CN=Asus-ROG-VM.BigHouse: Creating TLS certificates using Kubernetes API.
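
Something equivalent could also be produced directly with openssl; this is only a sketch with placeholder file names, not the steps from that guide:

# Sketch: generate a self-signed certificate whose CN is the API server's hostname
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout apiserver.key -out apiserver.crt \
  -subj "/CN=Asus-ROG-VM.BigHouse"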

I set the new certificate in the kube-apiserver arguments via --tls-cert-file and --tls-private-key-file in /var/snap/microk8s/current/args/kube-apiserver, then restarted MicroK8s so the new certificate would take effect. I ran a few kubectl commands and it still seems able to interact with the API locally.
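
For reference, the change amounts to something like this (the certificate and key paths are placeholders, and the restart command may vary between MicroK8s versions):

# two lines added to /var/snap/microk8s/current/args/kube-apiserver
--tls-cert-file=/path/to/apiserver.crt
--tls-private-key-file=/path/to/apiserver.key

# restart so the API server picks up the new certificate
microk8s.stop && microk8s.start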

On GitLab.BigHouse I manually added the PEM to the OS trust store with these commands.

echo | openssl s_client -showcerts -connect Asus-ROG-VM.BigHouse:16443 2>/dev/null | openssl x509 -outform PEM > /usr/local/share/ca-certificates/kubernetes.Asus-ROG-VM.BigHouse.crt
update-ca-certificates
curl https://Asus-ROG-VM.BigHouse:16443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

This returns a 401 because I am not sending a token, but it shows the SSL issue has been resolved, at least for curl.
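
With the cluster's service-account token, a call like this should get a real answer back (the token value is a placeholder and the endpoint is just an example):

# Placeholder token; in practice this is the service account token GitLab was given
TOKEN="<service-account-token>"
curl -H "Authorization: Bearer $TOKEN" https://Asus-ROG-VM.BigHouse:16443/version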

I added the new PEM to the GitLab Kubernetes cluster details under CA Certificate; the token and URL had already been set. Then I go to the GitLab page to configure the runner and click the button to install Helm, and it fails with "Something went wrong while installing Helm Tiller" plus a Kubernetes error:

The only error I am able to find in the logs is this.

tail -f /var/log/gitlab/gitlab-rails/kubernetes.log | jq
{
  "severity": "INFO",
  "time": "2019-06-18T22:20:53.454Z",
  "correlation_id": "ERgbpB1VQt8",
  "service": "Clusters::Applications::InstallService",
  "app_id": 8,
  "app_name": "helm",
  "project_ids": [
    15
  ],
  "group_ids": [],
  "event": "begin_install"
}
{
  "severity": "ERROR",
  "time": "2019-06-18T22:20:54.232Z",
  "correlation_id": "ERgbpB1VQt8",
  "exception": "Kubeclient::HttpError",
  "status_code": null,
  "namespace": "gitlab-managed-apps",
  "class_name": "Gitlab::Kubernetes::Namespace",
  "event": "failed_to_create_namespace",
  "message": "SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)"
}
{
  "severity": "ERROR",
  "time": "2019-06-18T22:20:54.233Z",
  "correlation_id": "ERgbpB1VQt8",
  "error_code": null,
  "service": "Clusters::Applications::InstallService",
  "app_id": 8,
  "app_name": "helm",
  "project_ids": [
    15
  ],
  "group_ids": [],
  "exception": "Kubeclient::HttpError",
  "message": "SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)",
  "backtrace": [
    "lib/gitlab/kubernetes/kube_client.rb:34:in `get_namespace'",
    "lib/gitlab/kubernetes/namespace.rb:14:in `exists?'",
    "lib/gitlab/kubernetes/namespace.rb:27:in `ensure_exists!'",
    "lib/gitlab/kubernetes/helm/api.rb:13:in `install'",
    "app/services/clusters/applications/install_service.rb:18:in `install'",
    "app/services/clusters/applications/install_service.rb:11:in `execute'",
    "app/workers/cluster_install_app_worker.rb:10:in `block in perform'",
    "app/workers/concerns/cluster_applications.rb:8:in `find_application'",
    "app/workers/cluster_install_app_worker.rb:9:in `perform'",
    "lib/gitlab/sidekiq_status/server_middleware.rb:7:in `call'",
    "lib/gitlab/sidekiq_middleware/correlation_logger.rb:10:in `block in call'",
    "lib/gitlab/sidekiq_middleware/correlation_logger.rb:9:in `call'",
    "lib/gitlab/sidekiq_middleware/batch_loader.rb:7:in `call'",
    "lib/gitlab/sidekiq_middleware/request_store_middleware.rb:8:in `call'",
    "lib/gitlab/sidekiq_middleware/memory_killer.rb:18:in `call'"
  ]
}

I should be able to test if this is reproducible by creating two new VMs and installing everything again.

I have not installed Helm manually on the GitLab host because snapd will not run in the LXC without kernel changes to the Proxmox host. I have also not installed the helm binary in the $PATH, as I don't know what the configuration would be. I want to be able to point different projects at their own Kubernetes cluster, and setting up Helm manually would mean a single configuration for all projects under ~git. I really want to avoid that.

Looking for advice from the GitLab experts.

On a side note, the API URL ignores the port :16443, so on the Kubernetes VM I set a firewall rule to redirect the connection.

iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 16443
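
For what it is worth, the redirect can be checked from the GitLab host with a call on the default port (the endpoint is just an example; an unauthenticated request may still come back 401/403, but the TLS handshake and the redirect are what matter):

# Hits port 443 on the VM, which the iptables rule redirects to 16443
curl https://Asus-ROG-VM.BigHouse/version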

I want to avoid this and have GitLab use port 16443 for its API calls, as I would like the container to use 443.