GitLab.org / GitLab · Issue #352284
Closed
Created Feb 08, 2022 by Corfitz (@imCorfitz)

GitLab KAS (Agent) - A certificate is required for KAS

Proposal

However, I have been told that kubectl doesn't work over plain HTTP, only over HTTPS, and our Omnibus installation doesn't have an SSL/TLS certificate. Could this be a contributing factor? I did try passing the --insecure-skip-tls-verify=true flag to see if it made a difference, but it doesn't look like it does.

Oh, that explains it. Yes, kubectl would never send credentials over a cleartext connection. --insecure-skip-tls-verify doesn't help here; it only disables certificate verification, but TLS is still required before kubectl will send credentials. We should document this somewhere.
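For an Omnibus installation, the practical consequence is that the instance has to be served over HTTPS before kubectl will send credentials to the agent's k8s-proxy. A minimal sketch of what that could look like in /etc/gitlab/gitlab.rb (the hostname is a placeholder, and Let's Encrypt is only one way to obtain a certificate):

# /etc/gitlab/gitlab.rb
external_url "https://gitlab.example.com"   # https:// is required before kubectl will send credentials
letsencrypt['enable'] = true                # or point the bundled NGINX at your own certificate and key
gitlab_kas['enable'] = true                 # make sure the agent server (KAS) is enabled

followed by sudo gitlab-ctl reconfigure.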

Initial report

I am experiencing an issue using the GitLab Agent. The context is passed correctly, but the moment I execute a kubectl get pods command, it says I need to be logged in. This is a brand-new Kubernetes cluster, and the agent had just been created. And according to this topic, there is no way for me to clear a cluster cache on an agent.
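For reference, the agent's context is only exposed to CI jobs of the agent configuration project itself, or of projects the agent authorizes via ci_access in its configuration file. A sketch of that file, where the authorized project path is a placeholder (the agent configuration project here appears to be peppeo/k8s-agent):

# .gitlab/agents/test-v20-agent/config.yaml (in the agent configuration project)
ci_access:
  projects:
    - id: peppeo/my-deployment-project   # placeholder: path of the project whose pipelines need the context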

Note: Both agents/contexts have the same issue. The only difference is that primary-agent runs on a Kubernetes 1.23 cluster and test-v20-agent on 1.20. I was not sure whether the cluster version was the cause of the problem, which is why I tested both.

I am using a self-hosted GitLab Omnibus installation, version 14.7.0-ee, and a Kubernetes cluster at v1.20.15. The agent and runner are successfully connected from the cluster, and using Kaniko I can build an application and push it to the GitLab Container Registry. But I can't execute kubectl commands.

My .gitlab-ci.yml file looks like this:

deploy:
  stage: deploy
  allow_failure: true
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config get-contexts
    - kubectl config use-context peppeo/k8s-agent:test-v20-agent
    - kubectl config view
    - kubectl get pods

From the kubectl config view output, I could see that it calls the k8s-proxy on my GitLab instance, so I ran sudo gitlab-ctl tail on the GitLab server to see if anything stood out. I noticed that when the kubectl command executed, it called the k8s-proxy, but my server responded with a 401 error. So is there something I need to configure on my Omnibus installation to make this work?
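To narrow the logs down, gitlab-ctl tail also accepts a service name, so something like the following should isolate the relevant components (assuming KAS runs as the gitlab-kas service on Omnibus):

sudo gitlab-ctl tail gitlab-kas   # agent server (KAS) logs
sudo gitlab-ctl tail nginx        # bundled NGINX, where the 401 responses to the k8s-proxy calls show up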

While trying to test it out, I commented out everything in the .gitlab-ci.yml script except the kubectl get pods command. This gave me a different error.

While searching for a solution, I found an article on Medium which explained that I had to create a Role and RoleBinding in order for the GitLab Runner to get and create pods. This confuses me, as a ClusterRole and ClusterRoleBinding are already set up during the Helm installation of the GitLab Runner, but I did as the article said, and afterwards I could see the runner pods when running kubectl get pods.
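For reference, the kind of Role and RoleBinding the article describes looks roughly like the sketch below; the names, namespace and verbs are my assumptions based on a default Helm install of the runner, not something taken from the article:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-runner-pods          # hypothetical name
  namespace: gitlab-runner      # namespace the runner chart was installed into
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-runner-pods
  namespace: gitlab-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ci-runner-pods
subjects:
  - kind: ServiceAccount
    name: gitlab-runner         # service account the runner's job pods use
    namespace: gitlab-runner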

However, if I follow the GitLab documentation and use a particular context (kubectl config use-context peppeo/k8s-agent:test-v20-agent), I once again get the "You must be logged in to the server..." error.

I don't understand why I have to create additional Roles and RoleBindings manually after the Helm installation, and I don't understand why I can list pods when not using the agent context but get nothing when I do use it.
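One way to see which identity each of the two code paths is actually using is to add a couple of read-only checks to the job script, for example (a sketch; the exact output will differ per cluster):

kubectl config current-context      # which kubeconfig context the job ended up on
kubectl auth can-i list pods        # whether the active identity has RBAC permission to list pods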

Output of checks

Results of GitLab environment info


System information
System:
Proxy:          no
Current User:   git
Using RVM:      no
Ruby Version:   2.7.5p203
Gem Version:    3.1.4
Bundler Version: 2.1.4
Rake Version:   13.0.6
Redis Version:  6.0.16
Git Version:    2.33.1.
Sidekiq Version: 6.3.1
Go Version:     unknown

GitLab information
Version:        14.7.0-ee
Revision:       621e5984888
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     PostgreSQL
DB Version:     12.7
URL:            http://lab.ximore.com
HTTP Clone URL: http://lab.ximore.com/some-group/some-project.git
SSH Clone URL:  git@lab.ximore.com:some-group/some-project.git
Elasticsearch:  no
Geo:            no
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers: 

GitLab Shell
Version:        13.22.2
Repository storage paths:
- default:      /var/opt/gitlab/git-data/repositories
GitLab Shell path:              /opt/gitlab/embedded/service/gitlab-shell
Git:            /opt/gitlab/embedded/bin/git

Results of GitLab application Check

Checking GitLab subtasks ...

Checking GitLab Shell ...

GitLab Shell: ... GitLab Shell version >= 13.22.2 ? ... OK (13.22.2)
Running /opt/gitlab/embedded/service/gitlab-shell/bin/check
Internal API available: OK
Redis available via internal API: OK
gitlab-shell self-check successful

Checking GitLab Shell ... Finished

Checking Gitaly ...

Gitaly: ... default ... OK

Checking Gitaly ... Finished

Checking Sidekiq ...

Sidekiq: ... Running? ... yes
Number of Sidekiq processes (cluster/worker) ... 1/1

Checking Sidekiq ... Finished

Checking Incoming Email ...

Incoming Email: ... Reply by email is disabled in config/gitlab.yml

Checking Incoming Email ... Finished

Checking LDAP ...

LDAP: ... LDAP is disabled in config/gitlab.yml

Checking LDAP ... Finished

Checking GitLab App ...

Git configured correctly? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config up to date? ... yes
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory exists? ... yes
Uploads directory has correct permissions? ... yes
Uploads directory tmp has correct permissions? ... skipped (no tmp uploads folder yet)
Systemd unit files or init script exist? ... skipped (omnibus-gitlab has neither init script nor systemd units)
Systemd unit files or init script up-to-date? ... skipped (omnibus-gitlab has neither init script nor systemd units)
Projects have namespace: ... 6/2 ... yes 6/5 ... yes 7/6 ... yes 7/7 ... yes 7/8 ... yes 7/9 ... yes 7/10 ... yes 8/11 ... yes 8/12 ... yes 8/13 ... yes 8/14 ... yes 8/15 ... yes 8/16 ... yes 8/17 ... yes 10/18 ... yes 11/19 ... yes 10/20 ... yes 10/21 ... yes 6/22 ... yes 13/23 ... yes 13/24 ... yes 13/25 ... yes 13/26 ... yes 6/27 ... yes 15/28 ... yes 14/29 ... yes 15/30 ... yes 16/31 ... yes 16/32 ... yes 15/34 ... yes 16/36 ... yes 18/37 ... yes 18/38 ... yes 19/39 ... yes 14/40 ... yes 21/41 ... yes 24/42 ... yes 24/43 ... yes 24/44 ... yes 25/45 ... yes 14/46 ... yes 30/47 ... yes 30/48 ... yes 6/49 ... yes 31/50 ... yes 21/52 ... yes 34/53 ... yes 34/54 ... yes 34/55 ... yes 35/56 ... yes 35/57 ... yes 35/58 ... yes 14/60 ... yes
Redis version >= 5.0.0? ... yes
Ruby version >= 2.7.2 ? ... yes (2.7.5)
Git version >= 2.33.0 ? ... yes (2.33.1)
Git user has default SSH configuration? ... yes
Active users: ... 3
Is authorized keys file accessible? ... yes
GitLab configured to store new projects in hashed storage? ... yes
All projects are in hashed storage? ... yes
Elasticsearch version 7.x (6.4 - 6.x deprecated to be removed in 13.8)? ... skipped (elasticsearch is disabled)

Checking GitLab App ... Finished

Checking GitLab subtasks ... Finished


This issue was created based on this topic in the GitLab Forum. Since it turns out that more people have the same issue, and since a GitLab consultant I hired couldn't resolve it either, I believe it is worth reporting.

Edited Feb 23, 2022 by Viktor Nagy (GitLab)