GitLab.org / GitLab · Issue #4584 (Closed)

Issue created Jan 12, 2018 by Justin Lewis Salmon (@jlsalmon)

Document that Deploy Board doesn't support OpenShift DeploymentConfiguration objects

Hi GitLab team,

I'm having some trouble getting Deploy Boards working on my GitLab EE instance (10.3.3-ee). As far as I can see from the docs, I've configured the Clusters feature to point at my Kubernetes (OpenShift) cluster correctly: the API URL points to the base URL (https://kubernetes.example.com), and I'm using a token belonging to an admin user on the cluster. I've set the project namespace to match the repository name, deployed a pod into a namespace of the same name, and added the label app=dev so that it matches the dev environment slug.

However, the Deploy Board either shows a continuous loading spinner or the message "Kubernetes deployment not found".

I've queried the cluster and I can see the pod where I think it should be:

$ kubectl get pods -l app=dev
NAME              READY     STATUS    RESTARTS   AGE
account-1-v5zsf   1/1       Running   0          14h
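For what it's worth, here is the label check I'm assuming the Deploy Board performs, as a rough Python sketch (`matches_environment` is just an illustrative helper of my own, not GitLab code), run against the metadata above:

```python
def matches_environment(pod, slug):
    """Return True if the pod's "app" label equals the environment slug,
    which is what the Deploy Board appears to match on."""
    labels = pod.get("metadata", {}).get("labels", {})
    return labels.get("app") == slug

# Trimmed-down metadata from the pod shown above.
pod = {"metadata": {"labels": {"app": "dev", "deploymentconfig": "account"}}}
print(matches_environment(pod, "dev"))  # -> True
```

By that reading, the pod's labels look correct for the dev environment.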

I've looked briefly into GitLab's source and found where it's using kubeclient to load a list of pods. I added a logging statement and saw that it is fetching data, which looks like this:

{
  "metadata": {
    "name": "account-1-v5zsf",
    "generateName": "account-1-",
    "namespace": "account",
    "selfLink": "/api/v1/namespaces/account/pods/account-1-v5zsf",
    "uid": "4ae20f8a-f715-11e7-9da7-78e7d17de004",
    "resourceVersion": "116862",
    "creationTimestamp": "2018-01-11T21:20:48Z",
    "labels": {
      "app": "dev",
      "deployment": "account-1",
      "deploymentconfig": "account",
      "name": "account"
    },
    "annotations": {
      "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"account\",\"name\":\"account-1\",\"uid\":\"49c043d4-f715-11e7-9da7-78e7d17de004\",\"apiVersion\":\"v1\",\"resourceVersion\":\"116842\"}}\n",
      "openshift.io/container.account.image.entrypoint": "[\"/usr/bin/java\",\"-Djava.security.egd=file:/dev/./urandom\",\"-jar\",\"/app.jar\"]",
      "openshift.io/deployment-config.latest-version": "1",
      "openshift.io/deployment-config.name": "account",
      "openshift.io/deployment.name": "account-1",
      "openshift.io/generated-by": "OpenShiftNewApp",
      "openshift.io/scc": "restricted"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "account-1",
        "uid": "49c043d4-f715-11e7-9da7-78e7d17de004",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "account-volume-1",
        "emptyDir": {}
      },
      {
        "name": "default-token-qd6ft",
        "secret": {
          "secretName": "default-token-qd6ft",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "account",
        "image": "docker.example.com/core/account@sha256:811acb64ec3db876940dda6588d002fde288dd2bcb6c0e1c4319cd8e60fe7aec",
        "resources": {},
        "volumeMounts": [
          {
            "name": "account-volume-1",
            "mountPath": "/tmp"
          },
          {
            "name": "default-token-qd6ft",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "IfNotPresent",
        "securityContext": {
          "capabilities": {
            "drop": [
              "KILL",
              "MKNOD",
              "SETGID",
              "SETUID"
            ]
          },
          "privileged": false,
          "seLinuxOptions": {
            "level": "s0:c9,c4"
          },
          "runAsUser": 1000080000
        }
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "localhost",
    "securityContext": {
      "seLinuxOptions": {
        "level": "s0:c9,c4"
      },
      "fsGroup": 1000080000
    },
    "imagePullSecrets": [
      {
        "name": "default-dockercfg-z4jr2"
      }
    ],
    "schedulerName": "default-scheduler"
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-01-11T21:20:48Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-01-11T21:20:49Z"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2018-01-11T21:20:48Z"
      }
    ],
    "hostIP": "160.160.0.75",
    "podIP": "172.17.0.5",
    "startTime": "2018-01-11T21:20:48Z",
    "containerStatuses": [
      {
        "name": "account",
        "state": {
          "running": {
            "startedAt": "2018-01-11T21:20:49Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "docker.example.com/core/account:0.2.1-SNAPSHOT",
        "imageID": "docker-pullable://docker.example.com/core/account@sha256:811acb64ec3db876940dda6588d002fde288dd2bcb6c0e1c4319cd8e60fe7aec",
        "containerID": "docker://f1c5d5b5a469410f2df1ac4c1c6d5d7fc4f6546fc91959a569908f26b3694dbe"
      }
    ],
    "qosClass": "BestEffort"
  }
}

So it looks to me like kubeclient is fetching the pod correctly, but something goes wrong when feeding it back up to the frontend. I'm afraid I'm not familiar enough with GitLab's codebase to do much more debugging than this.
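Looking closer at the payload, the pod is owned by a ReplicationController and carries openshift.io/deployment-config.* annotations, i.e. it was created by an OpenShift DeploymentConfig rather than a Kubernetes Deployment, which matches what the issue title was later changed to document. A rough way to spot this from the pod metadata (illustrative Python sketch; `managed_by_deploymentconfig` is a hypothetical helper, not GitLab code):

```python
def managed_by_deploymentconfig(pod):
    """Heuristic: a pod created by an OpenShift DeploymentConfig is owned
    by a ReplicationController and annotated with the DeploymentConfig name,
    whereas a Kubernetes Deployment's pods are owned by a ReplicaSet."""
    meta = pod.get("metadata", {})
    owners = meta.get("ownerReferences", [])
    owned_by_rc = any(o.get("kind") == "ReplicationController" for o in owners)
    has_dc_annotation = "openshift.io/deployment-config.name" in meta.get("annotations", {})
    return owned_by_rc and has_dc_annotation

# Trimmed-down metadata from the pod shown above.
pod = {
    "metadata": {
        "annotations": {"openshift.io/deployment-config.name": "account"},
        "ownerReferences": [{"kind": "ReplicationController", "name": "account-1"}],
    }
}
print(managed_by_deploymentconfig(pod))  # -> True
```

If that heuristic is right, the Deploy Board simply never finds a Deployment object to render, which would explain the "Kubernetes deployment not found" message.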

I am running behind a corporate proxy, in case that makes any difference.

At this point, I would be hugely grateful if anyone could point me in the right direction!

Edited Oct 22, 2018 by Jason Yavorska