Logs view UI improvements

Anna Vovchenko requested to merge anna/log-view-UI-fixes into master

What does this MR do and why?

This MR provides several small UI improvements for the logs view. They are too small to warrant separate MRs with separate reviews, so I combined them here. Each commit represents one change.

  1. 3638ca47 Fix line number for larger numbers

Currently, once a line number exceeds 9999, it wraps onto two lines.

| Before | After |
| --- | --- |
| Screenshot_2024-06-19_at_16.04.41 | Screenshot_2024-06-19_at_16.04.00 |
| Screenshot_2024-06-19_at_16.04.51 | Screenshot_2024-06-19_at_16.04.16 |
  2. d4975a84 Add Agent ID to the logs view header.
| Before | After |
| --- | --- |
| Screenshot 2024-06-19 at 20.40.07.png | Screenshot 2024-06-19 at 20.39.46.png |
  3. 3638ca47 and 84c34aeb Only update logs every 100ms.

Currently, we receive logs line by line and update the Apollo cache for each of them. This overloads the browser and sometimes causes up to a minute of lag before the page becomes fully responsive (this happens for large logs of 10,000+ lines). With this update, we only update the cache once every 100ms (when needed). This reduced the page load time significantly for me:

| Before | After |
| --- | --- |
| 58s till fully loaded and page interactive | 5s till fully loaded and page interactive |
| Screen_Recording_2024-06-19_at_17.14.57 | Screen_Recording_2024-06-19_at_17.20.14 |
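The batching described above can be sketched roughly as follows. This is an illustrative standalone sketch, not the MR's actual code: `createThrottledLogWriter` is a hypothetical name, and `writeLogsData` stands in for the Apollo cache wrapper used by the resolver.

```javascript
// Illustrative sketch (not the actual MR code): buffer incoming log
// lines and write them to the cache at most once per interval.
function createThrottledLogWriter(writeLogsData, intervalMs = 100) {
  let buffer = [];
  let logs = [];
  let timer = null;

  const flush = () => {
    if (timer) {
      clearTimeout(timer);
      timer = null;
    }
    if (buffer.length === 0) return;
    // Assign sequential ids, as the resolver does for each streamed line.
    logs = [
      ...logs,
      ...buffer.map((content, i) => ({ id: logs.length + i + 1, content })),
    ];
    buffer = [];
    writeLogsData(logs);
  };

  return {
    // Called for every streamed line; schedules at most one flush per interval.
    push(line) {
      buffer.push(line);
      if (!timer) timer = setTimeout(flush, intervalMs);
    },
    flush,
  };
}
```

With this shape, 10,000 streamed lines trigger at most ten cache writes per second instead of one write per line.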
  4. adc9d887 Prepend the error message with "Error: " on the UI.

We show the error message as received from the k8s API, so it is sometimes not in line with our guidelines for UI text. Prepending "Error: " to the alert makes it clearer that the error message comes from the API.

| Before | After |
| --- | --- |
| Screenshot_2024-06-19_at_20.25.42 | Screenshot_2024-06-19_at_20.28.20 |
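A minimal sketch of the prefixing; `formatK8sError` is a hypothetical name (the MR likely does this inline in the component). Guarding against a message that already carries the prefix avoids rendering "Error: Error: …":

```javascript
// Hypothetical helper illustrating the change: prefix the message
// received from the Kubernetes API so the alert reads unambiguously.
const formatK8sError = (message) =>
  message.startsWith('Error: ') ? message : `Error: ${message}`;
```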

MR acceptance checklist

Please evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.

How to set up and validate locally

Easy way to verify without Kubernetes agent setup

  1. Visit the Environments page at Project -> Operate -> Environments
  2. Create an environment
  3. Visit the Environment details page
  4. Apply the patch below (copy it and run pbpaste | git apply)
Click to expand patch
diff --git a/app/assets/javascripts/environments/environment_details/components/kubernetes/kubernetes_logs.vue b/app/assets/javascripts/environments/environment_details/components/kubernetes/kubernetes_logs.vue
index 39a42e570b6d..36cbb5d4328e 100644
--- a/app/assets/javascripts/environments/environment_details/components/kubernetes/kubernetes_logs.vue
+++ b/app/assets/javascripts/environments/environment_details/components/kubernetes/kubernetes_logs.vue
@@ -84,6 +84,7 @@ export default {
       return this.k8sLogs?.error?.message || this.environmentError?.message;
     },
     gitlabAgentId() {
+      return '1';
       return (
         this.environment?.clusterAgent?.id &&
         getIdFromGraphQLId(this.environment.clusterAgent.id).toString()
diff --git a/app/assets/javascripts/environments/environment_details/index.vue b/app/assets/javascripts/environments/environment_details/index.vue
index f87723b33e9a..a0a4f8217406 100644
--- a/app/assets/javascripts/environments/environment_details/index.vue
+++ b/app/assets/javascripts/environments/environment_details/index.vue
@@ -46,7 +46,32 @@ export default {
         };
       },
       update(data) {
-        return data?.project?.environment;
+        const result = data?.project?.environment;
+        if (!result?.clusterAgent) {
+          const clusterAgent = {
+            __typename: 'ClusterAgent',
+            id: 'gid://gitlab/Clusters::Agent/123',
+            name: 'my-agent',
+            webPath: '/gitlab-org/cluster-integration/-/cluster_agents/my-agent',
+            tokens: {
+              __typename: 'ClusterAgentTokenConnection',
+              nodes: [
+                {
+                  __typename: 'ClusterAgentToken',
+                  id: 'gid://gitlab/Clusters::AgentToken/123',
+                  lastUsedAt: '2023-05-30T11:47:56Z',
+                },
+              ],
+            },
+          };
+
+          return {
+            ...result,
+            clusterAgent,
+          };
+        }
+
+        return result;
       },
       result() {
         this.updateCurrentTab();
diff --git a/app/assets/javascripts/environments/graphql/resolvers/kubernetes/k8s_logs.js b/app/assets/javascripts/environments/graphql/resolvers/kubernetes/k8s_logs.js
index fe1ffc388301..d48c0a22c730 100644
--- a/app/assets/javascripts/environments/graphql/resolvers/kubernetes/k8s_logs.js
+++ b/app/assets/javascripts/environments/graphql/resolvers/kubernetes/k8s_logs.js
@@ -58,26 +58,40 @@ export const k8sLogs = (_, { configuration, namespace, podName, containerName },
   const watchQuery = { follow: true };
   if (containerName) watchQuery.container = containerName;
 
-  watchApi
-    .subscribeToStream(watchPath, watchQuery)
-    .then((watcher) => {
-      watcher.on(EVENT_PLAIN_TEXT, (data) => {
-        const logsData = cacheWrapper.readLogsData();
+  const testLogsData = [
+    {
+      id: 1,
+      content:
+        '{"level":"info","ts":"2024-06-10T11:58:54.533Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}',
+    },
+    {
+      id: 2,
+      content:
+        '{"level":"info","ts":"2024-06-10T11:58:54.534Z","logger":"setup","msg":"starting manager"}',
+    },
+  ];
+  cacheWrapper.writeLogsData(testLogsData);
 
-        const updatedLogsData = [...logsData, { id: logsData.length + 1, content: data }];
+  // watchApi
+  //   .subscribeToStream(watchPath, watchQuery)
+  //   .then((watcher) => {
+  //     watcher.on(EVENT_PLAIN_TEXT, (data) => {
+  //       const logsData = cacheWrapper.readLogsData();
 
-        cacheWrapper.writeLogsData(updatedLogsData);
-      });
+  //       const updatedLogsData = [...logsData, { id: logsData.length + 1, content: data }];
 
-      watcher.on(EVENT_TIMEOUT, (err) => {
-        cacheWrapper.writeErrorData(err);
-      });
+  //       cacheWrapper.writeLogsData(updatedLogsData);
+  //     });
 
-      watcher.on(EVENT_ERROR, (err) => {
-        cacheWrapper.writeErrorData(err);
-      });
-    })
-    .catch((err) => {
-      cacheWrapper.writeErrorData(err);
-    });
+  //     watcher.on(EVENT_TIMEOUT, (err) => {
+  //       cacheWrapper.writeErrorData(err);
+  //     });
+
+  //     watcher.on(EVENT_ERROR, (err) => {
+  //       cacheWrapper.writeErrorData(err);
+  //     });
+  //   })
+  //   .catch((err) => {
+  //     cacheWrapper.writeErrorData(err);
+  //   });
 };
diff --git a/app/assets/javascripts/kubernetes_dashboard/graphql/helpers/resolver_helpers.js b/app/assets/javascripts/kubernetes_dashboard/graphql/helpers/resolver_helpers.js
index 95acd043025d..3d30b3108d65 100644
--- a/app/assets/javascripts/kubernetes_dashboard/graphql/helpers/resolver_helpers.js
+++ b/app/assets/javascripts/kubernetes_dashboard/graphql/helpers/resolver_helpers.js
@@ -123,6 +123,205 @@ export const getK8sPods = ({
     ? coreV1Api.listCoreV1NamespacedPod({ namespace })
     : coreV1Api.listCoreV1PodForAllNamespaces();
 
+  return [
+    {
+      status: {
+        phase: 'Succeeded',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'helm',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {
+          'helmcharts.helm.cattle.io/configHash':
+            'SHA256=E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855',
+        },
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          'controller-uid': '6c03c7fc-663b-4054-8842-37196eeb708b',
+          'helmcharts.helm.cattle.io/chart': 'traefik-crd',
+          'job-name': 'helm-install-traefik-crd',
+        },
+        name: 'helm-install-traefik-crd-gsj56',
+        namespace: 'kube-system',
+        resourceVersion: '576',
+        uid: 'd58f5220-9b8f-403a-9051-530559891bd7',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Succeeded',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'helm',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {
+          'helmcharts.helm.cattle.io/configHash':
+            'SHA256=1EF9AFCD934FEF498C2BA9787FC96FA76D94643B6A79486BF80DA848478FBAF0',
+        },
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          'controller-uid': 'fa3d99b5-7137-4a36-b98e-bf8d9f7d1171',
+          'helmcharts.helm.cattle.io/chart': 'traefik',
+          'job-name': 'helm-install-traefik',
+        },
+        name: 'helm-install-traefik-csrck',
+        namespace: 'kube-system',
+        resourceVersion: '580',
+        uid: '4a25607b-91d6-4f5f-8e43-72f1f7445c4b',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Succeeded',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'local-path-provisioner',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {},
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          app: 'local-path-provisioner',
+          'pod-template-hash': '79f67d76f8',
+        },
+        name: 'local-path-provisioner-79f67d76f8-sl8jw',
+        namespace: 'kube-system',
+        resourceVersion: '23266248',
+        uid: '6c2c9ffb-f472-42f0-815c-11a266f8f2af',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Running',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'coredns',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {},
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          'k8s-app': 'kube-dns',
+          'pod-template-hash': '597584b69b',
+        },
+        name: 'coredns-597584b69b-r6wbx',
+        namespace: 'kube-system',
+        resourceVersion: '23266286',
+        uid: '9143156d-44a0-4d22-a958-669d3d3ae96d',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Running',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'lb-tcp-80',
+          },
+          {
+            name: 'lb-tcp-443',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {},
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          app: 'svclb-traefik-79ffb0a8',
+          'controller-revision-hash': '767bb4755d',
+          'pod-template-generation': '1',
+          'svccontroller.k3s.cattle.io/svcname': 'traefik',
+          'svccontroller.k3s.cattle.io/svcnamespace': 'kube-system',
+        },
+        name: 'svclb-traefik-79ffb0a8-cw5v5',
+        namespace: 'kube-system',
+        resourceVersion: '23266323',
+        uid: '23630529-6f71-40b5-9074-9fc0850ea813',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Running',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'traefik',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {
+          'prometheus.io/path': '/metrics',
+          'prometheus.io/port': '9100',
+          'prometheus.io/scrape': 'true',
+        },
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          'app.kubernetes.io/instance': 'traefik-kube-system',
+          'app.kubernetes.io/managed-by': 'Helm',
+          'app.kubernetes.io/name': 'traefik',
+          'helm.sh/chart': 'traefik-20.3.1_up20.3.0',
+          'pod-template-hash': '66c46d954f',
+        },
+        name: 'traefik-66c46d954f-mdwwn',
+        namespace: 'kube-system',
+        resourceVersion: '23266369',
+        uid: '2a04dd82-809d-42a0-8439-e655bcf53056',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+    {
+      status: {
+        phase: 'Running',
+      },
+      spec: {
+        containers: [
+          {
+            name: 'metrics-server',
+          },
+        ],
+      },
+      metadata: {
+        annotations: {},
+        creationTimestamp: '2023-07-31T11:50:17.000Z',
+        labels: {
+          'k8s-app': 'metrics-server',
+          'pod-template-hash': '5f9f776df5',
+        },
+        name: 'metrics-server-5f9f776df5-4ts4p',
+        namespace: 'kube-system',
+        resourceVersion: '23266399',
+        uid: '847d09e2-d9c0-4a54-9e97-e3d5d273e5d6',
+      },
+      __typename: 'LocalWorkloadItem',
+    },
+  ];
+
   return podsApi
     .then((res) => {
       const watchPath = buildWatchPath({ resource: 'pods', namespace });
  5. Click the View logs button in the pods table. It should navigate to https://gdk.test:3443/<group-name>/<project-name>/-/environments/<environment-id>/k8s/namespace/<namespace-name>/pods/<pod-name>/logs?container=<container-name>
  6. Verify the changes.

Note: You might see errors in the console and in the UI, because the component tries to connect to a non-existent agent.

Verify using real data

Prerequisites:

  1. Visit the Project -> Infrastructure -> Kubernetes clusters page and create an agent following the instructions from the modal.

    • Select the "Connect a cluster" button
    • The modal should pop up
    • In the modal select "Select an agent or enter a name to create new"
    • You probably won't have any configured agents in the list; create a new one by typing a name of your choice
    • A button saying "Create agent: <your-agent-name>" should appear at the bottom of the list
    • Select the button and click "Register" in the next view
    • Save the token; you will use it in the next step
  2. Add the following configuration to .gitlab/agents/<your-agent-name>/config.yaml in your project to enable user_access for the agent:

    user_access:
      access_as: 
        agent: {}
      projects:
      - id: <your-group>/<your-project-to-share-agent-with>
  3. Note that shared agents must be connected to the cluster in order to appear in the list. Follow points 3-8 from the guide, and then the "Deploy the GitLab Agent (agentk) with k3d" section, to create a local cluster and connect your agent to it.

  4. Visit Project -> Operate -> Environments

  5. Create/Edit an environment using the UI

  6. Select an agent from the dropdown in the Environments settings page and save the change.

  7. Visit the Environments page and visit the environment details page for the environment that has an associated agent.

  8. Click the View logs button in the pods table. It should navigate to https://gdk.test:3443/<group-name>/<project-name>/-/environments/<environment-id>/k8s/namespace/<namespace-name>/pods/<pod-name>/logs?container=<container-name>

  9. Verify the changes.

Edited by Anna Vovchenko