Scaffold Metrics Details UI (frontend)

What does this MR do and why?

When a metric is selected from the metrics list, the user is taken to the metric details UI.

  • Creates new Metric Details base components
  • On selecting a metric from the metrics list, redirects the user to the details URL

Related BE changes: !137027 (merged)

Part of gitlab-org/opstrace/opstrace#2539 (closed)

(Note: this is an experimental feature with no users yet.)

Screenshots or screen recordings

[Screenshot: metric details UI]

[Video: Screen_Recording_2023-11-15_at_18.16.53]

How to set up and validate locally

  • Enable the observability_metrics feature flag (see the console example after this list)
  • Pull the BE changes from !137027 (merged)
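
One way to enable the flag, as a minimal sketch assuming a standard GDK setup (the exact invocation may differ in your environment):

  # From the gitlab directory inside GDK, open a Rails console
  bundle exec rails console
  # Then, inside the console, enable the flag for all actors
  Feature.enable(:observability_metrics)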

Apply the patch below to load mocks (copy it to the clipboard, then run pbpaste | git apply --allow-empty):

diff --git a/app/assets/javascripts/observability/client.js b/app/assets/javascripts/observability/client.js
index 32ff7fff128f..7e81206f51fa 100644
--- a/app/assets/javascripts/observability/client.js
+++ b/app/assets/javascripts/observability/client.js
@@ -1,21 +1,982 @@
+/* eslint-disable @gitlab/require-i18n-strings */
 import * as Sentry from '~/sentry/sentry_browser_wrapper';
 import axios from '~/lib/utils/axios_utils';
 import { logError } from '~/lib/logger';
 import { DEFAULT_SORTING_OPTION, SORTING_OPTIONS } from './constants';
 
+const MOCK_METRICS = {
+  metrics: [
+    {
+      name: 'app.ads.ad_requests',
+      description: 'Counts ad requests by request and response type',
+      type: 'Sum',
+    },
+    {
+      name: 'app.frontend.requests',
+      description: '',
+      type: 'Sum',
+    },
+    {
+      name: 'app.payment.transactions',
+      description: '',
+      type: 'Sum',
+    },
+    {
+      name: 'app_currency_counter',
+      description: '',
+      type: 'Sum',
+    },
+    {
+      name: 'app_recommendations_counter',
+      description: 'Counts the total number of given recommendations',
+      type: 'Sum',
+    },
+    {
+      name: 'http.client.duration',
+      description: 'measures the duration of the outbound HTTP request',
+      type: 'Histogram',
+    },
+    {
+      name: 'http.server.duration',
+      description: 'Measures the duration of inbound HTTP requests.',
+      type: 'Histogram',
+    },
+    {
+      name: 'kafka.consumer.assigned_partitions',
+      description: 'The number of partitions currently assigned to this consumer',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.bytes_consumed_rate',
+      description: 'The average number of bytes consumed per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.bytes_consumed_total',
+      description: 'The total number of bytes consumed',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.commit_latency_avg',
+      description: 'The average time taken for a commit request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.commit_latency_max',
+      description: 'The max time taken for a commit request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.commit_rate',
+      description: 'The number of commit calls per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.commit_sync_time_ns_total',
+      description: 'The total time the consumer has spent in commitSync in nanoseconds',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.commit_total',
+      description: 'The total number of commit calls',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.committed_time_ns_total',
+      description: 'The total time the consumer has spent in committed in nanoseconds',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.connection_close_rate',
+      description: 'The number of connections closed per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.connection_close_total',
+      description: 'The total number of connections closed',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.connection_count',
+      description: 'The current number of active connections.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.connection_creation_rate',
+      description: 'The number of new connections established per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.connection_creation_total',
+      description: 'The total number of new connections established',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.failed_authentication_rate',
+      description: 'The number of connections with failed authentication per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.failed_authentication_total',
+      description: 'The total number of connections with failed authentication',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.failed_reauthentication_rate',
+      description: 'The number of failed re-authentication of connections per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.failed_reauthentication_total',
+      description: 'The total number of failed re-authentication of connections',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.failed_rebalance_rate_per_hour',
+      description: 'The number of failed rebalance events per hour',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.failed_rebalance_total',
+      description: 'The total number of failed rebalance events',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.fetch_latency_avg',
+      description: 'The average time taken for a fetch request.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_latency_max',
+      description: 'The max time taken for any fetch request.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_rate',
+      description: 'The number of fetch requests per second.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_size_avg',
+      description: 'The average number of bytes fetched per request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_size_max',
+      description: 'The maximum number of bytes fetched per request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_throttle_time_avg',
+      description: 'The average throttle time in ms',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_throttle_time_max',
+      description: 'The maximum throttle time in ms',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.fetch_total',
+      description: 'The total number of fetch requests.',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.heartbeat_rate',
+      description: 'The number of heartbeats per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.heartbeat_response_time_max',
+      description: 'The max time taken to receive a response to a heartbeat request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.heartbeat_total',
+      description: 'The total number of heartbeats',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.incoming_byte_rate',
+      description: 'The number of bytes read off all sockets per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.incoming_byte_total',
+      description: 'The total number of bytes read off all sockets',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.io_ratio',
+      description: '*Deprecated* The fraction of time the I/O thread spent doing I/O',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.io_time_ns_avg',
+      description: 'The average length of time for I/O per select call in nanoseconds.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.io_time_ns_total',
+      description: 'The total time the I/O thread spent doing I/O',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.io_wait_ratio',
+      description: '*Deprecated* The fraction of time the I/O thread spent waiting',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.io_wait_time_ns_avg',
+      description:
+        'The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.io_wait_time_ns_total',
+      description: 'The total time the I/O thread spent waiting',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.io_waittime_total',
+      description: '*Deprecated* The total time the I/O thread spent waiting',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.iotime_total',
+      description: '*Deprecated* The total time the I/O thread spent doing I/O',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.join_rate',
+      description: 'The number of group joins per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.join_time_avg',
+      description: 'The average time taken for a group rejoin',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.join_time_max',
+      description: 'The max time taken for a group rejoin',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.join_total',
+      description: 'The total number of group joins',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.last_heartbeat_seconds_ago',
+      description: 'The number of seconds since the last coordinator heartbeat was sent',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.last_poll_seconds_ago',
+      description: 'The number of seconds since the last poll() invocation.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.last_rebalance_seconds_ago',
+      description: 'The number of seconds since the last successful rebalance event',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.network_io_rate',
+      description:
+        'The number of network operations (reads or writes) on all connections per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.network_io_total',
+      description: 'The total number of network operations (reads or writes) on all connections',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.outgoing_byte_rate',
+      description: 'The number of outgoing bytes sent to all servers per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.outgoing_byte_total',
+      description: 'The total number of outgoing bytes sent to all servers',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.partition_assigned_latency_avg',
+      description: 'The average time taken for a partition-assigned rebalance listener callback',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.partition_assigned_latency_max',
+      description: 'The max time taken for a partition-assigned rebalance listener callback',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.partition_lost_latency_avg',
+      description: 'The average time taken for a partition-lost rebalance listener callback',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.partition_lost_latency_max',
+      description: 'The max time taken for a partition-lost rebalance listener callback',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.poll_idle_ratio_avg',
+      description:
+        "The average fraction of time the consumer's poll() is idle as opposed to waiting for the user code to process records.",
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.rebalance_latency_avg',
+      description:
+        'The average time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.rebalance_latency_max',
+      description:
+        'The max time taken for a group to complete a successful rebalance, which may be composed of several failed re-trials until it succeeded',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.rebalance_latency_total',
+      description:
+        'The total number of milliseconds this consumer has spent in successful rebalances since creation',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.rebalance_rate_per_hour',
+      description:
+        'The number of successful rebalance events per hour, each event is composed of several failed re-trials until it succeeded',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.rebalance_total',
+      description:
+        'The total number of successful rebalance events, each event is composed of several failed re-trials until it succeeded',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.records_consumed_rate',
+      description: 'The average number of records consumed per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_consumed_total',
+      description: 'The total number of records consumed',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.records_lag',
+      description: 'The latest lag of the partition',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_lag_avg',
+      description: 'The average lag of the partition',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_lag_max',
+      description:
+        'The maximum lag in terms of number of records for any partition in this window. NOTE: This is based on current offset and not committed offset',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_lead',
+      description: 'The latest lead of the partition',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_lead_avg',
+      description: 'The average lead of the partition',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_lead_min',
+      description:
+        'The minimum lead in terms of number of records for any partition in this window',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.records_per_request_avg',
+      description: 'The average number of records in each request',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.request_rate',
+      description: 'The number of requests sent per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.request_size_avg',
+      description: 'The average size of requests sent.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.request_size_max',
+      description: 'The maximum size of any request sent.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.request_total',
+      description: 'The total number of requests sent',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.response_rate',
+      description: 'The number of responses received per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.response_total',
+      description: 'The total number of responses received',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.select_rate',
+      description: 'The number of times the I/O layer checked for new I/O to perform per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.select_total',
+      description: 'The total number of times the I/O layer checked for new I/O to perform',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.successful_authentication_no_reauth_total',
+      description:
+        'The total number of connections with successful authentication where the client does not support re-authentication',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.successful_authentication_rate',
+      description: 'The number of connections with successful authentication per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.successful_authentication_total',
+      description: 'The total number of connections with successful authentication',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.successful_reauthentication_rate',
+      description: 'The number of successful re-authentication of connections per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.successful_reauthentication_total',
+      description: 'The total number of successful re-authentication of connections',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.sync_rate',
+      description: 'The number of group syncs per second',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.sync_time_avg',
+      description: 'The average time taken for a group sync',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.sync_time_max',
+      description: 'The max time taken for a group sync',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.sync_total',
+      description: 'The total number of group syncs',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.consumer.time_between_poll_avg',
+      description: 'The average delay between invocations of poll() in milliseconds.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.consumer.time_between_poll_max',
+      description: 'The max delay between invocations of poll() in milliseconds.',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.controller.active.count',
+      description: 'The number of controllers active on the broker',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.isr.operation.count',
+      description: 'The number of in-sync replica shrink and expand operations',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.lag.max',
+      description: 'The max lag in messages between follower and leader replicas',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.logs.flush.Count',
+      description: 'Log flush count',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.logs.flush.time.50p',
+      description: 'Log flush time - 50th percentile',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.logs.flush.time.99p',
+      description: 'Log flush time - 99th percentile',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.message.count',
+      description: 'The number of messages received by the broker',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.network.io',
+      description: 'The bytes received or sent by the broker',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.partition.count',
+      description: 'The number of partitions on the broker',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.partition.offline',
+      description: 'The number of partitions offline',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.partition.underReplicated',
+      description: 'The number of under replicated partitions',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.purgatory.size',
+      description: 'The number of requests waiting in purgatory',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.request.count',
+      description: 'The number of requests received by the broker',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.request.failed',
+      description: 'The number of requests to the broker resulting in a failure',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.request.queue',
+      description: 'Size of the request queue',
+      type: 'Sum',
+    },
+    {
+      name: 'kafka.request.time.50p',
+      description: 'The 50th percentile time the broker has taken to service requests',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.request.time.99p',
+      description: 'The 99th percentile time the broker has taken to service requests',
+      type: 'Gauge',
+    },
+    {
+      name: 'kafka.request.time.total',
+      description: 'The total time the broker has taken to service requests',
+      type: 'Sum',
+    },
+    {
+      name: 'otlp.exporter.exported',
+      description: '',
+      type: 'Sum',
+    },
+    {
+      name: 'otlp.exporter.seen',
+      description: '',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.cpython.cpu_time',
+      description: 'Runtime cpython CPU time',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.cpython.gc_count',
+      description: 'Runtime cpython GC count',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.cpython.memory',
+      description: 'Runtime cpython memory',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.assemblies.count',
+      description: 'The number of .NET assemblies that are currently loaded.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.exceptions.count',
+      description:
+        'Count of exceptions that have been thrown in managed code, since the observation started. The value will be unavailable until an exception has been thrown after OpenTelemetry.Instrumentation.Runtime initialization.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.gc.allocations.size',
+      description:
+        'Count of bytes allocated on the managed GC heap since the process start. .NET objects are allocated from this heap. Object allocations from unmanaged languages such as C/C++ do not use this heap.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.gc.collections.count',
+      description: 'Number of garbage collections that have occurred since process start.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.gc.committed_memory.size',
+      description:
+        'The amount of committed virtual memory for the managed GC heap, as observed during the latest garbage collection. Committed virtual memory may be larger than the heap size because it includes both memory for storing existing objects (the heap size) and some extra memory that is ready to handle newly allocated objects in the future. The value will be unavailable until at least one garbage collection has occurred.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.gc.heap.size',
+      description:
+        'The heap size (including fragmentation), as observed during the latest garbage collection. The value will be unavailable until at least one garbage collection has occurred.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.gc.objects.size',
+      description:
+        "Count of bytes currently in use by objects in the GC heap that haven't been collected yet. Fragmentation and other GC committed memory pools are excluded.",
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.jit.compilation_time',
+      description:
+        'The amount of time the JIT compiler has spent compiling methods since the process start.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.jit.il_compiled.size',
+      description:
+        'Count of bytes of intermediate language that have been compiled since the process start.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.jit.methods_compiled.count',
+      description:
+        'The number of times the JIT compiler compiled a method since the process start. The JIT compiler may be invoked multiple times for the same method to compile with different generic parameters, or because tiered compilation requested different optimization settings.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.monitor.lock_contention.count',
+      description:
+        'The number of times there was contention when trying to acquire a monitor lock since the process start. Monitor locks are commonly acquired by using the lock keyword in C#, or by calling Monitor.Enter() and Monitor.TryEnter().',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.thread_pool.completed_items.count',
+      description:
+        'The number of work items that have been processed by the thread pool since the process start.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.thread_pool.queue.length',
+      description:
+        'The number of work items that are currently queued to be processed by the thread pool.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.thread_pool.threads.count',
+      description: 'The number of thread pool threads that currently exist.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.dotnet.timer.count',
+      description:
+        'The number of timer instances that are currently active. Timers can be created by many sources such as System.Threading.Timer, Task.Delay, or the timeout in a CancellationSource. An active timer is registered to tick at some point in the future and has not yet been canceled.',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.cgo.calls',
+      description: 'Number of cgo calls made by the current process',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.gc.count',
+      description: 'Number of completed garbage collection cycles',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.gc.pause_ns',
+      description: 'Amount of nanoseconds in GC stop-the-world pauses',
+      type: 'Histogram',
+    },
+    {
+      name: 'process.runtime.go.gc.pause_total_ns',
+      description: 'Cumulative nanoseconds in GC stop-the-world pauses since the program started',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.goroutines',
+      description: 'Number of goroutines that currently exist',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_alloc',
+      description: 'Bytes of allocated heap objects',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_idle',
+      description: 'Bytes in idle (unused) spans',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_inuse',
+      description: 'Bytes in in-use spans',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_objects',
+      description: 'Number of allocated heap objects',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_released',
+      description: 'Bytes of idle spans whose physical memory has been returned to the OS',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.heap_sys',
+      description: 'Bytes of heap memory obtained from the OS',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.live_objects',
+      description: 'Number of live objects is the number of cumulative Mallocs - Frees',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.go.mem.lookups',
+      description: 'Number of pointer lookups performed by the runtime',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.buffer.count',
+      description: 'The number of buffers in the pool',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.buffer.limit',
+      description: 'Total capacity of the buffers in this pool',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.buffer.usage',
+      description: 'Memory that the Java virtual machine is using for this buffer pool',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.classes.current_loaded',
+      description: 'Number of classes currently loaded',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.classes.loaded',
+      description: 'Number of classes loaded since JVM start',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.classes.unloaded',
+      description: 'Number of classes unloaded since JVM start',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.cpu.utilization',
+      description: 'Recent cpu utilization for the process',
+      type: 'Gauge',
+    },
+    {
+      name: 'process.runtime.jvm.gc.duration',
+      description: 'Duration of JVM garbage collection actions',
+      type: 'Histogram',
+    },
+    {
+      name: 'process.runtime.jvm.memory.committed',
+      description: 'Measure of memory committed',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.memory.init',
+      description: 'Measure of initial memory requested',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.memory.limit',
+      description: 'Measure of max obtainable memory',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.memory.usage',
+      description: 'Measure of memory used',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.memory.usage_after_last_gc',
+      description:
+        'Measure of memory used after the most recent garbage collection event on this pool',
+      type: 'Sum',
+    },
+    {
+      name: 'process.runtime.jvm.system.cpu.load_1m',
+      description: 'Average CPU load of the whole system for the last minute',
+      type: 'Gauge',
+    },
+    {
+      name: 'process.runtime.jvm.system.cpu.utilization',
+      description: 'Recent cpu utilization for the whole system',
+      type: 'Gauge',
+    },
+    {
+      name: 'process.runtime.jvm.threads.count',
+      description: 'Number of executing threads',
+      type: 'Sum',
+    },
+    {
+      name: 'processedLogs',
+      description:
+        'The number of logs processed by the BatchLogRecordProcessor. [dropped=true if they were dropped due to high throughput]',
+      type: 'Sum',
+    },
+    {
+      name: 'processedSpans',
+      description:
+        'The number of spans processed by the BatchSpanProcessor. [dropped=true if they were dropped due to high throughput]',
+      type: 'Sum',
+    },
+    {
+      name: 'queueSize',
+      description: 'The number of items queued',
+      type: 'Gauge',
+    },
+    {
+      name: 'rpc.client.duration',
+      description: 'The duration of an outbound RPC invocation',
+      type: 'Histogram',
+    },
+    {
+      name: 'rpc.server.duration',
+      description: 'The duration of an inbound RPC invocation',
+      type: 'Histogram',
+    },
+    {
+      name: 'runtime.uptime',
+      description: 'Milliseconds since application was initialized',
+      type: 'Sum',
+    },
+    {
+      name: 'system.cpu.time',
+      description: 'System CPU time',
+      type: 'Sum',
+    },
+    {
+      name: 'system.cpu.utilization',
+      description: 'System CPU utilization',
+      type: 'Gauge',
+    },
+    {
+      name: 'system.disk.io',
+      description: 'System disk IO',
+      type: 'Sum',
+    },
+    {
+      name: 'system.disk.operations',
+      description: 'System disk operations',
+      type: 'Sum',
+    },
+    {
+      name: 'system.disk.time',
+      description: 'System disk time',
+      type: 'Sum',
+    },
+    {
+      name: 'system.memory.usage',
+      description: 'System memory usage',
+      type: 'Gauge',
+    },
+    {
+      name: 'system.memory.utilization',
+      description: 'System memory utilization',
+      type: 'Gauge',
+    },
+    {
+      name: 'system.network.connections',
+      description: 'System network connections',
+      type: 'Sum',
+    },
+    {
+      name: 'system.network.dropped_packets',
+      description: 'System network dropped_packets',
+      type: 'Sum',
+    },
+    {
+      name: 'system.network.errors',
+      description: 'System network errors',
+      type: 'Sum',
+    },
+    {
+      name: 'system.network.io',
+      description: 'System network io',
+      type: 'Sum',
+    },
+    {
+      name: 'system.network.packets',
+      description: 'System network packets',
+      type: 'Sum',
+    },
+    {
+      name: 'system.swap.usage',
+      description: 'System swap usage',
+      type: 'Gauge',
+    },
+    {
+      name: 'system.swap.utilization',
+      description: 'System swap utilization',
+      type: 'Gauge',
+    },
+    {
+      name: 'system.thread_count',
+      description: 'System active threads count',
+      type: 'Gauge',
+    },
+  ],
+};
+
 function reportErrorAndThrow(e) {
   logError(e);
   Sentry.captureException(e);
   throw e;
 }
+
+function mockReturnDataWithDelay(data) {
+  return new Promise((resolve) => {
+    setTimeout(() => resolve(data), 500);
+  });
+}
+
 // Provisioning API spec: https://gitlab.com/gitlab-org/opstrace/opstrace/-/blob/main/provisioning-api/pkg/provisioningapi/routes.go#L59
 async function enableObservability(provisioningUrl) {
   try {
-    // Note: axios.put(url, undefined, {withCredentials: true}) does not send cookies properly, so need to use the API below for the correct behaviour
-    return await axios(provisioningUrl, {
-      method: 'put',
-      withCredentials: true,
-    });
+    console.log('[DEBUG] Enabling Observability');
+    return mockReturnDataWithDelay();
   } catch (e) {
     return reportErrorAndThrow(e);
   }
@@ -24,11 +985,12 @@ async function enableObservability(provisioningUrl) {
 // Provisioning API spec: https://gitlab.com/gitlab-org/opstrace/opstrace/-/blob/main/provisioning-api/pkg/provisioningapi/routes.go#L37
 async function isObservabilityEnabled(provisioningUrl) {
   try {
-    const { data } = await axios.get(provisioningUrl, { withCredentials: true });
+    console.log('[DEBUG] Checking Observability Enabled');
+    const data = { status: 'ready' };
     if (data && data.status) {
       // we currently ignore the 'status' payload and just check if the request was successful
       // We might improve this as part of https://gitlab.com/gitlab-org/opstrace/opstrace/-/issues/2315
-      return true;
+      return mockReturnDataWithDelay(true);
     }
   } catch (e) {
     if (e.response.status === 404) {
@@ -40,19 +1002,119 @@ async function isObservabilityEnabled(provisioningUrl) {
 }
 
 async function fetchTrace(tracingUrl, traceId) {
-  try {
-    if (!traceId) {
-      throw new Error('traceId is required.');
-    }
-
-    const { data } = await axios.get(`${tracingUrl}/${traceId}`, {
-      withCredentials: true,
-    });
-
-    return data;
-  } catch (e) {
-    return reportErrorAndThrow(e);
-  }
+  console.log(`[DEBUG] Fetch trace ${traceId} from ${tracingUrl}`);
+  return mockReturnDataWithDelay({
+    timestamp: '2023-11-06T14:58:38.892999936Z',
+    trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+    service_name: 'frontend',
+    operation: 'HTTP POST',
+    status_code: 'STATUS_CODE_UNSET',
+    duration_nano: 6870528,
+    spans: [
+      {
+        timestamp: '2023-11-06T14:58:38.892999936Z',
+        span_id: '86C2CAF54D03A839',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'frontend',
+        operation: 'HTTP POST',
+        duration_nano: 6870528,
+        parent_span_id: '',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.792999900Z',
+        span_id: '5E95BA1D4DCA629C',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'frontend',
+        operation: 'grpc.oteldemo.CartService/AddItem',
+        duration_nano: 4674123,
+        parent_span_id: '86C2CAF54D03A839',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.897313Z',
+        span_id: '79A1A33CCC36DC44',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'oteldemo.CartService/AddItem',
+        duration_nano: 1138200,
+        parent_span_id: '5E95BA1D4DCA629C',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.8974467Z',
+        span_id: 'B43E6CFFD9AF4A68',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'HGET',
+        duration_nano: 360700,
+        parent_span_id: '79A1A33CCC36DC44',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.8978547Z',
+        span_id: '80169B2C61AF41EF',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'HMSET',
+        duration_nano: 249500,
+        parent_span_id: '79A1A33CCC36DC44',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.897999872Z',
+        span_id: '6C4E28FE982F2F73',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'frontend',
+        operation: 'grpc.oteldemo.CartService/GetCart',
+        duration_nano: 1346816,
+        parent_span_id: '86C2CAF54D03A839',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.8981128Z',
+        span_id: '427F06B0B498A482',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'EXPIRE',
+        duration_nano: 252200,
+        parent_span_id: '79A1A33CCC36DC44',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.8995004Z',
+        span_id: 'FF45FE0F8C45FD68',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'oteldemo.CartService/GetCart',
+        duration_nano: 512400,
+        parent_span_id: '6C4E28FE982F2F73',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+      {
+        timestamp: '2023-11-06T14:58:38.8996313Z',
+        span_id: 'F6D0D268E8A84A38',
+        trace_id: 'cfa0e008-002f-5505-0d05-31855d493ea0',
+        service_name: 'cartservice',
+        operation: 'HGET',
+        duration_nano: 290700,
+        parent_span_id: 'FF45FE0F8C45FD68',
+        status_code: 'STATUS_CODE_UNSET',
+        statusCode: 'STATUS_CODE_UNSET',
+      },
+    ],
+    total_spans: 9,
+    totalSpans: 9,
+    statusCode: 'STATUS_CODE_UNSET',
+  });
 }
 
 /**
@@ -198,15 +1260,15 @@ async function fetchTraces(tracingUrl, { filters = {}, pageToken, pageSize, sort
     : DEFAULT_SORTING_OPTION;
   params.append('sort', sortOrder);
 
+  console.log(`[DEBUG] Fetching traces with params: ${params.toString()}`);
+
   try {
-    const { data } = await axios.get(tracingUrl, {
-      withCredentials: true,
-      params,
-    });
+    const data = MOCK_TRACES;
+
     if (!Array.isArray(data.traces)) {
       throw new Error('traces are missing/invalid in the response'); // eslint-disable-line @gitlab/require-i18n-strings
     }
-    return data;
+    return mockReturnDataWithDelay(data);
   } catch (e) {
     return reportErrorAndThrow(e);
   }
@@ -214,15 +1276,17 @@ async function fetchTraces(tracingUrl, { filters = {}, pageToken, pageSize, sort
 
 async function fetchServices(servicesUrl) {
   try {
-    const { data } = await axios.get(servicesUrl, {
-      withCredentials: true,
-    });
+    console.log(`[DEBUG] Fetching services from ${servicesUrl}`);
+    const uniqueServices = new Set(
+      MOCK_TRACES.traces.map((t) => t.spans.map((s) => s.service_name)).flat(),
+    );
+    const data = { services: Array.from(uniqueServices).map((s) => ({ name: s })) };
 
     if (!Array.isArray(data.services)) {
       throw new Error('failed to fetch services. invalid response'); // eslint-disable-line @gitlab/require-i18n-strings
     }
 
-    return data.services;
+    return mockReturnDataWithDelay(data.services);
   } catch (e) {
     return reportErrorAndThrow(e);
   }
@@ -237,25 +1301,32 @@ async function fetchOperations(operationsUrl, serviceName) {
       throw new Error('fetchOperations() - operationsUrl must contain $SERVICE_NAME$');
     }
     const url = operationsUrl.replace('$SERVICE_NAME$', serviceName);
-    const { data } = await axios.get(url, {
-      withCredentials: true,
-    });
+
+    console.log('fetching operations suggestions from', url); // eslint-disable-line @gitlab/require-i18n-strings
+    const uniqOps = new Set(
+      MOCK_TRACES.traces
+        .map((t) => t.spans.filter((s) => s.service_name === serviceName))
+        .flat()
+        .map((s) => s.operation),
+    );
+    const data = { operations: Array.from(uniqOps).map((s) => ({ name: s })) };
 
     if (!Array.isArray(data.operations)) {
       throw new Error('failed to fetch operations. invalid response'); // eslint-disable-line @gitlab/require-i18n-strings
     }
 
-    return data.operations;
+    return mockReturnDataWithDelay(data.operations);
   } catch (e) {
     return reportErrorAndThrow(e);
   }
 }
 
 async function fetchMetrics(metricsUrl) {
+  console.log(`[DEBUG] Fetching metrics from ${metricsUrl}`);
+
   try {
-    const { data } = await axios.get(metricsUrl, {
-      withCredentials: true,
-    });
+    const data = MOCK_METRICS;
+
     if (!Array.isArray(data.metrics)) {
       throw new Error('metrics are missing/invalid in the response'); // eslint-disable-line @gitlab/require-i18n-strings
     }
diff --git a/app/assets/javascripts/observability/components/observability_container.vue b/app/assets/javascripts/observability/components/observability_container.vue
index b89c2624f81c..f6cbf7ee771f 100644
--- a/app/assets/javascripts/observability/components/observability_container.vue
+++ b/app/assets/javascripts/observability/components/observability_container.vue
@@ -27,12 +27,12 @@ export default {
 
     // TODO: Improve local GDK dev experience with tracing https://gitlab.com/gitlab-org/opstrace/opstrace/-/issues/2308
     // Uncomment the lines below to to test this locally
-    // setTimeout(() => {
-    //   this.messageHandler({
-    //     data: { type: 'AUTH_COMPLETION', status: 'success' },
-    //     origin: new URL(this.oauthUrl).origin,
-    //   });
-    // }, 2000);
+    setTimeout(() => {
+      this.messageHandler({
+        data: { type: 'AUTH_COMPLETION', status: 'success' },
+        origin: new URL(this.apiConfig.oauthUrl).origin,
+      });
+    }, 2000);
   },
   destroyed() {
     window.removeEventListener('message', this.messageHandler);

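To validate, open the metrics list UI, select a metric, and confirm you are redirected to its details page. When you are done, the mock changes can be discarded. A minimal sketch, assuming no other local edits to these files:

  # Restore the mocked files to their committed versions
  git checkout -- app/assets/javascripts/observability/client.js \
    app/assets/javascripts/observability/components/observability_container.vue
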
MR acceptance checklist

This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
