No data found in Environments/Metrics with group-level Kubernetes and Prometheus
Summary
I have multiple projects in one group (server/webui/stable-env), with a group-level Kubernetes cluster and a manually installed Prometheus, deployed by Helm into the same cluster. I configured an ingress as prometheus.mydomain.com and use this URL for each project in Settings/Integrations/Prometheus. When I navigate to Environments/Metrics for the first two projects I get the "No data found" screen. The third project shows metrics!
Prometheus does have data for the first two projects; I checked.
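For example, running the per-project query by hand against the Prometheus API does return series. An illustrative check (the upstream regex uses the server project's namespace and environment slug from the logs below; the exact command is a sketch of what I verified):

```
# Illustrative check: ask Prometheus directly for the metric GitLab graphs,
# scoped to the server project's namespace (server-257) and its staging environment.
curl -G "https://prometheus.mydomain.com/api/v1/query" \
  --data-urlencode 'query=sum(rate(nginx_upstream_responses_total{upstream=~"server-257-staging-.*"}[2m])) by (status_code)'
```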
Steps to reproduce
- Configure a group-level Kubernetes cluster
- Install Tiller, Ingress, and Cert-Manager via GitLab
- Build and deploy environments for each project (in my case: staging and review environments for the server and webui projects, and a production environment for stable-env)
- Manually install Prometheus and enable stats in nginx-ingress
- Configure the Prometheus integration for each project in Settings/Integrations/Prometheus
- Navigate to Environments/Metrics for each project
Script used to install Prometheus and configure nginx-ingress:
```
PROMETHEUS_FQDN=prometheus.mydomain.com

cat >/tmp/prometheus-values.yml << EOF
alertmanager:
  enabled: false
kubeStateMetrics:
  enabled: true
pushgateway:
  enabled: false
rbac:
  create: true
  enabled: true
server:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: 'true'
    hosts:
      - $PROMETHEUS_FQDN
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - $PROMETHEUS_FQDN
serverFiles:
  alerts: {}
  prometheus.yml:
    rule_files:
      - /etc/config/rules
      - /etc/config/alerts
    scrape_configs:
      - job_name: prometheus
        scheme: http
        static_configs:
          - targets:
              - localhost:9090
      - job_name: kubernetes-apiservers
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - action: keep
            regex: default;kubernetes;https
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_service_name
              - __meta_kubernetes_endpoint_port_name
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
      - job_name: kubernetes-nodes
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        metric_relabel_configs:
          - regex: (.+)-.+-.+
            source_labels:
              - pod_name
            target_label: environment
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - replacement: kubernetes.default.svc:443
            target_label: __address__
          - regex: (.+)
            replacement: /api/v1/nodes/\$1/proxy/metrics
            source_labels:
              - __meta_kubernetes_node_name
            target_label: __metrics_path__
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
      - job_name: kubernetes-nodes-cadvisor
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
          - role: node
        metric_relabel_configs:
          - regex: (.+)-.+-.+
            source_labels:
              - pod_name
            target_label: environment
        relabel_configs:
          - regex: __meta_kubernetes_node_label_(.+)
            action: labelmap
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/\$1/proxy/metrics/cadvisor
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
      - job_name: kubernetes-service-endpoints
        scheme: http
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            regex: true
            action: keep
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            regex: (https?)
            target_label: __scheme__
            action: replace
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            regex: (.+)
            target_label: __metrics_path__
            action: replace
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            regex: ([^:]+)(?::\d+)?;(\d+)
            target_label: __address__
            replacement: \$1:\$2
            action: replace
          - regex: __meta_kubernetes_service_label_(.+)
            action: labelmap
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            target_label: kubernetes_name
            action: replace
          - source_labels: [__meta_kubernetes_pod_node_name]
            target_label: kubernetes_node
            action: replace
      - job_name: prometheus-pushgateway
        honor_labels: true
        kubernetes_sd_configs:
          - role: service
        relabel_configs:
          - action: keep
            regex: pushgateway
            source_labels:
              - __meta_kubernetes_service_annotation_prometheus_io_probe
      - job_name: kubernetes-services
        kubernetes_sd_configs:
          - role: service
        metrics_path: /probe
        params:
          module:
            - http_2xx
        relabel_configs:
          - action: keep
            regex: true
            source_labels:
              - __meta_kubernetes_service_annotation_prometheus_io_probe
          - source_labels:
              - __address__
            target_label: __param_target
          - replacement: blackbox
            target_label: __address__
          - source_labels:
              - __param_target
            target_label: instance
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: kubernetes_namespace
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: kubernetes_name
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            regex: true
            action: keep
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            regex: (.+)
            target_label: __metrics_path__
            action: replace
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            regex: ([^:]+)(?::\d+)?;(\d+)
            target_label: __address__
            replacement: \$1:\$2
            action: replace
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            target_label: kubernetes_namespace
            action: replace
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: kubernetes_pod_name
            action: replace
      - job_name: autoscaler
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - action: keep
            regex: knative-serving;autoscaler;metrics
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_label_app
              - __meta_kubernetes_pod_container_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 3s
        scrape_timeout: 3s
      - job_name: activator
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - action: keep
            regex: knative-serving;activator;metrics-port
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_label_app
              - __meta_kubernetes_pod_container_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 3s
        scrape_timeout: 3s
      - job_name: istio-mesh
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - action: keep
            regex: istio-system;istio-telemetry;prometheus
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_service_name
              - __meta_kubernetes_endpoint_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 5s
      - job_name: istio-policy
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - action: keep
            regex: istio-system;istio-policy;http-monitoring
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_service_name
              - __meta_kubernetes_endpoint_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 5s
      - job_name: istio-telemetry
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - action: keep
            regex: istio-system;istio-telemetry;http-monitoring
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_service_name
              - __meta_kubernetes_endpoint_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 5s
      - job_name: istio-pilot
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - action: keep
            regex: istio-system;istio-pilot;http-monitoring
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_service_name
              - __meta_kubernetes_endpoint_port_name
          - source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - source_labels:
              - __meta_kubernetes_service_name
            target_label: service
        scrape_interval: 5s
  rules: {}
EOF

cat >/tmp/ingress-values.yml <<EOF
controller:
  podAnnotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
  stats:
    enabled: true
EOF

helm upgrade prometheus stable/prometheus --install --namespace gitlab-managed-apps -f /tmp/prometheus-values.yml
helm upgrade ingress stable/nginx-ingress --install --namespace gitlab-managed-apps --reuse-values -f /tmp/ingress-values.yml
```
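As a quick sanity check that the ingress controller actually exposes the nginx stats on the annotated port (10254), something along these lines can be used; the pod name is a placeholder to look up first, and this is only a sketch, not part of the install script:

```
# Sketch of a sanity check (pod name is a placeholder): the nginx-ingress controller
# should serve metrics on the annotated port, including nginx_upstream_responses_total.
kubectl -n gitlab-managed-apps get pods | grep ingress
kubectl -n gitlab-managed-apps port-forward <ingress-controller-pod> 10254:10254 &
curl -s http://localhost:10254/metrics | grep nginx_upstream_responses_total | head
```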
What is the current bug behavior?
Two of the three projects show "No data found" on the Environments/Metrics page.
What is the expected correct behavior?
Each project should show graphs with metrics on the Environments/Metrics page.
Relevant logs and/or screenshots
I have nginx-ingress logs of the requests GitLab sends to Prometheus, and they show that GitLab builds the wrong query: it uses the kube_namespace of the third project every time.
For example, take this query:
sum(rate(nginx_upstream_responses_total{upstream=~"%{kube_namespace}-%{ci_environment_slug}-.*"}[2m])) by (status_code)
Project: server; Environment: staging; kube_namespace: server-257
10.100.17.0 - [10.100.17.0] - - [04/Apr/2019:14:11:55 +0000] "GET /api/v1/query_range?query=sum%28rate%28nginx_upstream_responses_total%7Bupstream%3D%7E%22stable-env-261-staging-.*%22%7D%5B2m%5D%29%29+by+%28status_code%29&start=1554358318.9819744&end=1554387118.982065&step=60 HTTP/1.1" 200 87 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.3p105" 368 0.002 [gitlab-managed-apps-prometheus-server-80] 10.100.98.6:9090 87 0.002 200
Project: webui; Environment: staging; kube_namespace: webui-258
10.100.83.0 - [10.100.83.0] - - [04/Apr/2019:14:19:24 +0000] "GET /api/v1/query_range?query=sum%28rate%28nginx_upstream_responses_total%7Bupstream%3D%7E%22stable-env-261-staging-.*%22%7D%5B2m%5D%29%29+by+%28status_code%29&start=1554358768.4918473&end=1554387568.49194&step=60 HTTP/1.1" 200 87 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.3p105" 367 0.002 [gitlab-managed-apps-prometheus-server-80] 10.100.98.6:9090 87 0.002 200
Project: stable-env; Environment: production; kube_namespace: stable-env-261
10.100.83.0 - [10.100.83.0] - - [04/Apr/2019:14:23:29 +0000] "GET /api/v1/query_range?query=sum%28rate%28nginx_upstream_responses_total%7Bupstream%3D%7E%22stable-env-261-production-.*%22%7D%5B2m%5D%29%29+by+%28status_code%29&start=1554359012.7282782&end=1554387812.728358&step=60 HTTP/1.1" 200 87 "-" "rest-client/2.0.2 (linux-gnu x86_64) ruby/2.5.3p105" 371 0.002 [gitlab-managed-apps-prometheus-server-80] 10.100.98.6:9090 87 0.002 200
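To make the mismatch explicit, here is the query GitLab actually sent for the server project (URL-decoded from the first log line) next to the query the template should produce for that project. The "expected" line is my reconstruction from the template above, not something GitLab logged:

```
# Actually sent for project "server" (decoded from the nginx-ingress log above):
ACTUAL='sum(rate(nginx_upstream_responses_total{upstream=~"stable-env-261-staging-.*"}[2m])) by (status_code)'

# What the template should yield for server (kube_namespace=server-257, environment slug "staging"):
EXPECTED='sum(rate(nginx_upstream_responses_total{upstream=~"server-257-staging-.*"}[2m])) by (status_code)'
```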
Output of checks
Results of GitLab environment info
Expand for output related to GitLab environment info
```
System information
System:          Ubuntu 16.04
Current User:    git
Using RVM:       no
Ruby Version:    2.5.3p105
Gem Version:     2.7.6
Bundler Version: 1.16.6
Rake Version:    12.3.2
Redis Version:   3.2.12
Git Version:     2.18.1
Sidekiq Version: 5.2.5
Go Version:      unknown

GitLab information
Version:         11.9.4
Revision:        4733950
Directory:       /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:      postgresql
URL:             https://gitlab.cl.nat.kz
HTTP Clone URL:  https://gitlab.cl.nat.kz/some-group/some-project.git
SSH Clone URL:   git@gitlab.cl.nat.kz:some-group/some-project.git
Using LDAP:      yes
Using Omniauth:  yes
Omniauth Providers: oauth2_generic

GitLab Shell
Version:         8.7.1
Repository storage paths:
- default:       /var/opt/gitlab/git-data/repositories
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
Git:             /opt/gitlab/embedded/bin/git
```
Results of GitLab application Check
Expand for output related to the GitLab application check
```
Checking GitLab subtasks ...

Checking GitLab Shell ...

GitLab Shell: ... GitLab Shell version >= 8.7.1 ? ... OK (8.7.1)
Running /opt/gitlab/embedded/service/gitlab-shell/bin/check
Check GitLab API access: OK
Redis available via internal API: OK
Access to /var/opt/gitlab/.ssh/authorized_keys: OK
gitlab-shell self-check successful

Checking GitLab Shell ... Finished

Checking Gitaly ...

Gitaly: ... default ... OK

Checking Gitaly ... Finished

Checking Sidekiq ...

Sidekiq: ... Running? ... yes
Number of Sidekiq processes ... 1

Checking Sidekiq ... Finished

Checking Incoming Email ...

Incoming Email: ... Reply by email is disabled in config/gitlab.yml

Checking Incoming Email ... Finished

Checking LDAP ...

LDAP: ... Server: ldapmain
LDAP authentication... Success
LDAP users with access to your GitLab server (only showing the first 100 results) ...

Checking LDAP ... Finished

Checking GitLab App ...

Git configured correctly? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config up to date? ... yes
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory exists? ... yes
Uploads directory has correct permissions? ... yes
Uploads directory tmp has correct permissions? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
Projects have namespace: ... ...
Redis version >= 2.8.0? ... yes
Ruby version >= 2.3.5 ? ... yes (2.5.3)
Git version >= 2.18.0 ? ... yes (2.18.1)
Git user has default SSH configuration? ... yes
Active users: ... 66

Checking GitLab App ... Finished

Checking GitLab subtasks ... Finished
```