[Promtail](https://grafana.com/docs/loki/latest/clients/promtail/) is an efficient log shipping agent. It works very well with Grafana [Loki](https://grafana.com/docs/loki/latest/) in a Kubernetes environment, scraping container logs with auto-discovery and labeling features.
It is very simple to install it as a DaemonSet on a Kubernetes cluster and get container logs. But sometimes we also need logs from the nodes the containers run on.
*Note*: This article uses Promtail v2.5.0; some configuration may differ by the time you read it.
We install Promtail using the Helm chart with its default values, and it works out of the box.
We assume we already have a working Loki and [Grafana](https://grafana.com/docs/grafana/latest/) installation.
```shell
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm upgrade --install --namespace observability --create-namespace promtail grafana/promtail
```
Now we can see that Promtail is running on the nodes, and logs are flowing to Loki. Grafana helps us explore them. Fine!
```shell
$ kubectl get pods -l app.kubernetes.io/name=promtail
NAME             READY   STATUS    RESTARTS   AGE
promtail-47jwn   1/1     Running   0          13m
promtail-mwkcb   1/1     Running   0          14m
```
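In Grafana Explore, a quick label query already confirms that container logs are arriving: with the chart's default scrape configuration, something like `{namespace="kube-system"}` should return the logs of every container in that namespace (the exact label set depends on the chart's relabeling rules).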
That's not enough for our needs: we also want to forward the host's `/var/log/syslog` file to Loki. Why not use this same DaemonSet? Let's configure Promtail to get it!
First of all, we need to mount the host folder `/var/log` into the pod. We will mount it at `/var/log/host` to avoid mixing it up with the container's own `/var/log`.
We specify custom parameters for the Helm chart using a `custom-values.yaml` file:
Upgrade the Helm chart, and see that the `/var/log/host` folder is there, great!
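As a sketch, the relevant part of `custom-values.yaml` could look like the following, assuming the chart exposes `extraVolumes` and `extraVolumeMounts` values (key names may differ between chart versions, and the volume name `node-logs` is just illustrative):

```yaml
# Mount the node's /var/log folder into the pod, read-only, at /var/log/host
extraVolumes:
  - name: node-logs
    hostPath:
      path: /var/log
extraVolumeMounts:
  - name: node-logs
    mountPath: /var/log/host
    readOnly: true
```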
```shell
$ helm upgrade --install --namespace observability promtail grafana/promtail -f custom-values.yaml
$ kubectl exec -it promtail-mwkcb -- mount | grep /var/log/host
/dev/sda1 on /var/log/host type ext4 (ro,relatime,data=ordered)
```
```shell
$ kubectl exec -it promtail-mwkcb -- tail /var/log/host/syslog
tail: cannot open '/var/log/host/syslog' for reading: Permission denied
command terminated with exit code 1
```
Oh, there is something wrong here! Let's check the file permissions:
```shell
$ kubectl exec -it promtail-mwkcb -- ls -l /var/log/host/syslog
-rw-r----- 1 102 adm 72327736 Jun 29 20:01 /var/log/host/syslog
```
This file is owned by user `102` and group `adm`. Using `getent` shows us the group ID:
```shell
$ kubectl exec -it promtail-mwkcb -- getent group adm
adm:x:4:
```
So, if we want the Promtail pod to read that file, we'll have to set a security context using `fsGroup` in our Helm custom values file:
```yaml
# Set fsGroup to allow syslog file reading
podSecurityContext:
  fsGroup: 4
```
```shell
$ helm upgrade --install --namespace observability promtail grafana/promtail -f custom-values.yaml
$ kubectl exec -it promtail-2dbh9 -- tail /var/log/host/syslog
Jun 29 20:16:28 worker-pool-node-d5353e kubelet[1596]: I0629 20:16:28.374480 1596 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
Jun 29 20:16:28 worker-pool-node-d5353e docker[1888]: I0629 20:16:28.375267 1 utils.go:81] GRPC call: /csi.v1.Node/NodeGetVolumeStats
```
That seems better! We can now read our node files from the promtail pod.
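Note that `fsGroup` does not change the ownership of `hostPath` files; it adds the group to the container process's supplemental groups, and that is what grants read access here. If you want to double-check it (reusing the pod name from above), the `id` command should now list GID `4` among the groups:

```shell
# The adm group (GID 4) should now appear in the container's supplemental groups
$ kubectl exec -it promtail-2dbh9 -- id
```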
Now, we have to configure Promtail to scrape this new file, using the simple [file target discovery](https://grafana.com/docs/loki/latest/clients/promtail/scraping/#file-target-discovery). We'll add a label to easily find these logs in Loki:
```yaml
# Scrape config to read syslog file from node
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for syslog
      - job_name: node-syslog
        static_configs:
          - targets:
              - localhost
            labels:
              job: node/syslog
              __path__: /var/log/host/syslog
```
Upgrade the Helm chart again, and now we have our node syslog file going to Loki, just like all our container logs, great!
We now have our logs, but we would like to have some dynamic extra labels, like the node hostname.
So, we use the `-config.expand-env=true` argument to expand environment variables, and add the one we want to the Promtail configuration:
```yaml
# Allow environment variables usage
extraArgs:
  - -config.expand-env=true

# Scrape config to read syslog file from node
config:
  snippets:
    extraScrapeConfigs: |
      # Add an additional scrape config for syslog
      - job_name: node-syslog
        static_configs:
          - targets:
              - localhost
            labels:
              job: node/syslog
              __path__: /var/log/host/syslog
              node_name: '${HOSTNAME}'
```
Upgrade the Helm chart again, check the logs in Grafana... Et voilà! We have all our node logs, with the labels we specified!
To sum up what we've done to get node logs to Loki using Promtail:
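For example, querying `{job="node/syslog"}` in Grafana Explore should now return the syslog lines from every node, and the `node_name` label lets us narrow down to a single machine.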
1. Mount `/var/log` from the node to the pod
1. Use the correct `fsGroup`
1. Add `static_configs` scraping configuration
1. Use environment variables to add labels
You can find the complete [custom values file](custom-values.yaml) here.
Now you can deep dive into your logs, and check what's wrong with your nodes... Enjoy!