Logging agents can split logs by type
The plan is to use a DaemonSet of log collectors based on Elastic's Filebeat agents. Since we are collecting the Docker logs from each of the Kubernetes nodes, the configuration in the ConfigMap is the following:
kubernetes.yml: |-
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
Sample log messages that we are generating look like this:
{"@timestamp":"2018-01-18T13:29:01.837Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.1","topic":"gitlab"},"kubernetes":{"labels":{"app":"nginx","controller-revision-hash":"4060033508","pod-template-generation":"1","release":"jan12-demo-helm-charts-win"},"container":{"name":"nginx"},"pod":{"name":"jan12-demo-helm-charts-win-nginx-299d2"},"namespace":"default"},"beat":{"hostname":"filebeat-n2jc6","version":"6.1.1","name":"filebeat-n2jc6"},"source":"/var/lib/docker/containers/de32371feda92566803eca549d4f3e52da9faf5994d7202e905b33bea676e145/de32371feda92566803eca549d4f3e52da9faf5994d7202e905b33bea676e145-json.log","offset":21672,"stream":"stderr","message":"W0118 13:29:01.837348 7 controller.go:1054] ssl certificate default/gitlab-jan12-demo-helm-charts-win-tls does not contain a Common Name or Subject Alternative Name for host registry.helm-charts.win","prospector":{"type":"docker"}}
{"@timestamp":"2018-01-18T13:29:04.558Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.1","topic":"gitlab"},"source":"/var/lib/docker/containers/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c-json.log","offset":63917,"stream":"stdout","message":"2018-01-18 13:29:04,558 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:40350","prospector":{"type":"docker"},"kubernetes":{"namespace":"kafka","labels":{"app":"zookeeper","controller-revision-hash":"kafka-zookeeper-558cc8fb5d","release":"kafka"},"container":{"name":"k8szk"},"pod":{"name":"kafka-zookeeper-2"}},"beat":{"name":"filebeat-n2jc6","hostname":"filebeat-n2jc6","version":"6.1.1"}}
{"@timestamp":"2018-01-18T13:29:04.560Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.1","topic":"gitlab"},"prospector":{"type":"docker"},"kubernetes":{"pod":{"name":"kafka-zookeeper-2"},"namespace":"kafka","labels":{"app":"zookeeper","controller-revision-hash":"kafka-zookeeper-558cc8fb5d","release":"kafka"},"container":{"name":"k8szk"}},"beat":{"name":"filebeat-n2jc6","hostname":"filebeat-n2jc6","version":"6.1.1"},"offset":64138,"stream":"stdout","message":"2018-01-18 13:29:04,560 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:40350","source":"/var/lib/docker/containers/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c-json.log"}
{"@timestamp":"2018-01-18T13:29:04.561Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.1","topic":"gitlab"},"stream":"stdout","message":"2018-01-18 13:29:04,561 [myid:3] - INFO [Thread-29:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:40350 (no session established for client)","prospector":{"type":"docker"},"kubernetes":{"pod":{"name":"kafka-zookeeper-2"},"namespace":"kafka","labels":{"release":"kafka","app":"zookeeper","controller-revision-hash":"kafka-zookeeper-558cc8fb5d"},"container":{"name":"k8szk"}},"beat":{"name":"filebeat-n2jc6","hostname":"filebeat-n2jc6","version":"6.1.1"},"source":"/var/lib/docker/containers/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c/7a29b7df74692241d6cb6aa8294eef46436cf24413d7356a6b7dcd9ffbfc2e1c-json.log","offset":64371}
{"@timestamp":"2018-01-18T13:29:05.526Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.1.1","topic":"gitlab"},"beat":{"name":"filebeat-dm7ll","hostname":"filebeat-dm7ll","version":"6.1.1"},"source":"/var/lib/docker/containers/6298b013876e75446e75648e53060258f620cc01cc5339076222bc61ff623012/6298b013876e75446e75648e53060258f620cc01cc5339076222bc61ff623012-json.log","offset":2736763,"stream":"stdout","message":"2018-01-18 13:29:05,525 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:38308","prospector":{"type":"docker"},"kubernetes":{"labels":{"app":"zookeeper","controller-revision-hash":"kafka-zookeeper-558cc8fb5d","release":"kafka"},"container":{"name":"k8szk"},"pod":{"name":"kafka-zookeeper-1"},"namespace":"kafka"}}
But with the current configuration, every single message is sent to the same Kafka topic. We need a way to separate the logs based on some condition; my first guess is to have a separate Kafka topic per `app` label in Kubernetes.
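One possible approach, sketched under the assumption that the logs are shipped via Filebeat's Kafka output (the broker address below is a placeholder): the `topic` setting accepts a format string, so it can reference the `kubernetes.labels.app` field that `add_kubernetes_metadata` attaches to each event, as seen in the samples above.

```yaml
output.kafka:
  hosts: ["kafka:9092"]  # placeholder broker address
  # Route each event to a topic named after its Kubernetes app label,
  # e.g. events from the samples above would go to "nginx" or "zookeeper".
  topic: '%{[kubernetes.labels.app]}'
```

For events where the label might be missing, the `topics` setting with `when` conditions can select a fallback topic instead; this is untested here and would need to be verified against the Filebeat version in use (6.1.1 in the samples).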