Logs not being sent to Loki from Fluentd in Sylva 1.1.1

On a CAPO RKE2 deployment with 3 controller nodes on the management cluster, with the logging, loki, neuvector and gitea units enabled, we see errors indicating that Fluentd was not able to send logs to Loki.

Pod logs: cattle-logging-system/logging-root-fluentd-0:

2024-08-12 12:29:26 +0000 [warn]: #0 [clusterflow:cattle-logging-system:all-logs:clusteroutput:cattle-logging-system:loki] failed to write post to http://loki-gateway.loki.svc.cluster.local/loki/api/v1/push (413): <html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.23.4</center>
</body>
</html>
)

Looking at the logs from the Loki gateway, I was able to see that the request body size was about 4.5 MB.

Pod logs: loki/loki-gateway-...-...

2024/08/12 12:26:26 [error] 10#10: *57905 client intended to send too large body: 4477557 bytes, client: 100.72.162.254, server: , request: "POST /loki/api/v1/push HTTP/1.1", host: "loki-gateway.loki.svc.cluster.local"  

To fix this, the ConfigMap for the Loki gateway had to be adapted: the client_max_body_size directive in the http section of the nginx configuration needed to be raised, as shown below:

configmap: loki/loki-gateway

http { 
  ...
  client_max_body_size 5M;  # previously 4M
  ...
}
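
Editing the ConfigMap in place works, but the change may be reverted the next time the Loki chart is synced or upgraded. If the deployment uses the upstream grafana/loki Helm chart, the same limit can usually be set through the chart values instead; the key below is an assumption based on the upstream chart and should be checked against the chart version actually deployed:

# values override for the grafana/loki Helm chart (key name assumed)
gateway:
  nginxConfig:
    # raise the nginx upload limit for /loki/api/v1/push (chart default is 4M)
    clientMaxBodySize: 50M

With the values-based approach the gateway pods pick up the new nginx configuration on the next rollout, e.g. kubectl -n loki rollout restart deployment loki-gateway.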

This change was enough for Fluentd to successfully forward logs to Loki on this specific deployment.

It's worth mentioning that we also encountered a request body size exceeding 10 MB on another deployment.

I think we could set client_max_body_size to 50M without any problem.
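
A complementary option (not tested here) would be to cap the batch size on the Fluentd side so that a single push stays under the gateway limit. With the logging-operator this would go into the buffer settings of the ClusterOutput; the field names below are assumptions based on the logging-operator Loki output spec and the standard Fluentd buffer parameters, and would need to be verified against the versions shipped in Sylva:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: loki
  namespace: cattle-logging-system
spec:
  loki:
    url: http://loki-gateway.loki.svc.cluster.local
    buffer:
      # keep each flushed chunk well below the nginx client_max_body_size
      chunk_limit_size: 2M

This keeps individual POST bodies small regardless of the limit configured on the gateway side.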
