pgwatch generates excessive INFO logs (~80% of namespace log volume)
Problem
The pgwatch container in postgres-ai-mon-production is generating ~80% of all logs in the namespace due to verbose INFO-level logging.
Evidence
$ gcloud logging read 'resource.type="k8s_container" AND resource.labels.namespace_name="postgres-ai-mon-production"' \
--limit=10000 --format='value(resource.labels.container_name)' | sort | uniq -c | sort -nr
   8138 pgwatch            # 81% of all logs
   1509 grafana
    306 postgres-exporter
     47 victoriametrics
Root Cause
pgwatch logs every metric fetch at INFO level:
[INFO] [source:prod_replica_postgres_ai] [metric:replication] [rows:0] measurements fetched
[INFO] [source:prod_replica_postgres_ai] [metric:pg_stat_replication] [rows:0] measurements fetched
[INFO] [source:prod_replica_postgres_ai] [metric:pg_blocked] [rows:0] measurements fetched
...
With 4 database sources × ~25 metrics, each fetched every 30 seconds, that works out to ~12,000 log lines of noise per hour.
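The estimate above can be sanity-checked with shell arithmetic: ~100 fetch log lines per 30-second cycle, and 120 such cycles per hour:

```shell
# 4 sources x ~25 metrics = ~100 log lines per 30-second fetch cycle;
# 3600 / 30 = 120 cycles per hour.
lines_per_hour=$(( 4 * 25 * (3600 / 30) ))
echo "$lines_per_hour"   # 12000
```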
Solution
Add --log-level=warn to pgwatch services to suppress INFO-level logs while keeping warnings and errors.
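One way to apply this in-cluster is a JSON patch appending the flag to the container's args. This is a sketch of a deployment-config change: the deployment name `pgwatch` and container index `0` are assumptions; adjust them to match the actual manifests.

```shell
# Append --log-level=warn to the pgwatch container args.
# Assumes the deployment is named "pgwatch" and the target container is at index 0.
kubectl -n postgres-ai-mon-production patch deployment pgwatch \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--log-level=warn"}]'
```

If the manifests are managed declaratively (Helm/Kustomize), the same flag should instead be added to the args list in source control so the change survives redeploys.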
Impact
- Reduced log storage costs in GCP Logging
- Easier to find actual issues in logs
- No impact on metrics collection or monitoring functionality
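To verify the change after rollout, the evidence query can be re-run, and a severity-filtered query (sketch, assuming the same namespace and container name) confirms warnings and errors still flow:

```shell
# Re-count per-container log volume after the rollout (same query as in Evidence).
gcloud logging read 'resource.type="k8s_container" AND resource.labels.namespace_name="postgres-ai-mon-production"' \
  --limit=10000 --format='value(resource.labels.container_name)' | sort | uniq -c | sort -nr

# Spot-check that WARNING-and-above entries from pgwatch are still ingested.
gcloud logging read 'resource.labels.container_name="pgwatch" AND severity>=WARNING' --limit=20
```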