perf: Add resource limits and rename self-monitoring components

Summary

Add CPU and memory resource limits to all Docker Compose services to make resource usage predictable and prevent runaway containers.

Target environment: 4 vCPU / 8 GiB RAM host
Total allocation: ~5.1 vCPUs, ~7.6 GiB RAM. CPU is intentionally oversubscribed relative to the 4 vCPUs since the services do not peak at the same time; memory leaves roughly 0.4 GiB of headroom for the host OS.
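
For reference, a minimal sketch of how one of these limits looks in Compose Spec syntax, using the pgwatch-prometheus values from the allocation table below; the actual compose file may use the shorthand cpus / mem_limit keys instead of deploy.resources, so treat this as illustrative rather than a copy of the real config:

services:
  pgwatch-prometheus:
    deploy:
      resources:
        limits:
          cpus: "1.5"   # hard CPU cap, in cores
          memory: 1G    # hard memory cap; exceeding it gets the container OOM-killed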

Changes

  • Resource limits for all services (CPU + memory caps)
  • Renamed the self-monitoring components with a self- prefix for clarity:
    • cadvisor → self-cadvisor
    • node-exporter → self-node-exporter
    • postgres-exporter → self-postgres-exporter
  • Updated the Prometheus config, Grafana dashboard queries, and the CLI accordingly (see the config sketch after this list)
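
A sketch of the Prometheus scrape config after the rename. The job names match this MR; the targets and ports are assumptions based on the exporters' default ports, not copied from the actual prometheus.yml:

scrape_configs:
  - job_name: self-cadvisor
    static_configs:
      - targets: ["self-cadvisor:8080"]          # cAdvisor default port (assumed)
  - job_name: self-node-exporter
    static_configs:
      - targets: ["self-node-exporter:9100"]     # node_exporter default port (assumed)
  - job_name: self-postgres-exporter
    static_configs:
      - targets: ["self-postgres-exporter:9187"] # postgres_exporter default port (assumed)

Grafana panels that filter on the old job labels (e.g. job="cadvisor") are updated to the new job names in the same way.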

Resource Allocation Table

Service                  CPU (vCPUs)   Memory     Notes
pgwatch-prometheus       1.5           1 GiB      High CPU for pg_stat_statements load (observed 150%+)
postgres-reports         1.0           1.75 GiB   Runs periodically, not continuously
sink-prometheus          0.75          1.5 GiB    VictoriaMetrics time-series storage
grafana                  0.5           512 MiB    Dashboard UI
sink-postgres            0.4           1 GiB      Metrics storage (pgwatch postgres sink)
pgwatch-postgres         0.35          512 MiB    Secondary pgwatch instance
target-db                0.2           768 MiB    Demo database (not needed in production)
self-cadvisor            0.15          192 MiB    Container metrics
flask-backend            0.1           192 MiB    API backend
self-postgres-exporter   0.1           128 MiB    sink-postgres metrics
self-node-exporter       0.05          96 MiB     System metrics (skipped on macOS)

Totals: ~5.1 vCPUs, ~7.6 GiB RAM

Test Results

Tested on macOS with Docker:

NAME                       CPU %     MEM USAGE / LIMIT   MEM %
flask-pgss-api             13.61%    55.32MiB / 192MiB   28.81%
grafana-with-datasources   66.86%    283MiB / 512MiB     55.26%
pgwatch-postgres           0.18%     13.4MiB / 512MiB    2.62%
pgwatch-prometheus         0.00%     20.7MiB / 1GiB      2.02%
postgres-reports           0.00%     940KiB / 1.75GiB    0.05%
self-cadvisor              13.76%    50.27MiB / 192MiB   26.18%
self-postgres-exporter     0.00%     18.98MiB / 128MiB   14.83%
sink-postgres              0.32%     41.74MiB / 1GiB     4.08%
sink-prometheus            12.24%    52.69MiB / 1.5GiB   3.43%
target-db                  27.44%    49.28MiB / 768MiB   6.42%

Total idle memory usage: ~590 MiB (containers well within limits)

Prometheus Targets Status

victoriametrics: up
pgwatch-prometheus: up
self-cadvisor: up
self-node-exporter: down (expected - scaled to 0 on macOS)
self-postgres-exporter: up
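
One way the "scaled to 0 on macOS" behavior can be expressed is an override that sets zero replicas for the service (on Docker Desktop for macOS, node-exporter would only report the Linux VM's metrics rather than the host's). This is an assumption about the mechanism; the MR may equally rely on docker compose up --scale self-node-exporter=0:

# docker-compose.override.yml (hypothetical, macOS hosts only)
services:
  self-node-exporter:
    deploy:
      replicas: 0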

Test Plan

  • All containers start successfully
  • Containers stay within memory limits
  • Self-monitoring components renamed correctly
  • Prometheus scrape targets updated and healthy
  • Grafana dashboard queries updated for new job names
  • Production testing under load (recommended)

Closes #43
