1. 10 Dec, 2021 2 commits
  2. 01 Dec, 2021 2 commits
  3. 29 Nov, 2021 2 commits
  4. 23 Nov, 2021 2 commits
  5. 22 Nov, 2021 1 commit
  6. 18 Nov, 2021 1 commit
    •
      Remove config setting "total_limit_size" for google_cloud plugin · 6e48b30d
      Matt Smiley authored
      Revert !246
      which added support for configuring "total_limit_size".
      
      Instead we will use !254
      which added support for configuring "buffer_queue_limit".
      
      Background:
      
      Setting "total_limit_size" was ineffective because it gets superseded by
      "buffer_queue_limit" (a.k.a. "queue_limit_length").  These are all alternative
      ways of specifying the same thing: the max size of the output queue of log records
      to send to the output destination (GCP Stackdriver).  Setting "buffer_queue_limit"
      works because it has higher precedence and overrides the default.
      
      For reference, there are 3 ways to specify the max size of the queue of buffered log records
      that fluentd is trying to send to its output destination:
      * "total_limit_size" specifies the limit in bytes
      * "buffer_queue_limit" specifies the limit in chunks (where chunk size = "buffer_chunk_limit")
      * "queue_limit_length" is another spelling for "buffer_queue_limit"
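      As an illustrative sketch only (these are not the values shipped in !254), the
      chunk-based limit could be set on the fluentd output section like this:
      
      ```
      <match **>
        @type google_cloud
        # 512 chunks x 1 MB per chunk ~= 512 MB max queue
        buffer_chunk_limit 1m
        buffer_queue_limit 512
      </match>
      ```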
  7. 17 Nov, 2021 2 commits
  8. 09 Nov, 2021 4 commits
    •
      By default the "google_cloud" output plugin for fluentd · 6cbbee36
      Rehab authored
      uses a max queue size of 512 MB.  This limit is enforced
      regardless of the chunk size.
      
      Fluentd accumulates log records in a local buffer until
      either that buffer is full or a configurable amount of time
      has elapsed since the last flush.  If the write call to
      stackdriver fails, the buffer is added to a local in-memory
      queue and retried a little later.
      That queue has a max size (specified in either bytes or
      chunks).  Upon reaching that max queue size, an exception
      is thrown.  Example with walk-through:
      
      gitlab-com/gl-infra/production#5754 (comment 710964990)
      
      That exception also increments an error counter in
      the prometheus metric for the "google_cloud" plugin
      and its parent "copy" plugin.  These errors can trigger
      an alert, which is how we currently detect the symptom
      of the backlog queue having saturated.
      
      Fluentd does eventually catch up, so for now, we can
      increase the max size of the queue, but ideally we would
      also like to avoid accumulating a large backlog.
      
      For background, here is a list of findings so far:
      gitlab-com/gl-infra/production#5754 (comment 710965379)
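      Because "buffer_queue_limit" counts chunks rather than bytes, the effective
      byte cap is the product of the two settings.  A minimal Python sketch of that
      arithmetic (the function name is hypothetical; 1 MiB chunks with a 512-chunk
      queue reproduce the 512 MB default mentioned above):
      
      ```python
      def max_queue_bytes(buffer_chunk_limit_bytes: int,
                          buffer_queue_limit_chunks: int) -> int:
          """Effective cap on the fluentd output queue when the limit
          is expressed in chunks rather than bytes."""
          return buffer_chunk_limit_bytes * buffer_queue_limit_chunks
      
      # 1 MiB chunks, 512-chunk queue
      print(max_queue_bytes(1 * 1024 * 1024, 512))  # 536870912 (512 MiB)
      ```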
    • Configure max queue size for output to stackdriver · dad80a17
      Matt Smiley authored and Rehab committed
    •
      Merge branch 'update-ubuntu1604-image' into 'master' · 0f2c11fd
      Rehab authored
      fix: update deprecated ubuntu 1604 image for kitchen tests
      
      Closes #5
      
      See merge request !253
    • Rehab authored · 1607507d
  9. 04 Nov, 2021 2 commits
  10. 02 Nov, 2021 2 commits
    •
      Merge branch 'ab/wraparound-vacuum' into 'master' · 47fc1ec9
      Andrew Newdigate authored
      Parse vacuum to prevent wraparound correctly
      
      Closes gitlab-org/database-team/team-tasks#189
      
      See merge request !248
    •
      Parse vacuum to prevent wraparound correctly · 574c6eca
      Andreas Brandl authored
      When given an instance of vacuum to prevent wraparound, we fail to parse
      the log line (so it wouldn't have any parsed fields, which potentially
      prevents it from showing up in searches).
      
      This adds support for the following example:
      
      ```
      automatic aggressive vacuum to prevent wraparound of table
      "gitlabhq_production.pg_toast.pg_toast_559976213": index scans: 0
      pages: 0 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen
      tuples: 0 removed, 0 remain, 0 are dead but not yet removable, oldest
      xmin: 1075895098
      buffer usage: 25 hits, 1 misses, 1 dirtied
      avg read rate: 5.923 MB/s, avg write rate: 5.923 MB/s
      system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
      ```
      
      Closes gitlab-org/database-team/team-tasks#189
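      The fluentd parser change itself is not reproduced here, but the required
      pattern can be sketched in Python; the regex below is an assumption that
      accepts both the plain "automatic vacuum of table" wording and the
      aggressive wraparound variant shown above:
      
      ```python
      import re
      
      # Hypothetical pattern: "aggressive" and the wraparound phrasing
      # are optional, so both log-line variants yield a parsed table name.
      VACUUM_RE = re.compile(
          r'automatic (?:aggressive )?vacuum '
          r'(?:of|to prevent wraparound of) table\s+"(?P<table>[^"]+)"'
      )
      
      line = ('automatic aggressive vacuum to prevent wraparound of table '
              '"gitlabhq_production.pg_toast.pg_toast_559976213": index scans: 0')
      m = VACUUM_RE.search(line)
      print(m.group("table"))  # gitlabhq_production.pg_toast.pg_toast_559976213
      ```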
  11. 28 Oct, 2021 2 commits
  12. 26 Oct, 2021 2 commits
  13. 13 Oct, 2021 6 commits
  14. 11 Oct, 2021 2 commits
  15. 30 Jul, 2021 2 commits
  16. 29 Jul, 2021 4 commits
  17. 28 Jul, 2021 1 commit
  18. 27 Jul, 2021 1 commit