- Mar 02, 2020
Craig Furman authored
If these logs are sent to Elasticsearch, it will not be able to process nested object fields, as this causes a type mismatch with scalar elements in the same array across log lines. This is a second attempt; the first (reverted) attempt modified the actual job object used by Sidekiq.
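A minimal sketch of the approach, using a hypothetical helper name (not GitLab's actual implementation): serialise any non-scalar argument to a JSON string, so every element of the logged `args` array is a scalar and Elasticsearch never sees a mixed scalar/object array.

```ruby
require 'json'

# Illustrative helper: keep scalar arguments as-is, but serialise hashes
# and arrays to JSON strings before the args array is logged.
def stringify_nested_args(args)
  args.map do |arg|
    case arg
    when Hash, Array then arg.to_json
    else arg
    end
  end
end

stringify_nested_args([1, 'a', { 'key' => 'value' }])
# => [1, "a", "{\"key\":\"value\"}"]
```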
- Feb 29, 2020
- Feb 28, 2020
Craig Furman authored
If these logs are sent to Elasticsearch, it will not be able to process nested object fields, as this causes a type mismatch with scalar elements in the same array across log lines.
- Feb 17, 2020
Sidekiq stores a job's error details in the payload for the _next_ run, so that it can display the error in the Sidekiq UI. This is because Sidekiq's main state is the queue of jobs to be run. However, in our logs, this is very confusing, because we shouldn't have any error at all when a job starts, and we already add an error message and class to our logs when a job fails.
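The fix can be sketched as follows (the helper name is illustrative; the field names follow Sidekiq's retry bookkeeping): return a copy of the job hash without the previous run's error details, so the "job start" log line carries no stale error fields while the original hash — which Sidekiq still needs for its retry UI — stays untouched.

```ruby
# Fields Sidekiq stores on a job payload to describe the previous failure.
RETRY_ERROR_FIELDS = %w[error_message error_class error_backtrace failed_at retried_at].freeze

# Build a scrubbed copy for logging; never mutate the job hash itself,
# since that was the mistake in the first (reverted) attempt at this fix.
def scrub_retry_error_fields(job)
  job.reject { |key, _| RETRY_ERROR_FIELDS.include?(key) }
end
```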
- Feb 14, 2020
Sean McGivern authored
We did this for Sidekiq arguments, but not for HTTP request params. We now do the same everywhere: Sidekiq arguments, Grape params, and Rails controller params. As the params start life as hashes, the order is defined by whatever's creating the hashes.
- Jan 10, 2020
Stan Hu authored
Previously when an exception occurred in Sidekiq, Sidekiq would export logs with timestamps (e.g. created_at, enqueued_at) in floating point seconds, while other jobs would report in ISO 8601 format. This inconsistency in data types would cause Elasticsearch to drop logs that did not match the schema type (date in most cases). This commit moves the responsibility of formatting timestamps to the Sidekiq JSON formatter where it properly belongs. The job logger now generates timestamps with floats, just as Sidekiq does. This ensures that timestamps are manipulated only in one place. See https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/8269
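The conversion can be sketched like this (the helper name and the exact field list are assumptions, not the actual formatter code): the formatter converts float epoch-second timestamps to ISO 8601 in one place, so every log line carries a consistent date type regardless of where the job payload came from.

```ruby
require 'time'

# Illustrative formatter step: Sidekiq stores created_at / enqueued_at as
# floating-point epoch seconds; convert them to ISO 8601 strings here, and
# only here, so no other component needs to manipulate timestamps.
def format_job_timestamps(data)
  %w[created_at enqueued_at started_at failed_at retried_at].each do |key|
    data[key] = Time.at(data[key]).utc.iso8601(3) if data[key].is_a?(Float)
  end
  data
end
```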
- Jan 07, 2020
Sean McGivern authored
Sidekiq JSON logs have total duration, queuing time, Gitaly time, and CPU time. They don't (before this change) have database time. We provide two fields, db_duration and db_duration_s, because the units of the existing duration fields are inconsistent: the _s suffix makes the unit (seconds) explicit, while the un-suffixed field keeps the raw figure.
- Dec 17, 2019
Aakriti Gupta authored
This is done to standardize the timestamp format across log files.
- Oct 28, 2019
Adds a Prometheus histogram, `sidekiq_jobs_queue_duration_seconds`, for recording the duration that a Sidekiq job is queued for before being executed. This matches the `scheduling_latency_s` field emitted from structured logging for the same purpose.
- Oct 11, 2019
Qingyu Zhao authored
When measuring a Sidekiq job's CPU time, `Process.times` is wrong because it counts the CPU time of all threads in the current Sidekiq process. Use `Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)` instead. Removed `system_s`, `user_s`, and `child_s`, since we cannot get these values for the job thread. Added `cpu_s`: the CPU time used by the job thread, including both system time and user time.
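A sketch of the measurement (illustrative code, not the actual middleware): `CLOCK_THREAD_CPUTIME_ID` reports CPU time (user + system) consumed by the calling thread only, whereas `Process.times` aggregates every thread in the process.

```ruby
# CPU seconds consumed by the calling thread alone, as a Float.
def thread_cpu_s
  Process.clock_gettime(Process::CLOCK_THREAD_CPUTIME_ID)
end

start = thread_cpu_s
50_000.times { Math.sqrt(rand) }  # stand-in for the job's work
elapsed_cpu = thread_cpu_s - start  # per-thread user + system time
```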
- Sep 23, 2019
Stan Hu authored
As mentioned in https://github.com/mperham/sidekiq/wiki/Error-Handling, Sidekiq can be configured with an exception handler. We use this to log the exception in a structured way so that `correlation_id`, `class`, and other useful fields are available. The previous error backtrace in the `StructuredLogger` class did not provide useful information because Sidekiq swallows the exception and raises a `JobRetry::Skip` exception. Closes #29425
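Registration follows Sidekiq's documented `error_handlers` hook; the following is an illustrative configuration sketch (not GitLab's actual initializer), with a hypothetical structured-logging call:

```ruby
# config/initializers/sidekiq.rb (sketch)
Sidekiq.configure_server do |config|
  # Each handler receives the raised exception and a context hash that
  # includes the job payload, so structured fields remain available even
  # though Sidekiq re-raises JobRetry::Skip internally.
  config.error_handlers << proc do |exception, context|
    Sidekiq.logger.warn(
      'class' => context.dig(:job, 'class'),
      'exception_class' => exception.class.name,
      'message' => exception.message
    )
  end
end
```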
- Aug 22, 2019
- Aug 09, 2019
Stan Hu authored
This will help identify Sidekiq jobs that perform an excessive number of filesystem accesses. The timing data is stored in `RequestStore`, but this is only active within the middleware and is not directly accessible to the Sidekiq logger. However, it is possible for the middleware to modify the job hash to pass this data along to the logger.
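The hand-off can be sketched as a middleware (class and field names are hypothetical): the middleware runs inside the instrumentation scope, so it copies the per-job counters into the job hash, which the logger receives after the middleware's `RequestStore` scope has ended.

```ruby
# Illustrative Sidekiq server middleware: after the job runs, merge the
# collected filesystem metrics into the job hash for the logger to pick up.
class FilesystemMetricsMiddleware
  def call(_worker, job, _queue)
    yield
  ensure
    # In the real code these figures would come from RequestStore;
    # hard-coded here purely for illustration.
    job['file_count'] = 3
    job['file_duration_s'] = 0.012
  end
end
```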
Stan Hu authored
This duration was being reported as a negative number because `current_time` was a monotonic counter, not an absolute time. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65748
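The underlying rule, sketched (illustrative code, not the patched method): a duration must subtract two readings of the same clock. A monotonic reading is not an epoch time, so mixing it with wall-clock values such as a job's `enqueued_at` epoch seconds can yield negative "durations".

```ruby
# Correct: both endpoints come from the monotonic clock, which only ever
# moves forward and is unaffected by wall-clock adjustments.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
work = 1000.times.sum { |i| i }  # stand-in workload
elapsed_s = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
```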
- Jul 31, 2019
- Jan 22, 2019
Sean McGivern authored
When logging arguments from Sidekiq to JSON, restrict the size of `args` to 10 KB (when converted to JSON). This is to avoid blowing up with excessively large job payloads.
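A sketch of one way to enforce such a limit (hypothetical helper, not GitLab's actual truncation code): serialise arguments one by one, stop once the accumulated JSON size would exceed the budget, and append a truncation marker in place of the rest.

```ruby
require 'json'

MAX_ARGS_BYTES = 10 * 1024  # the 10 KB budget from the commit message

# Keep arguments while their cumulative JSON size fits the budget;
# replace everything past the limit with a single '...' marker.
def limited_args(args)
  total = 0
  limited = []
  args.each do |arg|
    total += arg.to_json.bytesize
    if total > MAX_ARGS_BYTES
      limited << '...'
      break
    end
    limited << arg
  end
  limited
end
```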
- Nov 20, 2018
gfyoung authored
Enables frozen string literals for the following:
* lib/gitlab/patch/**/*.rb
* lib/gitlab/popen/**/*.rb
* lib/gitlab/profiler/**/*.rb
* lib/gitlab/project_authorizations/**/*.rb
* lib/gitlab/prometheus/**/*.rb
* lib/gitlab/query_limiting/**/*.rb
* lib/gitlab/quick_actions/**/*.rb
* lib/gitlab/redis/**/*.rb
* lib/gitlab/request_profiler/**/*.rb
* lib/gitlab/search/**/*.rb
* lib/gitlab/sherlock/**/*.rb
* lib/gitlab/sidekiq_middleware/**/*.rb
* lib/gitlab/slash_commands/**/*.rb
* lib/gitlab/sql/**/*.rb
* lib/gitlab/template/**/*.rb
* lib/gitlab/testing/**/*.rb
* lib/gitlab/utils/**/*.rb
* lib/gitlab/webpack/**/*.rb

Partially addresses gitlab-org/gitlab-ce#47424.
- Apr 04, 2018