- Feb 08, 2024
Bob Van Landuyt authored
Since 9c62fa8f, labkit does not know what keys might be part of the context. But I forgot to remove this dead method. The method as it currently stands would fail, since the `KNOWN_KEYS` constant does not exist.
- Apr 19, 2023
Alejandro Rodríguez authored
Since we decided to go with a unified header approach
- Oct 05, 2022
- Jan 25, 2022
Bob Van Landuyt authored
It would be handy if all projects used the same keys to log data, which is why I initially added this list, but having to modify all of the projects to keep it in sync would become cumbersome. So this changes it so that all keys are allowed: when passing them in through Labkit, a `meta.` prefix is added (for logs). These will be propagated by Labkit when found in Sidekiq jobs. Now we need to pay attention that the other projects pushing into a context use the same format. But having duplicate information logged in different fields is likely better than losing information.
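For illustration, the prefixing behaviour described above could be sketched as follows; the `ContextSketch` module and its `log_fields` method are made up for the example, and only the `meta.` prefix itself comes from the message above:

    # Illustrative sketch only, not Labkit's real code: arbitrary context
    # keys are accepted and get a `meta.` prefix when turned into log fields.
    module ContextSketch
      LOG_PREFIX = 'meta.'

      def self.log_fields(context_hash)
        context_hash.each_with_object({}) do |(key, value), fields|
          fields["#{LOG_PREFIX}#{key}"] = value
        end
      end
    end

    ContextSketch.log_fields(user: 'jane.doe', project: 'jane.doe/cool')
    # => {"meta.user"=>"jane.doe", "meta.project"=>"jane.doe/cool"}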
- Jul 08, 2021
Matthias Käppler authored
- Apr 13, 2021
Takuya Noguchi authored
Signed-off-by: Takuya Noguchi <takninnovationresearch@gmail.com>
- Mar 04, 2021
Alex Buijs authored
- Mar 03, 2021
Bob Van Landuyt authored
- Dec 16, 2020
Igor authored
- Dec 14, 2020
Sean McGivern authored
`Labkit::Context#to_headers` works like `#to_h`, but:

1. Excludes the correlation ID (which will already be set in another header).
2. Has keys suitable for use in HTTP headers, prefixed with `Labkit::Context::HEADER_PREFIX`.

Because the keys go through `.log_key`, the prefix will mostly be `X-Gitlab-Meta-`.
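A rough sketch of that behaviour is below; the `HEADER_PREFIX` value, the `META_PREFIX` constant, and the `to_headers_sketch` helper are assumptions for the example, and only the described behaviour (dropping the correlation ID and prefixing the remaining keys) comes from the commit message:

    # Illustrative sketch, not Labkit's real implementation.
    HEADER_PREFIX = 'X-Gitlab-' # assumed value for the example
    META_PREFIX   = 'Meta-'     # assumed, mirrors the `meta.` log prefix

    def to_headers_sketch(context_hash)
      context_hash
        .reject { |key, _| key.to_s == 'correlation_id' } # sent in its own header
        .each_with_object({}) do |(key, value), headers|
          header_name = key.to_s.split('_').map(&:capitalize).join('-')
          headers["#{HEADER_PREFIX}#{META_PREFIX}#{header_name}"] = value
        end
    end

    to_headers_sketch('correlation_id' => 'abc123', 'user' => 'jane.doe')
    # => {"X-Gitlab-Meta-User"=>"jane.doe"}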
- Oct 08, 2020
Sean McGivern authored
This field is present in our metrics but not our logs, which is the wrong way around: our logs should be a superset of our metric labels.
- Mar 20, 2020
Bob Van Landuyt authored
This field will be used for the `ReactiveCachingWorker` which can perform different kinds of work based on the class that calls it.
- Jan 10, 2020
Oswaldo Ferreira authored
- Dec 18, 2019
Bob Van Landuyt authored
The context can be passed from the client applications as a hash. Every time a new context is added, it will inherit the values of the previous context. Those previous values will be overridden by any new ones passed.

A context always has a correlation id that is not empty. If an empty correlation id is passed, a new one will be generated. There is a `Labkit::Middleware::Rack` that will set the correlation id to the current request id. Because of this, every newly started context within a rack request will get the same correlation id.

Using the `Labkit::Middleware::Sidekiq::Client` middleware, the context active at the moment of scheduling a job will be serialized into the job. The `Labkit::Middleware::Sidekiq::Server` middleware will load the context that was stored on the job, so newly scheduled jobs would start with the same context. This context could be modified by the job.

In this iteration the following extra values can be specified on the context:

- user
- project
- root_namespace

**Usage of the Labkit::Context**

The preferred way of specifying a context is using a block:

    Labkit::Context.with_context(
      user: 'jane.doe',
      project: 'jane.doe/cool'
    ) do |context|
      # application code
    end

If there's no way to wrap the application code into a block (for example for the Grape API), then the context can be pushed and popped:

    def context
      @context ||= Labkit::Context.push(user: 'jane.doe')
    end

    before { context }
    after { Labkit::Context.pop(context) }

It is possible to provide procs when assigning values to the context:

    Labkit::Context.with_context(user: -> { current_user }) { # ... }

This proc will only be executed when needed: when serialising the context into a log or a job. This happens when calling `Labkit::Context.to_h`.
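As a hedged illustration of that lazy proc behaviour, a context-like object could resolve procs only at serialisation time; the `LazyContextSketch` class below is invented for the example and is not Labkit's implementation:

    # Sketch of lazy context values: procs are only called when the
    # context is serialised, e.g. into a log line or a Sidekiq job.
    class LazyContextSketch
      def initialize(data)
        @data = data
      end

      # Mirrors the idea behind `Labkit::Context.to_h`: procs resolve here.
      def to_h
        @data.transform_values { |value| value.respond_to?(:call) ? value.call : value }
      end
    end

    expensive_lookup = -> { 'jane.doe' } # not executed yet
    context = LazyContextSketch.new(user: expensive_lookup, project: 'jane.doe/cool')
    context.to_h
    # => {:user=>"jane.doe", :project=>"jane.doe/cool"}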