- Apr 21, 2021
Dylan Griffith authored
As part of our efforts to avoid using joins in Elasticsearch queries (&2054), we need to "denormalize" the permission-related fields needed for searching, that is, copy them into the child documents. To allow searching for merge requests without joining to the project, we need to store the project's `merge_requests_access_level` and `visibility_level` on the merge request record.

This MR also introduces an extra `project_id` field for the merge request in Elasticsearch. It is redundant, since we already have `target_project_id` and `project_id` is just aliased to that value, but adding it makes the query logic simpler to share across all document types. Previously they could all consistently join to a parent "project", so when changing the code it helps that they all have a field called `project_id`.

As well as saving these new fields with the merge requests, we also need to update them when they change, which this MR also does. We need to track updates in a few places:

1. When a `ProjectFeature` record is changed (this is where `merge_requests_access_level` lives).
2. When a `Project` is updated (this is where `visibility_level` lives).
3. When a project is moved to another group. This logic was already implemented generically to delegate to `Project`, but we update the spec for this just to be safe.

Indexing these new fields also introduced an N+1 query, which required an update to `MergeRequestClassProxy` to preload the new fields we're now setting in Elasticsearch (see the sketch below).

Changelog: changed
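A minimal sketch of what the denormalized document might look like; the proxy class shape here is illustrative, not the exact GitLab implementation:

```ruby
# Illustrative sketch only; the proxy class shape is an assumption.
module Elastic
  class MergeRequestInstanceProxy
    # Copy ("denormalize") the parent project's permission fields onto
    # the merge request document so searches need no join.
    def as_indexed_json(merge_request)
      project = merge_request.target_project

      {
        id: merge_request.id,
        target_project_id: merge_request.target_project_id,
        # Alias so every document type shares the same field name.
        project_id: merge_request.target_project_id,
        visibility_level: project.visibility_level,
        merge_requests_access_level: project.project_feature.merge_requests_access_level
      }
    end
  end
end
```

Preloading the associations used here (for example with `includes(target_project: :project_feature)`) is the kind of change needed in `MergeRequestClassProxy` to avoid the N+1 when indexing many merge requests at once.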
- Apr 19, 2021
- Apr 15, 2021
Brett Walker authored
for reference filters
Brett Walker authored
in preparation for refactoring. No changes were made within the files, in order to better preserve the history of the original file (a rename instead of an add/delete)
George Koltsov authored
- Bulk Import EE files were previously in an incorrect location under `ee/lib/ee/bulk_imports`. Move all of the files to `ee/lib/bulk_imports`, since they do not have corresponding CE files.
Record epic destroy event on usage ping
- Apr 14, 2021
During a merge, we attempt to find a matching merge request with a SHA using a replica that should be up to date with the primary for a given PostgreSQL log sequence number (LSN). However, there is a race condition that can happen if service discovery alters the host list after this check has taken place. This most likely happens when a Web worker starts:

1. When Rails starts up for the first time, there is a 1- to 2-minute delay before service discovery finds replicas (see #271575).
2. During this time `LoadBalancer#all_caught_up?` will return `true`. This indicates to the Web worker that it can use replicas and does not have to use the primary.
3. During a request, service discovery may load all the replicas and change the host list. As a result, the next read may be directed to a lagging replica.

This may cause a merge to fail if it cannot find a match. When a user merges a merge request, Sidekiq logs the minimum LSN needed to match a merge request for the API. If we have this LSN, we now:

1. Select, from the available list of replicas, those that meet this LSN requirement.
2. Store this subset for the given request.
3. Round-robin reads across this subset of replicas (sketched below).

Relates to #247857
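A rough sketch of that replica-selection idea; the `caught_up?` and `next_host` names are hypothetical, and the real load balancer is considerably more involved:

```ruby
# Illustrative sketch only; host API names are assumptions.
class ReplicaSubset
  def initialize(hosts)
    @hosts = hosts
    @index = 0
  end

  # Keep only replicas that have replayed at least the given LSN.
  def self.for_lsn(hosts, lsn)
    new(hosts.select { |host| host.caught_up?(lsn) })
  end

  # Round-robin reads across the caught-up subset for this request.
  def next_host
    return if @hosts.empty?

    host = @hosts[@index % @hosts.size]
    @index += 1
    host
  end
end
```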
- Apr 13, 2021
Jason Goodman authored
Link to group or project memberships page
- Apr 12, 2021
Amy Qualls authored
Fixing lots of random instances of "merge request" where it was miscapitalized. The "M" should only be capitalized at the beginning of a sentence or slug, and the "R" should never be capitalized.
Kassio Borges authored
Pipelines are grouped into stages, like CI jobs. Stages run in sequence, one after the other, while the pipelines within a stage run in parallel. Each pipeline runs in an individual `BulkImports::PipelineWorker` job (sketched below), which enables:

- Smaller/shorter jobs: background jobs can be interrupted during deploys or other unexpected infrastructure/ops events. To make jobs more resilient, it's desirable to have smaller jobs wherever possible.
- Faster imports: some pipelines can run in parallel, which reduces the total time of an import.
- (Follow-up) network/rate limit handling: when a pipeline gets rate limited, it can be scheduled to retry after the rate limit timeout.

gitlab-org/gitlab#262024
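A simplified sketch of that scheduling model; the `pipeline_trackers` association, the `pending` scope, and the exact call shape are assumptions, not the real GitLab code:

```ruby
# Illustrative sketch; association and scope names are assumptions.
class StageRunner
  def initialize(bulk_import)
    @bulk_import = bulk_import
  end

  # Stages run sequentially; pipelines within a stage run in parallel.
  def run_next_stage
    stage = @bulk_import.pipeline_trackers.pending.minimum(:stage)
    return if stage.nil?

    # Enqueue one PipelineWorker job per pipeline in the current stage.
    @bulk_import.pipeline_trackers.pending.where(stage: stage).find_each do |tracker|
      BulkImports::PipelineWorker.perform_async(tracker.id)
    end
  end
end
```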
Patrick Bajao authored
These actions/endpoints are not under the code review feature category. They are under the following categories:

- `code_testing`
- `usability_testing`
- `continuous_integration`
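As an illustration, GitLab controllers tag actions with a feature category via a `feature_category` helper; a sketch with invented controller and action names:

```ruby
# Sketch only; the controller and action names are invented.
class Projects::ExampleController < ApplicationController
  feature_category :code_testing, [:test_reports]
  feature_category :continuous_integration, [:pipeline_status]
end
```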
- Apr 09, 2021
Kassio Borges authored
Sarah Yasonik authored
Catalin Irimie authored
Because the internal URL is something internal to the Geo sites, and the external URL is what users would access in the first place, this updates the secondary login OAuth flow to always redirect users to the external URL even if an internal URL is set for the primary.
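A tiny sketch of the URL choice; the `GeoNode` attribute name is an assumption, not the actual field:

```ruby
# Sketch only; attribute names are assumptions.
def oauth_redirect_base(primary_node)
  # Users sign in through the external URL, so the OAuth flow must
  # redirect there even when an internal URL is configured for
  # site-to-site traffic.
  primary_node.external_url
end
```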
- Apr 08, 2021
Jeremy Jackson authored
- This is intended to capture whether the experiment system is broken or not working as intended, in a way that will hopefully surface to the experiment team in an obvious way.
- This also makes it much more explicit how we test the gitlab_experiment iglu/snowplow context.
Karthik Sivadas authored
Contributes to #322739
Adam Hegyi authored
This change adds pagination to the value stream analytics record list.
George Koltsov authored
Felipe authored
Record epic issues moved between projects with Redis HLL.
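For context, a minimal sketch of recording a distinct-count event with a Redis HyperLogLog (HLL); the key name here is invented, not the actual metric name:

```ruby
require 'redis'

redis = Redis.new
user_id = 42 # the acting user

# A HyperLogLog estimates distinct counts in constant memory:
# PFADD records a member, PFCOUNT reads the estimate.
redis.pfadd('epic_issue_moved_weekly', user_id)
estimated_unique_users = redis.pfcount('epic_issue_moved_weekly')
```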
Gosia Ksionek authored
Rajendra Kadam authored
Fix specs with the new methods
- Apr 07, 2021
Quang-Minh Nguyen authored
Priyan Sureshbabu authored
Dylan Griffith authored
Since we already had an N+1 test, this required a couple of changes to detect these problems:

1. Add `:elastic` to the test setup in order to allow GitLab to finish setting up the Elasticsearch index and to ensure that `remove_permissions_data_from_notes_documents` was considered finished. Without this we couldn't trigger the logic to load project permissions for the notes: https://gitlab.com/gitlab-org/gitlab/-/blob/02f78e50879aa2e9facafe724a61e20d4640a342/ee/lib/elastic/latest/note_instance_proxy.rb#L33
2. Move a `stub_const` into a separate test block so it didn't interfere with the N+1 test for notes (see the sketch below). Without this we weren't actually processing all the notes, and the test was a false positive. The `stub_const` was only necessary for the one test that actually exercises this limit, and it was interfering with some other N+1 tests where we were adding more than 10 documents, so moving it closer to the relevant test makes more sense. This also required updating a few of the other tests to stop referring to the `limit` variable; its value was always equal to the number of documents in `fake_refs` anyway, so the variable was unnecessary.
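A schematic RSpec sketch of moving the `stub_const` into only the test that needs it; helper names such as `index_documents` and the stubbed constant are invented, not the actual spec code:

```ruby
# Illustrative sketch; the real spec file and constant names differ.
RSpec.describe 'bulk indexing' do
  it 'avoids N+1 queries when indexing notes' do
    # No stub_const here: all documents in fake_refs are processed,
    # so a hidden N+1 across project permissions would be detected.
    control = ActiveRecord::QueryRecorder.new { index_documents(fake_refs) }

    add_more_fake_refs
    expect { index_documents(fake_refs) }.not_to exceed_query_limit(control)
  end

  it 'limits the number of documents per bulk request' do
    # The limit stub now lives only in the test that exercises it.
    stub_const('Elastic::Limits::MAX_DOCUMENTS', 10)

    expect(bulk_requests_for(fake_refs).size).to be > 1
  end
end
```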