Implement direct reassignment for placeholder user records

What does this MR do and why?

Introduce a new reassignment process that directly reassigns records associated with placeholder users without using placeholder references.

This approach leverages the unique relationship between placeholder users and the reassigned user: each placeholder user is linked to exactly one external user (the reassigned user), meaning all contributions assigned to a placeholder user belong to the same external user. Therefore, when a group owner reassigns contributions from a placeholder user, we can replace all occurrences of the placeholder user ID with the reassigned user ID.

This optimization does not apply to Import User types, where contributions can belong to multiple external users and require the placeholder reference table to maintain proper attribution. So, when the Import::SourceUser#placeholder_user is an import user type, placeholder references are still used.
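The branch between the two strategies can be sketched as follows. This is a minimal, illustrative sketch in plain Ruby, not the actual service code; the method and attribute names (`reassignment_strategy`, `placeholder_user_type`) are assumptions for the example.

```ruby
# Illustrative model of the strategy choice: Import User types keep using the
# placeholder reference table, everything else can be reassigned directly.
SourceUser = Struct.new(:placeholder_user_type)

def reassignment_strategy(source_user)
  if source_user.placeholder_user_type == :import_user
    # Contributions may belong to many external users, so attribution
    # must go through import_source_user_placeholder_references.
    :placeholder_references
  else
    # 1:1 mapping between placeholder user and external user lets us
    # swap the placeholder user ID for the reassigned user ID in place.
    :direct_reassignment
  end
end

puts reassignment_strategy(SourceUser.new(:import_user)) # placeholder_references
puts reassignment_strategy(SourceUser.new(:placeholder)) # direct_reassignment
```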

See more information about user contribution mapping in the development docs.

Database

The new reassignment process will execute queries such as:

UPDATE
    "notes"
SET
    "author_id" = REASSIGNED_USER_ID
WHERE ("notes"."id") IN (
    SELECT
        "notes"."id"
    FROM
        "notes"
    WHERE
        "notes"."author_id" = PLACEHOLDER_USER_USER_ID
    LIMIT 500)

Here, PLACEHOLDER_USER_USER_ID is the ID of the placeholder user assigned to the contribution, and REASSIGNED_USER_ID is the ID of the new user to whom the contributions will be reassigned.

This method resembles the approach used by Users::MigrateRecordsToGhostUserService, which reassigns user contributions to the ghost user.

The above query will run for all models and attributes listed in the MODEL LIST. An RSpec test has been added to ensure an index exists for each relevant table and attribute.

To avoid database saturation, a pause is incorporated between batch updates, as in the existing reassignment process.
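The batching described above can be sketched as follows. This is a self-contained simulation in plain Ruby (no ActiveRecord): the in-memory `notes` array stands in for the table, and `reassign_in_batches` is a hypothetical name, not the actual service method.

```ruby
# Simulated batched direct reassignment: repeatedly SELECT up to BATCH_SIZE
# matching ids, UPDATE them, and pause between batches.
BATCH_SIZE = 500
PAUSE_SECONDS = 0 # in production a short sleep between batches limits DB load

# Simulated "notes" table: each record has an id and an author_id.
notes = (1..1200).map { |id| { id: id, author_id: 42 } }

def reassign_in_batches(records, attribute, from_id, to_id, batch_size:, pause: 0)
  batches = 0
  loop do
    # SELECT id FROM records WHERE attribute = from_id LIMIT batch_size
    batch_ids = records.select { |r| r[attribute] == from_id }
                       .first(batch_size)
                       .map { |r| r[:id] }
    break if batch_ids.empty?

    # UPDATE records SET attribute = to_id WHERE id IN (batch_ids)
    records.each { |r| r[attribute] = to_id if batch_ids.include?(r[:id]) }

    batches += 1
    sleep(pause) if pause.positive?
  end
  batches
end

batches = reassign_in_batches(notes, :author_id, 42, 99,
                              batch_size: BATCH_SIZE, pause: PAUSE_SECONDS)
puts batches                                 # 3 (500 + 500 + 200 records)
puts notes.count { |n| n[:author_id] == 99 } # 1200
```

The loop terminates naturally once a SELECT returns no rows, which is also how the real batched UPDATE knows the placeholder user has no remaining contributions for that attribute.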

Query plans

Delete placeholder references

Query plan

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/137015

DELETE FROM "import_source_user_placeholder_references"
WHERE "import_source_user_placeholder_references"."source_user_id" = 24
    AND "import_source_user_placeholder_references"."id" >= 1
 ModifyTable on public.import_source_user_placeholder_references  (cost=0.70..8605.07 rows=0 width=0) (actual time=5.603..5.603 rows=0 loops=1)
   Buffers: shared hit=88 read=9 dirtied=8
   WAL: records=52 fpi=8 bytes=68012
   I/O Timings: read=5.164 write=0.000
   ->  Index Scan using idx_import_source_user_placeholder_references_on_user_model_id on public.import_source_user_placeholder_references  (cost=0.70..8605.07 rows=7111 width=6) (actual time=0.794..5.441 rows=52 loops=1)
         Index Cond: ((import_source_user_placeholder_references.source_user_id = 24) AND (import_source_user_placeholder_references.id >= 1))
         Buffers: shared hit=28 read=9
         I/O Timings: read=5.164 write=0.000
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'
Time: 6.857 ms  
  - planning: 1.144 ms  
  - execution: 5.713 ms  
    - I/O read: 5.164 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 88 (~704.00 KiB) from the buffer pool  
  - reads: 9 (~72.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 8 (~64.00 KiB)  
  - writes: 0

Batch update query plans

"approval_project_rules_users"."user_id"

https://console.postgres.ai/gitlab/gitlab-production-main/sessions/44640/commands/136942

UPDATE
    "approval_project_rules_users"
SET
    "user_id" = 1
WHERE ("approval_project_rules_users"."id") IN (
    SELECT
        "approval_project_rules_users"."id"
    FROM
        "approval_project_rules_users"
    WHERE
        "approval_project_rules_users"."user_id" = 11165152
    LIMIT 500)
 ModifyTable on public.approval_project_rules_users  (cost=28.43..92.36 rows=0 width=0) (actual time=3.499..3.502 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=3.393 write=0.000
   ->  Nested Loop  (cost=28.43..92.36 rows=18 width=42) (actual time=3.497..3.499 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=3.393 write=0.000
         ->  HashAggregate  (cost=27.88..28.06 rows=18 width=40) (actual time=3.495..3.497 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=3.393 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.43..27.83 rows=18 width=40) (actual time=3.490..3.492 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=3.393 write=0.000
                     ->  Limit  (cost=0.43..27.65 rows=18 width=8) (actual time=3.489..3.490 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=3.393 write=0.000
                           ->  Index Scan using index_approval_project_rules_users_2 on public.approval_project_rules_users approval_project_rules_users_1  (cost=0.43..27.65 rows=18 width=8) (actual time=3.486..3.487 rows=0 loops=1)
                                 Index Cond: (approval_project_rules_users_1.user_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=3.393 write=0.000
         ->  Index Scan using approval_project_rules_users_pkey on public.approval_project_rules_users  (cost=0.56..3.57 rows=1 width=14) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (approval_project_rules_users.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'
Time: 5.405 ms
  - planning: 1.666 ms
  - execution: 3.739 ms
    - I/O read: 3.393 ms
    - I/O write: 0.000 ms

Shared buffers:
  - hits: 0 from the buffer pool
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O
  - dirtied: 0
  - writes: 0
"approvals"."user_id"

https://console.postgres.ai/gitlab/gitlab-production-main/sessions/44640/commands/136943

UPDATE
    "approvals"
SET
    "user_id" = 1
WHERE ("approvals"."id") IN (
    SELECT
        "approvals"."id"
    FROM
        "approvals"
    WHERE
        "approvals"."user_id" = 11165152
    LIMIT 500)
 ModifyTable on public.approvals  (cost=646.12..2444.30 rows=0 width=0) (actual time=87.752..87.756 rows=0 loops=1)
   Buffers: shared hit=12491 read=57 dirtied=1307
   WAL: records=3720 fpi=1306 bytes=9748037
   I/O Timings: read=50.124 write=0.000
   ->  Nested Loop  (cost=646.12..2444.30 rows=500 width=38) (actual time=8.287..13.371 rows=412 loops=1)
         Buffers: shared hit=2479 dirtied=411
         WAL: records=414 fpi=411 bytes=3376206
         I/O Timings: read=0.000 write=0.000
         ->  HashAggregate  (cost=645.55..650.55 rows=500 width=32) (actual time=8.253..8.640 rows=412 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=417 dirtied=411
               WAL: records=414 fpi=411 bytes=3376206
               I/O Timings: read=0.000 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..644.30 rows=500 width=32) (actual time=0.087..7.968 rows=412 loops=1)
                     Buffers: shared hit=417 dirtied=411
                     WAL: records=414 fpi=411 bytes=3376206
                     I/O Timings: read=0.000 write=0.000
                     ->  Limit  (cost=0.57..639.30 rows=500 width=4) (actual time=0.074..7.614 rows=412 loops=1)
                           Buffers: shared hit=417 dirtied=411
                           WAL: records=414 fpi=411 bytes=3376206
                           I/O Timings: read=0.000 write=0.000
                           ->  Index Scan using index_approvals_on_user_id_and_merge_request_id on public.approvals approvals_1  (cost=0.57..658.46 rows=515 width=4) (actual time=0.072..7.473 rows=412 loops=1)
                                 Index Cond: (approvals_1.user_id = 11165152)
                                 Buffers: shared hit=417 dirtied=411
                                 WAL: records=414 fpi=411 bytes=3376206
                                 I/O Timings: read=0.000 write=0.000
         ->  Index Scan using approvals_pkey on public.approvals  (cost=0.57..3.59 rows=1 width=10) (actual time=0.010..0.010 rows=1 loops=412)
               Index Cond: (approvals.id = "ANY_subquery".id)
               Buffers: shared hit=2060
               I/O Timings: read=0.000 write=0.000
Trigger trigger_038fe84feff7 for constraint : time=1.914 calls=412
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 89.536 ms
  - planning: 1.573 ms
  - execution: 87.963 ms
    - I/O read: 50.124 ms
    - I/O write: 0.000 ms

Shared buffers:
  - hits: 12491 (~97.60 MiB) from the buffer pool
  - reads: 57 (~456.00 KiB) from the OS file cache, including disk I/O
  - dirtied: 1307 (~10.20 MiB)
  - writes: 0
"award_emoji"."user_id"

https://console.postgres.ai/gitlab/gitlab-production-main/sessions/44640/commands/136944

UPDATE
    "award_emoji"
SET
    "user_id" = 1
WHERE ("award_emoji"."id") IN (
    SELECT
        "award_emoji"."id"
    FROM
        "award_emoji"
    WHERE
        "award_emoji"."user_id" = 11165152
    LIMIT 500)
 ModifyTable on public.award_emoji  (cost=210.53..781.82 rows=0 width=0) (actual time=1893.875..1893.879 rows=0 loops=1)
   Buffers: shared hit=8652 read=1727 dirtied=1323 written=9
   WAL: records=2423 fpi=1303 bytes=9592016
   I/O Timings: read=1741.344 write=3.808
   ->  Nested Loop  (cost=210.53..781.82 rows=165 width=38) (actual time=582.814..1160.091 rows=500 loops=1)
         Buffers: shared hit=1458 read=1031 dirtied=3
         WAL: records=3 fpi=3 bytes=23771
         I/O Timings: read=1120.301 write=0.000
         ->  HashAggregate  (cost=210.09..211.74 rows=165 width=32) (actual time=579.674..582.565 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=21 read=468 dirtied=3
               WAL: records=3 fpi=3 bytes=23771
               I/O Timings: read=569.761 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..209.68 rows=165 width=32) (actual time=9.019..577.720 rows=500 loops=1)
                     Buffers: shared hit=21 read=468 dirtied=3
                     WAL: records=3 fpi=3 bytes=23771
                     I/O Timings: read=569.761 write=0.000
                     ->  Limit  (cost=0.56..208.03 rows=165 width=4) (actual time=8.997..576.208 rows=500 loops=1)
                           Buffers: shared hit=21 read=468 dirtied=3
                           WAL: records=3 fpi=3 bytes=23771
                           I/O Timings: read=569.761 write=0.000
                           ->  Index Scan using idx_award_emoji_on_user_emoji_name_awardable_type_awardable_id on public.award_emoji award_emoji_1  (cost=0.56..208.03 rows=165 width=4) (actual time=8.995..575.734 rows=500 loops=1)
                                 Index Cond: (award_emoji_1.user_id = 11165152)
                                 Buffers: shared hit=21 read=468 dirtied=3
                                 WAL: records=3 fpi=3 bytes=23771
                                 I/O Timings: read=569.761 write=0.000
         ->  Index Scan using award_emoji_pkey on public.award_emoji  (cost=0.44..3.46 rows=1 width=10) (actual time=1.146..1.146 rows=1 loops=500)
               Index Cond: (award_emoji.id = "ANY_subquery".id)
               Buffers: shared hit=1437 read=563
               I/O Timings: read=550.540 write=0.000
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB'
Time: 1.895 s
  - planning: 1.116 ms
  - execution: 1.894 s
    - I/O read: 1.741 s
    - I/O write: 3.808 ms

Shared buffers:
  - hits: 8652 (~67.60 MiB) from the buffer pool
  - reads: 1727 (~13.50 MiB) from the OS file cache, including disk I/O
  - dirtied: 1323 (~10.30 MiB)
  - writes: 9 (~72.00 KiB)
"board_assignees"."assignee_id"

https://console.postgres.ai/gitlab/gitlab-production-main/sessions/44640/commands/136945

UPDATE
    "board_assignees"
SET
    "assignee_id" = 1
WHERE ("board_assignees"."id") IN (
    SELECT
        "board_assignees"."id"
    FROM
        "board_assignees"
    WHERE
        "board_assignees"."assignee_id" = 11165152
    LIMIT 500)
 ModifyTable on public.board_assignees  (cost=6.49..16.14 rows=0 width=0) (actual time=11.375..11.379 rows=0 loops=1)
   Buffers: shared hit=61 read=13 dirtied=7
   WAL: records=18 fpi=6 bytes=27784
   I/O Timings: read=10.573 write=0.000
   ->  Nested Loop  (cost=6.49..16.14 rows=3 width=38) (actual time=3.535..3.562 rows=3 loops=1)
         Buffers: shared hit=8 read=4
         I/O Timings: read=3.396 write=0.000
         ->  HashAggregate  (cost=6.20..6.23 rows=3 width=32) (actual time=2.450..2.460 rows=3 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=2.351 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.29..6.20 rows=3 width=32) (actual time=2.435..2.443 rows=3 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=2.351 write=0.000
                     ->  Limit  (cost=0.29..6.17 rows=3 width=4) (actual time=2.415..2.421 rows=3 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=2.351 write=0.000
                           ->  Index Scan using index_board_assignees_on_assignee_id on public.board_assignees board_assignees_1  (cost=0.29..6.17 rows=3 width=4) (actual time=2.413..2.416 rows=3 loops=1)
                                 Index Cond: (board_assignees_1.assignee_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=2.351 write=0.000
         ->  Index Scan using board_assignees_pkey on public.board_assignees  (cost=0.29..3.30 rows=1 width=10) (actual time=0.364..0.364 rows=1 loops=3)
               Index Cond: (board_assignees.id = "ANY_subquery".id)
               Buffers: shared hit=8 read=1
               I/O Timings: read=1.045 write=0.000
Trigger RI_ConstraintTrigger_c_31689 for constraint fk_rails_1c0ff59e82: time=79.084 calls=3
Settings: seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5'
Time: 91.865 ms
  - planning: 1.223 ms
  - execution: 90.642 ms
    - I/O read: 10.573 ms
    - I/O write: 0.000 ms

Shared buffers:
  - hits: 61 (~488.00 KiB) from the buffer pool
  - reads: 13 (~104.00 KiB) from the OS file cache, including disk I/O
  - dirtied: 7 (~56.00 KiB)
  - writes: 0
"ci_pipeline_schedules"."owner_id"

https://postgres.ai/console/gitlab/gitlab-production-ci/sessions/44644/commands/137002

UPDATE
    "ci_pipeline_schedules"
SET
    "owner_id" = 1
WHERE ("ci_pipeline_schedules"."id") IN (
    SELECT
        "ci_pipeline_schedules"."id"
    FROM
        "ci_pipeline_schedules"
    WHERE
        "ci_pipeline_schedules"."owner_id" = 11165152
    LIMIT 500)
ModifyTable on public.ci_pipeline_schedules  (cost=6.93..16.86 rows=0 width=0) (actual time=16.625..16.628 rows=0 loops=1)
   Buffers: shared hit=85 read=33 dirtied=19
   WAL: records=24 fpi=18 bytes=92020
   ->  Nested Loop  (cost=6.93..16.86 rows=3 width=38) (actual time=5.039..5.105 rows=4 loops=1)
         Buffers: shared hit=12 read=11
         ->  HashAggregate  (cost=6.51..6.54 rows=3 width=32) (actual time=4.279..4.286 rows=4 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=7
               ->  Subquery Scan on ANY_subquery  (cost=0.42..6.50 rows=3 width=32) (actual time=2.821..4.270 rows=4 loops=1)
                     Buffers: shared read=7
                     ->  Limit  (cost=0.42..6.47 rows=3 width=4) (actual time=2.810..4.256 rows=4 loops=1)
                           Buffers: shared read=7
                           ->  Index Scan using index_ci_pipeline_schedules_on_owner_id on public.ci_pipeline_schedules ci_pipeline_schedules_1  (cost=0.42..6.47 rows=3 width=4) (actual time=2.808..4.254 rows=4 loops=1)
                                 Index Cond: (ci_pipeline_schedules_1.owner_id = 11165152)
                                 Buffers: shared read=7
         ->  Index Scan using ci_pipeline_schedules_pkey on public.ci_pipeline_schedules  (cost=0.42..3.44 rows=1 width=10) (actual time=0.202..0.202 rows=1 loops=4)
               Index Cond: (ci_pipeline_schedules.id = "ANY_subquery".id)
               Buffers: shared hit=12 read=4
Settings: work_mem = '100MB', effective_cache_size = '338688MB', random_page_cost = '1.5', jit = 'off', seq_page_cost = '4'
Time: 17.623 ms  
  - planning: 0.882 ms  
  - execution: 16.741 ms  
    - I/O read: N/A  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 85 (~680.00 KiB) from the buffer pool  
  - reads: 33 (~264.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 19 (~152.00 KiB)  
  - writes: 0  
"design_management_versions"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136947

UPDATE
    "design_management_versions"
SET
    "author_id" = 1
WHERE ("design_management_versions"."id") IN (
    SELECT
        "design_management_versions"."id"
    FROM
        "design_management_versions"
    WHERE
        "design_management_versions"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.design_management_versions  (cost=11.13..34.87 rows=0 width=0) (actual time=18.339..18.342 rows=0 loops=1)
   Buffers: shared hit=106 read=22 dirtied=10
   WAL: records=24 fpi=9 bytes=58959
   I/O Timings: read=16.738 write=0.000
   ->  Nested Loop  (cost=11.13..34.87 rows=7 width=42) (actual time=4.991..5.514 rows=3 loops=1)
         Buffers: shared hit=10 read=7
         I/O Timings: read=5.322 write=0.000
         ->  HashAggregate  (cost=10.70..10.77 rows=7 width=40) (actual time=4.388..4.395 rows=3 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=5
               I/O Timings: read=4.252 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..10.68 rows=7 width=40) (actual time=3.894..4.378 rows=3 loops=1)
                     Buffers: shared read=5
                     I/O Timings: read=4.252 write=0.000
                     ->  Limit  (cost=0.42..10.61 rows=7 width=8) (actual time=3.876..4.357 rows=3 loops=1)
                           Buffers: shared read=5
                           I/O Timings: read=4.252 write=0.000
                           ->  Index Scan using index_design_management_versions_on_author_id on public.design_management_versions design_management_versions_1  (cost=0.42..10.61 rows=7 width=8) (actual time=3.874..4.349 rows=3 loops=1)
                                 Index Cond: (design_management_versions_1.author_id = 11165152)
                                 Buffers: shared read=5
                                 I/O Timings: read=4.252 write=0.000
         ->  Index Scan using design_management_versions_pkey on public.design_management_versions  (cost=0.42..3.44 rows=1 width=14) (actual time=0.369..0.369 rows=1 loops=3)
               Index Cond: (design_management_versions.id = "ANY_subquery".id)
               Buffers: shared hit=10 read=2
               I/O Timings: read=1.070 write=0.000
Trigger RI_ConstraintTrigger_c_30409 for constraint fk_c1440b4896: time=2.903 calls=3
Trigger trigger_96a76ee9f147 for constraint : time=0.236 calls=3
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 22.666 ms  
  - planning: 1.271 ms  
  - execution: 21.395 ms  
    - I/O read: 16.738 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 106 (~848.00 KiB) from the buffer pool  
  - reads: 22 (~176.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 10 (~80.00 KiB)  
  - writes: 0  
"epics"."assignee_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136948

UPDATE
    "epics"
SET
    "assignee_id" = 1
WHERE ("epics"."id") IN (
    SELECT
        "epics"."id"
    FROM
        "epics"
    WHERE
        "epics"."assignee_id" = 11165152
    LIMIT 500)
ModifyTable on public.epics  (cost=3.88..6.91 rows=0 width=0) (actual time=3.350..3.354 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=3.239 write=0.000
   ->  Nested Loop  (cost=3.88..6.91 rows=1 width=38) (actual time=3.348..3.351 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=3.239 write=0.000
         ->  HashAggregate  (cost=3.45..3.46 rows=1 width=32) (actual time=3.348..3.350 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=3.239 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..3.45 rows=1 width=32) (actual time=3.343..3.345 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=3.239 write=0.000
                     ->  Limit  (cost=0.42..3.44 rows=1 width=4) (actual time=3.342..3.343 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=3.239 write=0.000
                           ->  Index Scan using index_epics_on_assignee_id on public.epics epics_1  (cost=0.42..3.44 rows=1 width=4) (actual time=3.339..3.340 rows=0 loops=1)
                                 Index Cond: (epics_1.assignee_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=3.239 write=0.000
         ->  Index Scan using epics_pkey on public.epics  (cost=0.42..3.44 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (epics.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 6.054 ms  
  - planning: 2.428 ms  
  - execution: 3.626 ms  
    - I/O read: 3.239 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 0 from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"epics"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136949

UPDATE
    "epics"
SET
    "author_id" = 1
WHERE ("epics"."id") IN (
    SELECT
        "epics"."id"
    FROM
        "epics"
    WHERE
        "epics"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.epics  (cost=321.81..1068.04 rows=0 width=0) (actual time=961.369..961.374 rows=0 loops=1)
   Buffers: shared hit=12009 read=1161 dirtied=1017 written=11
   WAL: records=3898 fpi=1004 bytes=7075628
   I/O Timings: read=873.550 write=1.050
   ->  Nested Loop  (cost=321.81..1068.04 rows=218 width=38) (actual time=192.405..207.666 rows=215 loops=1)
         Buffers: shared hit=846 read=226 dirtied=11
         WAL: records=12 fpi=11 bytes=85864
         I/O Timings: read=199.263 write=0.000
         ->  HashAggregate  (cost=321.39..323.57 rows=218 width=32) (actual time=190.033..190.734 rows=215 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=212 dirtied=11
               WAL: records=12 fpi=11 bytes=85864
               I/O Timings: read=185.473 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..320.84 rows=218 width=32) (actual time=4.976..189.332 rows=215 loops=1)
                     Buffers: shared read=212 dirtied=11
                     WAL: records=12 fpi=11 bytes=85864
                     I/O Timings: read=185.473 write=0.000
                     ->  Limit  (cost=0.42..318.66 rows=218 width=4) (actual time=4.952..188.783 rows=215 loops=1)
                           Buffers: shared read=212 dirtied=11
                           WAL: records=12 fpi=11 bytes=85864
                           I/O Timings: read=185.473 write=0.000
                           ->  Index Scan using index_epics_on_author_id on public.epics epics_1  (cost=0.42..318.66 rows=218 width=4) (actual time=4.949..188.616 rows=215 loops=1)
                                 Index Cond: (epics_1.author_id = 11165152)
                                 Buffers: shared read=212 dirtied=11
                                 WAL: records=12 fpi=11 bytes=85864
                                 I/O Timings: read=185.473 write=0.000
         ->  Index Scan using epics_pkey on public.epics  (cost=0.42..3.41 rows=1 width=10) (actual time=0.074..0.074 rows=1 loops=215)
               Index Cond: (epics.id = "ANY_subquery".id)
               Buffers: shared hit=846 read=14
               I/O Timings: read=13.790 write=0.000
Trigger RI_ConstraintTrigger_c_28837 for constraint fk_3654b61b03: time=4.199 calls=215
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 968.602 ms  
  - planning: 2.788 ms  
  - execution: 965.814 ms  
    - I/O read: 873.550 ms  
    - I/O write: 1.050 ms  
  
Shared buffers:  
  - hits: 12009 (~93.80 MiB) from the buffer pool  
  - reads: 1161 (~9.10 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1017 (~7.90 MiB)  
  - writes: 11 (~88.00 KiB)
"epics"."closed_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136950

UPDATE
    "epics"
SET
    "closed_by_id" = 1
WHERE ("epics"."id") IN (
    SELECT
        "epics"."id"
    FROM
        "epics"
    WHERE
        "epics"."closed_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.epics  (cost=153.47..512.11 rows=0 width=0) (actual time=324.305..324.310 rows=0 loops=1)
   Buffers: shared hit=6365 read=269 dirtied=227 written=5
   WAL: records=1860 fpi=222 bytes=1645881
   I/O Timings: read=281.516 write=0.324
   ->  Nested Loop  (cost=153.47..512.11 rows=104 width=38) (actual time=13.541..23.163 rows=100 loops=1)
         Buffers: shared hit=567 read=11
         WAL: records=41 fpi=0 bytes=2389
         I/O Timings: read=16.160 write=0.000
         ->  HashAggregate  (cost=153.05..154.09 rows=104 width=32) (actual time=13.509..13.834 rows=100 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=159 read=4
               WAL: records=41 fpi=0 bytes=2389
               I/O Timings: read=8.341 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..152.79 rows=104 width=32) (actual time=1.143..13.418 rows=100 loops=1)
                     Buffers: shared hit=159 read=4
                     WAL: records=41 fpi=0 bytes=2389
                     I/O Timings: read=8.341 write=0.000
                     ->  Limit  (cost=0.42..151.75 rows=104 width=4) (actual time=1.118..13.339 rows=100 loops=1)
                           Buffers: shared hit=159 read=4
                           WAL: records=41 fpi=0 bytes=2389
                           I/O Timings: read=8.341 write=0.000
                           ->  Index Scan using index_epics_on_closed_by_id on public.epics epics_1  (cost=0.42..151.75 rows=104 width=4) (actual time=1.116..13.315 rows=100 loops=1)
                                 Index Cond: (epics_1.closed_by_id = 11165152)
                                 Buffers: shared hit=159 read=4
                                 WAL: records=41 fpi=0 bytes=2389
                                 I/O Timings: read=8.341 write=0.000
         ->  Index Scan using epics_pkey on public.epics  (cost=0.42..3.44 rows=1 width=10) (actual time=0.089..0.089 rows=1 loops=100)
               Index Cond: (epics.id = "ANY_subquery".id)
               Buffers: shared hit=396 read=7
               I/O Timings: read=7.820 write=0.000
Trigger RI_ConstraintTrigger_c_30067 for constraint fk_aa5798e761: time=4.754 calls=100
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 331.822 ms  
  - planning: 2.522 ms  
  - execution: 329.300 ms  
    - I/O read: 281.516 ms  
    - I/O write: 0.324 ms  
  
Shared buffers:  
  - hits: 6365 (~49.70 MiB) from the buffer pool  
  - reads: 269 (~2.10 MiB) from the OS file cache, including disk I/O  
  - dirtied: 227 (~1.80 MiB)  
  - writes: 5 (~40.00 KiB)  
"epics"."last_edited_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136951

UPDATE
    "epics"
SET
    "last_edited_by_id" = 1
WHERE ("epics"."id") IN (
    SELECT
        "epics"."id"
    FROM
        "epics"
    WHERE
        "epics"."last_edited_by_id" = 11165152
    LIMIT 500)
 ModifyTable on public.epics  (cost=248.54..828.59 rows=0 width=0) (actual time=257.018..257.023 rows=0 loops=1)
   Buffers: shared hit=10782 read=146 dirtied=152 written=19
   WAL: records=3162 fpi=133 bytes=1198817
   I/O Timings: read=143.115 write=2.375
   ->  Nested Loop  (cost=248.54..828.59 rows=169 width=38) (actual time=42.139..51.140 rows=172 loops=1)
         Buffers: shared hit=1048 read=9
         WAL: records=50 fpi=0 bytes=2872
         I/O Timings: read=19.444 write=0.000
         ->  HashAggregate  (cost=248.11..249.80 rows=169 width=32) (actual time=42.086..42.710 rows=172 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=315 read=4
               WAL: records=50 fpi=0 bytes=2872
               I/O Timings: read=15.329 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..247.69 rows=169 width=32) (actual time=0.066..38.234 rows=172 loops=1)
                     Buffers: shared hit=315 read=4
                     WAL: records=50 fpi=0 bytes=2872
                     I/O Timings: read=15.329 write=0.000
                     ->  Limit  (cost=0.42..246.00 rows=169 width=4) (actual time=0.054..36.513 rows=172 loops=1)
                           Buffers: shared hit=315 read=4
                           WAL: records=50 fpi=0 bytes=2872
                           I/O Timings: read=15.329 write=0.000
                           ->  Index Scan using index_epics_on_last_edited_by_id on public.epics epics_1  (cost=0.42..246.00 rows=169 width=4) (actual time=0.052..36.296 rows=172 loops=1)
                                 Index Cond: (epics_1.last_edited_by_id = 11165152)
                                 Buffers: shared hit=315 read=4
                                 WAL: records=50 fpi=0 bytes=2872
                                 I/O Timings: read=15.329 write=0.000
         ->  Index Scan using epics_pkey on public.epics  (cost=0.42..3.42 rows=1 width=10) (actual time=0.046..0.046 rows=1 loops=172)
               Index Cond: (epics.id = "ANY_subquery".id)
               Buffers: shared hit=705 read=5
               I/O Timings: read=4.115 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 260.501 ms  
  - planning: 2.786 ms  
  - execution: 257.715 ms  
    - I/O read: 143.115 ms  
    - I/O write: 2.375 ms  
  
Shared buffers:  
  - hits: 10782 (~84.20 MiB) from the buffer pool  
  - reads: 146 (~1.10 MiB) from the OS file cache, including disk I/O  
  - dirtied: 152 (~1.20 MiB)  
  - writes: 19 (~152.00 KiB) 
"events"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136952

UPDATE
    "events"
SET
    "author_id" = 1
WHERE ("events"."id") IN (
    SELECT
        "events"."id"
    FROM
        "events"
    WHERE
        "events"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.events  (cost=21.07..1824.24 rows=0 width=0) (actual time=6701.666..6701.671 rows=0 loops=1)
   Buffers: shared hit=33589 read=5985 dirtied=3642 written=21
   WAL: records=7665 fpi=3522 bytes=26628466
   I/O Timings: read=6411.463 write=9.048
   ->  Nested Loop  (cost=21.07..1824.24 rows=500 width=42) (actual time=92.514..1682.361 rows=500 loops=1)
         Buffers: shared hit=1398 read=1282 dirtied=3
         WAL: records=3 fpi=3 bytes=24259
         I/O Timings: read=1650.418 write=0.000
         ->  HashAggregate  (cost=20.49..25.49 rows=500 width=40) (actual time=83.833..86.911 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=71 read=109 dirtied=3
               WAL: records=3 fpi=3 bytes=24259
               I/O Timings: read=81.508 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.70..19.24 rows=500 width=40) (actual time=12.329..83.275 rows=500 loops=1)
                     Buffers: shared hit=71 read=109 dirtied=3
                     WAL: records=3 fpi=3 bytes=24259
                     I/O Timings: read=81.508 write=0.000
                     ->  Limit  (cost=0.70..14.24 rows=500 width=8) (actual time=12.301..82.935 rows=500 loops=1)
                           Buffers: shared hit=71 read=109 dirtied=3
                           WAL: records=3 fpi=3 bytes=24259
                           I/O Timings: read=81.508 write=0.000
                           ->  Index Only Scan using index_events_on_author_id_and_id on public.events events_1  (cost=0.70..173.59 rows=6385 width=8) (actual time=12.298..82.793 rows=500 loops=1)
                                 Index Cond: (events_1.author_id = 11165152)
                                 Heap Fetches: 8
                                 Buffers: shared hit=71 read=109 dirtied=3
                                 WAL: records=3 fpi=3 bytes=24259
                                 I/O Timings: read=81.508 write=0.000
         ->  Index Scan using events_pkey on public.events  (cost=0.58..3.60 rows=1 width=14) (actual time=3.182..3.182 rows=1 loops=500)
               Index Cond: (events.id = "ANY_subquery".id)
               Buffers: shared hit=1327 read=1173
               I/O Timings: read=1568.910 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 6.715 s  
  - planning: 12.906 ms  
  - execution: 6.702 s  
    - I/O read: 6.411 s  
    - I/O write: 9.048 ms  
  
Shared buffers:  
  - hits: 33589 (~262.40 MiB) from the buffer pool  
  - reads: 5985 (~46.80 MiB) from the OS file cache, including disk I/O  
  - dirtied: 3642 (~28.50 MiB)  
  - writes: 21 (~168.00 KiB)  
"issue_assignees"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136953

UPDATE
    "issue_assignees"
SET
    "user_id" = 1
WHERE ("issue_assignees"."issue_id", "issue_assignees"."user_id") IN (
    SELECT
        "issue_assignees"."issue_id",
        "issue_assignees"."user_id"
    FROM
        "issue_assignees"
    WHERE
        "issue_assignees"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.issue_assignees  (cost=12.26..615.66 rows=0 width=0) (actual time=1282.249..1282.253 rows=0 loops=1)
   Buffers: shared hit=7532 read=1006 dirtied=795 written=2
   WAL: records=1987 fpi=784 bytes=5856635
   I/O Timings: read=1180.394 write=4.378
   ->  Nested Loop  (cost=12.26..615.66 rows=2 width=42) (actual time=52.383..478.579 rows=354 loops=1)
         Buffers: shared hit=1598 read=362 dirtied=3
         WAL: records=3 fpi=3 bytes=24551
         I/O Timings: read=461.623 write=0.000
         ->  HashAggregate  (cost=11.70..13.38 rows=168 width=40) (actual time=50.685..51.883 rows=354 loops=1)
               Group Key: "ANY_subquery".issue_id, "ANY_subquery".user_id
               Buffers: shared hit=153 read=37 dirtied=3
               WAL: records=3 fpi=3 bytes=24551
               I/O Timings: read=49.570 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..10.86 rows=168 width=40) (actual time=3.152..50.425 rows=354 loops=1)
                     Buffers: shared hit=153 read=37 dirtied=3
                     WAL: records=3 fpi=3 bytes=24551
                     I/O Timings: read=49.570 write=0.000
                     ->  Limit  (cost=0.56..9.18 rows=168 width=8) (actual time=3.123..50.253 rows=354 loops=1)
                           Buffers: shared hit=153 read=37 dirtied=3
                           WAL: records=3 fpi=3 bytes=24551
                           I/O Timings: read=49.570 write=0.000
                           ->  Index Only Scan using index_issue_assignees_on_user_id_and_issue_id on public.issue_assignees issue_assignees_1  (cost=0.56..9.18 rows=168 width=8) (actual time=3.121..50.192 rows=354 loops=1)
                                 Index Cond: (issue_assignees_1.user_id = 11165152)
                                 Heap Fetches: 26
                                 Buffers: shared hit=153 read=37 dirtied=3
                                 WAL: records=3 fpi=3 bytes=24551
                                 I/O Timings: read=49.570 write=0.000
         ->  Index Scan using index_issue_assignees_on_user_id_and_issue_id on public.issue_assignees  (cost=0.56..3.58 rows=1 width=14) (actual time=1.198..1.198 rows=1 loops=354)
               Index Cond: ((issue_assignees.user_id = "ANY_subquery".user_id) AND (issue_assignees.issue_id = "ANY_subquery".issue_id))
               Buffers: shared hit=1445 read=325
               I/O Timings: read=412.053 write=0.000
Trigger RI_ConstraintTrigger_c_29225 for constraint fk_5e0c8d9154: time=9.015 calls=354
Trigger trigger_97e9245e767d for constraint : time=15.320 calls=354
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 1.294 s  
  - planning: 2.282 ms  
  - execution: 1.292 s  
    - I/O read: 1.180 s  
    - I/O write: 4.378 ms  
  
Shared buffers:  
  - hits: 7532 (~58.80 MiB) from the buffer pool  
  - reads: 1006 (~7.90 MiB) from the OS file cache, including disk I/O  
  - dirtied: 795 (~6.20 MiB)  
  - writes: 2 (~16.00 KiB)  
"issues"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136956

UPDATE
    "issues"
SET
    "author_id" = 1
WHERE ("issues"."id") IN (
    SELECT
        "issues"."id"
    FROM
        "issues"
    WHERE
        "issues"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.issues  (cost=546.97..1844.19 rows=0 width=0) (actual time=654.014..654.017 rows=0 loops=1)
   Buffers: shared hit=58936 read=773 dirtied=881 written=128
   WAL: records=13869 fpi=749 bytes=7078487
   I/O Timings: read=414.042 write=12.130
   ->  Nested Loop  (cost=546.97..1844.19 rows=361 width=38) (actual time=1.375..9.250 rows=500 loops=1)
         Buffers: shared hit=3825
         WAL: records=86 fpi=0 bytes=4912
         I/O Timings: read=0.000 write=0.000
         ->  HashAggregate  (cost=546.40..550.01 rows=361 width=32) (actual time=1.336..2.116 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=484
               WAL: records=78 fpi=0 bytes=4454
               I/O Timings: read=0.000 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..545.50 rows=361 width=32) (actual time=0.047..1.207 rows=500 loops=1)
                     Buffers: shared hit=484
                     WAL: records=78 fpi=0 bytes=4454
                     I/O Timings: read=0.000 write=0.000
                     ->  Limit  (cost=0.57..541.89 rows=361 width=4) (actual time=0.039..1.126 rows=500 loops=1)
                           Buffers: shared hit=484
                           WAL: records=78 fpi=0 bytes=4454
                           I/O Timings: read=0.000 write=0.000
                           ->  Index Scan using index_issues_on_author_id on public.issues issues_1  (cost=0.57..541.89 rows=361 width=4) (actual time=0.038..1.093 rows=500 loops=1)
                                 Index Cond: (issues_1.author_id = 11165152)
                                 Buffers: shared hit=484
                                 WAL: records=78 fpi=0 bytes=4454
                                 I/O Timings: read=0.000 write=0.000
         ->  Index Scan using index_issues_on_id_and_weight on public.issues  (cost=0.57..3.58 rows=1 width=10) (actual time=0.011..0.011 rows=1 loops=500)
               Index Cond: (issues.id = "ANY_subquery".id)
               Buffers: shared hit=2900
               WAL: records=8 fpi=0 bytes=458
               I/O Timings: read=0.000 write=0.000
Trigger RI_ConstraintTrigger_c_28293 for constraint fk_05f1e72feb: time=5.657 calls=500
Trigger trigger_22262f5f16d8 for constraint : time=5.806 calls=500
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 663.445 ms  
  - planning: 3.499 ms  
  - execution: 659.946 ms  
    - I/O read: 414.042 ms  
    - I/O write: 12.130 ms  
  
Shared buffers:  
  - hits: 58936 (~460.40 MiB) from the buffer pool  
  - reads: 773 (~6.00 MiB) from the OS file cache, including disk I/O  
  - dirtied: 881 (~6.90 MiB)  
  - writes: 128 (~1.00 MiB)  
"issues"."closed_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136957

UPDATE
    "issues"
SET
    "closed_by_id" = 1
WHERE ("issues"."id") IN (
    SELECT
        "issues"."id"
    FROM
        "issues"
    WHERE
        "issues"."closed_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.issues  (cost=591.03..2017.68 rows=0 width=0) (actual time=2968.221..2968.226 rows=0 loops=1)
   Buffers: shared hit=34570 read=2526 dirtied=2084 written=97
   WAL: records=8448 fpi=1938 bytes=12096426
   I/O Timings: read=2707.012 write=12.070
   ->  Nested Loop  (cost=591.03..2017.68 rows=397 width=38) (actual time=211.861..388.541 rows=307 loops=1)
         Buffers: shared hit=2385 read=277
         WAL: records=33 fpi=0 bytes=1885
         I/O Timings: read=373.603 write=0.000
         ->  HashAggregate  (cost=590.46..594.43 rows=397 width=32) (actual time=211.801..212.802 rows=307 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=531 read=105
               WAL: records=33 fpi=0 bytes=1885
               I/O Timings: read=206.511 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..589.47 rows=397 width=32) (actual time=0.062..211.258 rows=307 loops=1)
                     Buffers: shared hit=531 read=105
                     WAL: records=33 fpi=0 bytes=1885
                     I/O Timings: read=206.511 write=0.000
                     ->  Limit  (cost=0.57..585.50 rows=397 width=4) (actual time=0.053..210.876 rows=307 loops=1)
                           Buffers: shared hit=531 read=105
                           WAL: records=33 fpi=0 bytes=1885
                           I/O Timings: read=206.511 write=0.000
                           ->  Index Scan using index_issues_on_closed_by_id on public.issues issues_1  (cost=0.57..585.50 rows=397 width=4) (actual time=0.052..210.748 rows=307 loops=1)
                                 Index Cond: (issues_1.closed_by_id = 11165152)
                                 Buffers: shared hit=531 read=105
                                 WAL: records=33 fpi=0 bytes=1885
                                 I/O Timings: read=206.511 write=0.000
         ->  Index Scan using index_issues_on_id_and_weight on public.issues  (cost=0.57..3.58 rows=1 width=10) (actual time=0.566..0.566 rows=1 loops=307)
               Index Cond: (issues.id = "ANY_subquery".id)
               Buffers: shared hit=1682 read=172
               I/O Timings: read=167.092 write=0.000
Trigger RI_ConstraintTrigger_c_30444 for constraint fk_c63cbf6c25: time=5.027 calls=307
Trigger trigger_22262f5f16d8 for constraint : time=8.582 calls=307
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 2.978 s  
  - planning: 4.779 ms  
  - execution: 2.974 s  
    - I/O read: 2.707 s  
    - I/O write: 12.070 ms  
  
Shared buffers:  
  - hits: 34570 (~270.10 MiB) from the buffer pool  
  - reads: 2526 (~19.70 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2084 (~16.30 MiB)  
  - writes: 97 (~776.00 KiB) 
"issues"."updated_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136960

UPDATE
    "issues"
SET
    "updated_by_id" = 1
WHERE ("issues"."id") IN (
    SELECT
        "issues"."id"
    FROM
        "issues"
    WHERE
        "issues"."updated_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.issues  (cost=511.86..1744.38 rows=0 width=0) (actual time=14479.946..14479.976 rows=0 loops=1)
   Buffers: shared hit=256909 read=53402 dirtied=1368 written=193
   WAL: records=9634 fpi=640 bytes=6556545
   I/O Timings: read=13899.997 write=31.950
   ->  Nested Loop  (cost=511.86..1744.38 rows=343 width=38) (actual time=2.515..9.887 rows=330 loops=1)
         Buffers: shared hit=3289
         WAL: records=63 fpi=0 bytes=3605
         I/O Timings: read=0.000 write=0.000
         ->  HashAggregate  (cost=511.29..514.72 rows=343 width=32) (actual time=2.474..3.280 rows=330 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=537
               WAL: records=59 fpi=0 bytes=3375
               I/O Timings: read=0.000 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..510.43 rows=343 width=32) (actual time=0.054..2.381 rows=330 loops=1)
                     Buffers: shared hit=537
                     WAL: records=59 fpi=0 bytes=3375
                     I/O Timings: read=0.000 write=0.000
                     ->  Limit  (cost=0.56..507.00 rows=343 width=4) (actual time=0.045..2.308 rows=330 loops=1)
                           Buffers: shared hit=537
                           WAL: records=59 fpi=0 bytes=3375
                           I/O Timings: read=0.000 write=0.000
                           ->  Index Scan using index_issues_on_updated_by_id on public.issues issues_1  (cost=0.56..507.00 rows=343 width=4) (actual time=0.044..2.279 rows=330 loops=1)
                                 Index Cond: (issues_1.updated_by_id = 11165152)
                                 Buffers: shared hit=537
                                 WAL: records=59 fpi=0 bytes=3375
                                 I/O Timings: read=0.000 write=0.000
         ->  Index Scan using index_issues_on_id_and_weight on public.issues  (cost=0.57..3.58 rows=1 width=10) (actual time=0.017..0.017 rows=1 loops=330)
               Index Cond: (issues.id = "ANY_subquery".id)
               Buffers: shared hit=2465
               WAL: records=4 fpi=0 bytes=230
               I/O Timings: read=0.000 write=0.000
Trigger RI_ConstraintTrigger_c_31155 for constraint fk_ffed080f01: time=6.666 calls=330
Trigger trigger_22262f5f16d8 for constraint : time=4.928 calls=330
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB'
Time: 14.491 s  
  - planning: 3.829 ms  
  - execution: 14.487 s  
    - I/O read: 13.900 s  
    - I/O write: 31.950 ms  
  
Shared buffers:  
  - hits: 256909 (~2.00 GiB) from the buffer pool  
  - reads: 53402 (~417.20 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1368 (~10.70 MiB)  
  - writes: 193 (~1.50 MiB)  
"lists"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136961

UPDATE
    "lists"
SET
    "user_id" = 1
WHERE ("lists"."id") IN (
    SELECT
        "lists"."id"
    FROM
        "lists"
    WHERE
        "lists"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.lists  (cost=7.21..20.63 rows=0 width=0) (actual time=3.329..3.333 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=3.221 write=0.000
   ->  Nested Loop  (cost=7.21..20.63 rows=4 width=38) (actual time=3.326..3.330 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=3.221 write=0.000
         ->  HashAggregate  (cost=6.78..6.82 rows=4 width=32) (actual time=3.325..3.327 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=3.221 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.43..6.77 rows=4 width=32) (actual time=3.320..3.322 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=3.221 write=0.000
                     ->  Limit  (cost=0.43..6.73 rows=4 width=4) (actual time=3.319..3.320 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=3.221 write=0.000
                           ->  Index Scan using index_lists_on_user_id on public.lists lists_1  (cost=0.43..6.73 rows=4 width=4) (actual time=3.317..3.317 rows=0 loops=1)
                                 Index Cond: (lists_1.user_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=3.221 write=0.000
         ->  Index Scan using lists_pkey on public.lists  (cost=0.43..3.45 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (lists.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 5.448 ms  
  - planning: 1.944 ms  
  - execution: 3.504 ms  
    - I/O read: 3.221 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 0 from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"merge_request_assignees"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136962

UPDATE
    "merge_request_assignees"
SET
    "user_id" = 1
WHERE ("merge_request_assignees"."id") IN (
    SELECT
        "merge_request_assignees"."id"
    FROM
        "merge_request_assignees"
    WHERE
        "merge_request_assignees"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_request_assignees  (cost=495.95..2146.63 rows=0 width=0) (actual time=2335.941..2335.945 rows=0 loops=1)
   Buffers: shared hit=8909 read=1736 dirtied=1173
   WAL: records=2429 fpi=1141 bytes=8364282
   I/O Timings: read=2230.847 write=0.000
   ->  Nested Loop  (cost=495.95..2146.63 rows=459 width=38) (actual time=593.856..1316.659 rows=355 loops=1)
         Buffers: shared hit=1168 read=974 dirtied=6
         WAL: records=11 fpi=6 bytes=49441
         I/O Timings: read=1284.640 write=0.000
         ->  HashAggregate  (cost=495.38..499.97 rows=459 width=32) (actual time=591.346..593.497 rows=355 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=367 dirtied=6
               WAL: records=11 fpi=6 bytes=49441
               I/O Timings: read=578.076 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..494.23 rows=459 width=32) (actual time=9.899..589.737 rows=355 loops=1)
                     Buffers: shared read=367 dirtied=6
                     WAL: records=11 fpi=6 bytes=49441
                     I/O Timings: read=578.076 write=0.000
                     ->  Limit  (cost=0.57..489.64 rows=459 width=4) (actual time=9.855..588.454 rows=355 loops=1)
                           Buffers: shared read=367 dirtied=6
                           WAL: records=11 fpi=6 bytes=49441
                           I/O Timings: read=578.076 write=0.000
                           ->  Index Scan using index_merge_request_assignees_on_user_id on public.merge_request_assignees merge_request_assignees_1  (cost=0.57..489.64 rows=459 width=4) (actual time=9.853..588.079 rows=355 loops=1)
                                 Index Cond: (merge_request_assignees_1.user_id = 11165152)
                                 Buffers: shared read=367 dirtied=6
                                 WAL: records=11 fpi=6 bytes=49441
                                 I/O Timings: read=578.076 write=0.000
         ->  Index Scan using merge_request_assignees_pkey on public.merge_request_assignees  (cost=0.57..3.59 rows=1 width=10) (actual time=2.030..2.030 rows=1 loops=355)
               Index Cond: (merge_request_assignees.id = "ANY_subquery".id)
               Buffers: shared hit=1168 read=607
               I/O Timings: read=706.564 write=0.000
Trigger RI_ConstraintTrigger_c_32446 for constraint fk_rails_579d375628: time=8.071 calls=355
Trigger trigger_44558add1625 for constraint : time=8.845 calls=355
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB'
Time: 2.346 s  
  - planning: 1.591 ms  
  - execution: 2.344 s  
    - I/O read: 2.231 s  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 8909 (~69.60 MiB) from the buffer pool  
  - reads: 1736 (~13.60 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1173 (~9.20 MiB)  
  - writes: 0  
"merge_request_metrics"."latest_closed_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136963

UPDATE
    "merge_request_metrics"
SET
    "latest_closed_by_id" = 1
WHERE ("merge_request_metrics"."id") IN (
    SELECT
        "merge_request_metrics"."id"
    FROM
        "merge_request_metrics"
    WHERE
        "merge_request_metrics"."latest_closed_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_request_metrics  (cost=291.41..1280.84 rows=0 width=0) (actual time=971.316..971.321 rows=0 loops=1)
   Buffers: shared hit=2130 read=699 dirtied=409
   WAL: records=707 fpi=380 bytes=2492455
   I/O Timings: read=946.557 write=0.000
   ->  Nested Loop  (cost=291.41..1280.84 rows=275 width=42) (actual time=122.722..260.491 rows=59 loops=1)
         Buffers: shared hit=184 read=172 dirtied=11
         WAL: records=12 fpi=11 bytes=88252
         I/O Timings: read=254.345 write=0.000
         ->  HashAggregate  (cost=290.84..293.59 rows=275 width=40) (actual time=117.306..117.581 rows=59 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=61 dirtied=11
               WAL: records=12 fpi=11 bytes=88252
               I/O Timings: read=113.666 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..290.15 rows=275 width=40) (actual time=9.372..116.977 rows=59 loops=1)
                     Buffers: shared read=61 dirtied=11
                     WAL: records=12 fpi=11 bytes=88252
                     I/O Timings: read=113.666 write=0.000
                     ->  Limit  (cost=0.57..287.40 rows=275 width=8) (actual time=9.345..116.717 rows=59 loops=1)
                           Buffers: shared read=61 dirtied=11
                           WAL: records=12 fpi=11 bytes=88252
                           I/O Timings: read=113.666 write=0.000
                           ->  Index Scan using index_merge_request_metrics_on_latest_closed_by_id on public.merge_request_metrics merge_request_metrics_1  (cost=0.57..287.40 rows=275 width=8) (actual time=9.342..116.631 rows=59 loops=1)
                                 Index Cond: (merge_request_metrics_1.latest_closed_by_id = 11165152)
                                 Buffers: shared read=61 dirtied=11
                                 WAL: records=12 fpi=11 bytes=88252
                                 I/O Timings: read=113.666 write=0.000
         ->  Index Scan using merge_request_metrics_pkey on public.merge_request_metrics  (cost=0.57..3.59 rows=1 width=14) (actual time=2.415..2.415 rows=1 loops=59)
               Index Cond: (merge_request_metrics.id = "ANY_subquery".id)
               Buffers: shared hit=184 read=111
               I/O Timings: read=140.679 write=0.000
Trigger RI_ConstraintTrigger_c_30107 for constraint fk_ae440388cc: time=3.999 calls=59
Trigger nullify_merge_request_metrics_build_data_on_update for constraint : time=1.535 calls=59
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 977.957 ms  
  - planning: 2.457 ms  
  - execution: 975.500 ms  
    - I/O read: 946.557 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 2130 (~16.60 MiB) from the buffer pool  
  - reads: 699 (~5.50 MiB) from the OS file cache, including disk I/O  
  - dirtied: 409 (~3.20 MiB)  
  - writes: 0  
"merge_request_metrics"."merged_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136964

UPDATE
    "merge_request_metrics"
SET
    "merged_by_id" = 1
WHERE ("merge_request_metrics"."id") IN (
    SELECT
        "merge_request_metrics"."id"
    FROM
        "merge_request_metrics"
    WHERE
        "merge_request_metrics"."merged_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_request_metrics  (cost=654.04..2453.47 rows=0 width=0) (actual time=3548.935..3548.939 rows=0 loops=1)
   Buffers: shared hit=8745 read=2737 dirtied=1727 written=2
   WAL: records=2749 fpi=1669 bytes=11055935
   I/O Timings: read=3423.697 write=1.219
   ->  Nested Loop  (cost=654.04..2453.47 rows=500 width=42) (actual time=354.571..848.330 rows=209 loops=1)
         Buffers: shared hit=662 read=597 dirtied=23
         WAL: records=27 fpi=23 bytes=185825
         I/O Timings: read=828.812 write=0.000
         ->  HashAggregate  (cost=653.47..658.47 rows=500 width=40) (actual time=351.796..352.747 rows=209 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=1 read=213 dirtied=23
               WAL: records=27 fpi=23 bytes=185825
               I/O Timings: read=343.806 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..652.22 rows=500 width=40) (actual time=7.574..350.919 rows=209 loops=1)
                     Buffers: shared hit=1 read=213 dirtied=23
                     WAL: records=27 fpi=23 bytes=185825
                     I/O Timings: read=343.806 write=0.000
                     ->  Limit  (cost=0.57..647.22 rows=500 width=8) (actual time=7.549..350.205 rows=209 loops=1)
                           Buffers: shared hit=1 read=213 dirtied=23
                           WAL: records=27 fpi=23 bytes=185825
                           I/O Timings: read=343.806 write=0.000
                           ->  Index Scan using idx_merge_request_metrics_on_merged_by_project_and_mr on public.merge_request_metrics merge_request_metrics_1  (cost=0.57..1054.61 rows=815 width=8) (actual time=7.547..349.996 rows=209 loops=1)
                                 Index Cond: (merge_request_metrics_1.merged_by_id = 11165152)
                                 Buffers: shared hit=1 read=213 dirtied=23
                                 WAL: records=27 fpi=23 bytes=185825
                                 I/O Timings: read=343.806 write=0.000
         ->  Index Scan using merge_request_metrics_pkey on public.merge_request_metrics  (cost=0.57..3.59 rows=1 width=14) (actual time=2.363..2.363 rows=1 loops=209)
               Index Cond: (merge_request_metrics.id = "ANY_subquery".id)
               Buffers: shared hit=661 read=384
               I/O Timings: read=485.006 write=0.000
Trigger RI_ConstraintTrigger_c_29567 for constraint fk_7f28d925f3: time=4.786 calls=209
Trigger nullify_merge_request_metrics_build_data_on_update for constraint : time=5.409 calls=209
Settings: work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4'
Time: 3.557 s  
  - planning: 2.550 ms  
  - execution: 3.554 s  
    - I/O read: 3.424 s  
    - I/O write: 1.219 ms  
  
Shared buffers:  
  - hits: 8745 (~68.30 MiB) from the buffer pool  
  - reads: 2737 (~21.40 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1727 (~13.50 MiB)  
  - writes: 2 (~16.00 KiB)  
"merge_request_reviewers"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136965

UPDATE
    "merge_request_reviewers"
SET
    "user_id" = 1
WHERE ("merge_request_reviewers"."id") IN (
    SELECT
        "merge_request_reviewers"."id"
    FROM
        "merge_request_reviewers"
    WHERE
        "merge_request_reviewers"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_request_reviewers  (cost=541.67..2339.85 rows=0 width=0) (actual time=2334.016..2334.020 rows=0 loops=1)
   Buffers: shared hit=9275 read=1937 dirtied=1284 written=1
   WAL: records=2522 fpi=1244 bytes=9076222
   I/O Timings: read=2247.768 write=1.636
   ->  Nested Loop  (cost=541.67..2339.85 rows=500 width=46) (actual time=589.540..1296.367 rows=372 loops=1)
         Buffers: shared hit=1194 read=1051
         I/O Timings: read=1274.137 write=0.000
         ->  HashAggregate  (cost=541.10..546.10 rows=500 width=40) (actual time=584.960..586.266 rows=372 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=3 read=382
               I/O Timings: read=577.430 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..539.85 rows=500 width=40) (actual time=8.086..583.634 rows=372 loops=1)
                     Buffers: shared hit=3 read=382
                     I/O Timings: read=577.430 write=0.000
                     ->  Limit  (cost=0.57..534.85 rows=500 width=8) (actual time=8.049..582.582 rows=372 loops=1)
                           Buffers: shared hit=3 read=382
                           I/O Timings: read=577.430 write=0.000
                           ->  Index Scan using index_merge_request_reviewers_on_user_id on public.merge_request_reviewers merge_request_reviewers_1  (cost=0.57..663.08 rows=620 width=8) (actual time=8.046..582.245 rows=372 loops=1)
                                 Index Cond: (merge_request_reviewers_1.user_id = 11165152)
                                 Buffers: shared hit=3 read=382
                                 I/O Timings: read=577.430 write=0.000
         ->  Index Scan using merge_request_reviewers_pkey on public.merge_request_reviewers  (cost=0.57..3.59 rows=1 width=14) (actual time=1.903..1.903 rows=1 loops=372)
               Index Cond: (merge_request_reviewers.id = "ANY_subquery".id)
               Buffers: shared hit=1191 read=669
               I/O Timings: read=696.706 write=0.000
Trigger RI_ConstraintTrigger_c_32040 for constraint fk_rails_3704a66140: time=6.911 calls=372
Trigger trigger_8d17725116fe for constraint : time=7.650 calls=372
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 2.342 s  
  - planning: 1.274 ms  
  - execution: 2.341 s  
    - I/O read: 2.248 s  
    - I/O write: 1.636 ms  
  
Shared buffers:  
  - hits: 9275 (~72.50 MiB) from the buffer pool  
  - reads: 1937 (~15.10 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1284 (~10.00 MiB)  
  - writes: 1 (~8.00 KiB)  
"merge_requests"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136966

UPDATE
    "merge_requests"
SET
    "author_id" = 1
WHERE ("merge_requests"."id") IN (
    SELECT
        "merge_requests"."id"
    FROM
        "merge_requests"
    WHERE
        "merge_requests"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_requests  (cost=27.72..1827.15 rows=0 width=0) (actual time=14963.319..14963.322 rows=0 loops=1)
   Buffers: shared hit=50397 read=13392 dirtied=8016 written=5
   WAL: records=13760 fpi=7509 bytes=51258804
   I/O Timings: read=14281.653 write=4.106
   ->  Nested Loop  (cost=27.72..1827.15 rows=500 width=38) (actual time=234.606..1713.934 rows=360 loops=1)
         Buffers: shared hit=876 read=1280 dirtied=3
         WAL: records=3 fpi=3 bytes=16595
         I/O Timings: read=1688.414 write=0.000
         ->  HashAggregate  (cost=27.15..32.15 rows=500 width=32) (actual time=226.623..228.369 rows=360 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=70 read=286 dirtied=3
               WAL: records=3 fpi=3 bytes=16595
               I/O Timings: read=220.004 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..25.90 rows=500 width=32) (actual time=7.105..225.820 rows=360 loops=1)
                     Buffers: shared hit=70 read=286 dirtied=3
                     WAL: records=3 fpi=3 bytes=16595
                     I/O Timings: read=220.004 write=0.000
                     ->  Limit  (cost=0.57..20.90 rows=500 width=4) (actual time=7.083..225.147 rows=360 loops=1)
                           Buffers: shared hit=70 read=286 dirtied=3
                           WAL: records=3 fpi=3 bytes=16595
                           I/O Timings: read=220.004 write=0.000
                           ->  Index Only Scan using index_merge_requests_on_author_id_and_id on public.merge_requests merge_requests_1  (cost=0.57..33.01 rows=798 width=4) (actual time=7.081..224.962 rows=360 loops=1)
                                 Index Cond: (merge_requests_1.author_id = 11165152)
                                 Heap Fetches: 8
                                 Buffers: shared hit=70 read=286 dirtied=3
                                 WAL: records=3 fpi=3 bytes=16595
                                 I/O Timings: read=220.004 write=0.000
         ->  Index Scan using merge_requests_pkey on public.merge_requests  (cost=0.57..3.59 rows=1 width=10) (actual time=4.119..4.119 rows=1 loops=360)
               Index Cond: (merge_requests.id = "ANY_subquery".id)
               Buffers: shared hit=806 read=994
               I/O Timings: read=1468.410 write=0.000
Trigger RI_ConstraintTrigger_c_30857 for constraint fk_e719a85f8a: time=5.759 calls=360
Trigger trigger_ecc2780007c2 for constraint : time=1688.251 calls=360
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 14.974 s  
  - planning: 4.633 ms  
  - execution: 14.969 s  
    - I/O read: 14.282 s  
    - I/O write: 4.106 ms  
  
Shared buffers:  
  - hits: 50397 (~393.70 MiB) from the buffer pool  
  - reads: 13392 (~104.60 MiB) from the OS file cache, including disk I/O  
  - dirtied: 8016 (~62.60 MiB)  
  - writes: 5 (~40.00 KiB)  
"merge_requests"."merge_user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136968

UPDATE
    "merge_requests"
SET
    "merge_user_id" = 1
WHERE ("merge_requests"."id") IN (
    SELECT
        "merge_requests"."id"
    FROM
        "merge_requests"
    WHERE
        "merge_requests"."merge_user_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_requests  (cost=700.64..2489.27 rows=0 width=0) (actual time=6209.875..6209.879 rows=0 loops=1)
   Buffers: shared hit=26737 read=5289 dirtied=3715
   WAL: records=7181 fpi=3543 bytes=24030628
   I/O Timings: read=5884.820 write=0.000
   ->  Nested Loop  (cost=700.64..2489.27 rows=497 width=38) (actual time=285.033..590.566 rows=180 loops=1)
         Buffers: shared hit=669 read=430
         WAL: records=24 fpi=0 bytes=1368
         I/O Timings: read=576.243 write=0.000
         ->  HashAggregate  (cost=700.07..705.04 rows=497 width=32) (actual time=284.123..285.338 rows=180 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=31 read=156
               WAL: records=24 fpi=0 bytes=1368
               I/O Timings: read=279.084 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.44..698.83 rows=497 width=32) (actual time=3.010..283.346 rows=180 loops=1)
                     Buffers: shared hit=31 read=156
                     WAL: records=24 fpi=0 bytes=1368
                     I/O Timings: read=279.084 write=0.000
                     ->  Limit  (cost=0.44..693.86 rows=497 width=4) (actual time=2.981..282.645 rows=180 loops=1)
                           Buffers: shared hit=31 read=156
                           WAL: records=24 fpi=0 bytes=1368
                           I/O Timings: read=279.084 write=0.000
                           ->  Index Scan using index_merge_requests_on_merge_user_id on public.merge_requests merge_requests_1  (cost=0.44..693.86 rows=497 width=4) (actual time=2.979..282.463 rows=180 loops=1)
                                 Index Cond: (merge_requests_1.merge_user_id = 11165152)
                                 Buffers: shared hit=31 read=156
                                 WAL: records=24 fpi=0 bytes=1368
                                 I/O Timings: read=279.084 write=0.000
         ->  Index Scan using merge_requests_pkey on public.merge_requests  (cost=0.57..3.59 rows=1 width=10) (actual time=1.688..1.688 rows=1 loops=180)
               Index Cond: (merge_requests.id = "ANY_subquery".id)
               Buffers: shared hit=626 read=274
               I/O Timings: read=297.159 write=0.000
Trigger RI_ConstraintTrigger_c_1460002924 for constraint fk_ad525e1f87_tmp: time=4.618 calls=180
Trigger RI_ConstraintTrigger_c_30097 for constraint fk_ad525e1f87: time=1.850 calls=180
Trigger trigger_ecc2780007c2 for constraint : time=845.399 calls=180
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 6.221 s  
  - planning: 4.748 ms  
  - execution: 6.217 s  
    - I/O read: 5.885 s  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 26737 (~208.90 MiB) from the buffer pool  
  - reads: 5289 (~41.30 MiB) from the OS file cache, including disk I/O  
  - dirtied: 3715 (~29.00 MiB)  
  - writes: 0  
"merge_requests"."updated_by_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136969

UPDATE
    "merge_requests"
SET
    "updated_by_id" = 1
WHERE ("merge_requests"."id") IN (
    SELECT
        "merge_requests"."id"
    FROM
        "merge_requests"
    WHERE
        "merge_requests"."updated_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.merge_requests  (cost=664.58..2374.01 rows=0 width=0) (actual time=374.648..374.653 rows=0 loops=1)
   Buffers: shared hit=10131 read=336 dirtied=255
   WAL: records=2366 fpi=242 bytes=1820436
   I/O Timings: read=307.372 write=0.000
   ->  Nested Loop  (cost=664.58..2374.01 rows=475 width=38) (actual time=14.770..27.550 rows=64 loops=1)
         Buffers: shared hit=394 read=23
         WAL: records=42 fpi=0 bytes=2394
         I/O Timings: read=24.627 write=0.000
         ->  HashAggregate  (cost=664.01..668.76 rows=475 width=32) (actual time=14.740..14.929 rows=64 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=70 read=8
               WAL: records=42 fpi=0 bytes=2394
               I/O Timings: read=13.501 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..662.82 rows=475 width=32) (actual time=0.095..14.677 rows=64 loops=1)
                     Buffers: shared hit=70 read=8
                     WAL: records=42 fpi=0 bytes=2394
                     I/O Timings: read=13.501 write=0.000
                     ->  Limit  (cost=0.56..658.07 rows=475 width=4) (actual time=0.087..14.625 rows=64 loops=1)
                           Buffers: shared hit=70 read=8
                           WAL: records=42 fpi=0 bytes=2394
                           I/O Timings: read=13.501 write=0.000
                           ->  Index Scan using index_merge_requests_on_updated_by_id on public.merge_requests merge_requests_1  (cost=0.56..658.07 rows=475 width=4) (actual time=0.086..14.607 rows=64 loops=1)
                                 Index Cond: (merge_requests_1.updated_by_id = 11165152)
                                 Buffers: shared hit=70 read=8
                                 WAL: records=42 fpi=0 bytes=2394
                                 I/O Timings: read=13.501 write=0.000
         ->  Index Scan using merge_requests_pkey on public.merge_requests  (cost=0.57..3.59 rows=1 width=10) (actual time=0.193..0.193 rows=1 loops=64)
               Index Cond: (merge_requests.id = "ANY_subquery".id)
               Buffers: shared hit=305 read=15
               I/O Timings: read=11.126 write=0.000
Trigger RI_ConstraintTrigger_c_1460002394 for constraint fk_641731faff_tmp: time=3.138 calls=64
Trigger RI_ConstraintTrigger_c_29295 for constraint fk_641731faff: time=0.822 calls=64
Trigger trigger_ecc2780007c2 for constraint : time=51.119 calls=64
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 383.863 ms  
  - planning: 4.985 ms  
  - execution: 378.878 ms  
    - I/O read: 307.372 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 10131 (~79.10 MiB) from the buffer pool  
  - reads: 336 (~2.60 MiB) from the OS file cache, including disk I/O  
  - dirtied: 255 (~2.00 MiB)  
  - writes: 0  
"notes"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136970

UPDATE
    "notes"
SET
    "author_id" = 1
WHERE ("notes"."id") IN (
    SELECT
        "notes"."id"
    FROM
        "notes"
    WHERE
        "notes"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.notes  (cost=31.17..1834.34 rows=0 width=0) (actual time=5603.456..5603.461 rows=0 loops=1)
   Buffers: shared hit=25609 read=3943 dirtied=2451 written=44
   WAL: records=7271 fpi=2288 bytes=17908430
   I/O Timings: read=5241.573 write=8.226
   ->  Nested Loop  (cost=31.17..1834.34 rows=500 width=42) (actual time=117.715..1234.537 rows=500 loops=1)
         Buffers: shared hit=1783 read=1028 dirtied=1
         WAL: records=1 fpi=1 bytes=7869
         I/O Timings: read=1208.823 write=0.000
         ->  HashAggregate  (cost=30.59..35.59 rows=500 width=40) (actual time=111.803..114.099 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=185 read=126 dirtied=1
               WAL: records=1 fpi=1 bytes=7869
               I/O Timings: read=109.511 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.70..29.34 rows=500 width=40) (actual time=11.762..111.270 rows=500 loops=1)
                     Buffers: shared hit=185 read=126 dirtied=1
                     WAL: records=1 fpi=1 bytes=7869
                     I/O Timings: read=109.511 write=0.000
                     ->  Limit  (cost=0.70..24.34 rows=500 width=8) (actual time=11.736..110.965 rows=500 loops=1)
                           Buffers: shared hit=185 read=126 dirtied=1
                           WAL: records=1 fpi=1 bytes=7869
                           I/O Timings: read=109.511 write=0.000
                           ->  Index Only Scan using index_notes_on_author_id_and_created_at_and_id on public.notes notes_1  (cost=0.70..293.07 rows=6184 width=8) (actual time=11.733..110.876 rows=500 loops=1)
                                 Index Cond: (notes_1.author_id = 11165152)
                                 Heap Fetches: 2
                                 Buffers: shared hit=185 read=126 dirtied=1
                                 WAL: records=1 fpi=1 bytes=7869
                                 I/O Timings: read=109.511 write=0.000
         ->  Index Scan using notes_pkey on public.notes  (cost=0.58..3.60 rows=1 width=14) (actual time=2.234..2.234 rows=1 loops=500)
               Index Cond: (notes.id = "ANY_subquery".id)
               Buffers: shared hit=1598 read=902
               I/O Timings: read=1099.313 write=0.000
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB'
Time: 5.616 s  
  - planning: 12.517 ms  
  - execution: 5.604 s  
    - I/O read: 5.242 s  
    - I/O write: 8.226 ms  
  
Shared buffers:  
  - hits: 25609 (~200.10 MiB) from the buffer pool  
  - reads: 3943 (~30.80 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2451 (~19.10 MiB)  
  - writes: 44 (~352.00 KiB) 
"p_ci_builds"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-ci/sessions/44644/commands/136982

UPDATE
    "p_ci_builds"
SET
    "user_id" = 1
WHERE ("p_ci_builds"."id") IN (
    SELECT
        "p_ci_builds"."id"
    FROM
        "p_ci_builds"
    WHERE
        "p_ci_builds"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.p_ci_builds  (cost=750.61..15191.28 rows=0 width=0) (actual time=4996.861..4996.869 rows=0 loops=1)
   Buffers: shared hit=48971 read=3450 dirtied=2165 written=4
   WAL: records=7692 fpi=2060 bytes=14251675
   ->  Nested Loop  (cost=750.61..15191.28 rows=500 width=50) (actual time=615.897..987.243 rows=500 loops=1)
         Buffers: shared hit=16359 read=562
         ->  HashAggregate  (cost=750.03..755.03 rows=500 width=40) (actual time=600.503..601.258 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=3 read=418
               ->  Subquery Scan on ANY_subquery  (cost=0.58..748.78 rows=500 width=40) (actual time=7.151..599.953 rows=500 loops=1)
                     Buffers: shared hit=3 read=418
                     ->  Limit  (cost=0.58..743.78 rows=500 width=8) (actual time=7.133..599.421 rows=500 loops=1)
                           Buffers: shared hit=3 read=418
                           ->  Append  (cost=0.58..83529.84 rows=56196 width=8) (actual time=7.132..599.317 rows=500 loops=1)
                                 Buffers: shared hit=3 read=418
                                 ->  Index Scan using index_ci_builds_on_user_id on gitlab_partitions_dynamic.ci_builds p_ci_builds_11  (cost=0.58..32919.99 rows=22869 width=8) (actual time=7.130..599.195 rows=500 loops=1)
                                       Index Cond: (p_ci_builds_11.user_id = 11165152)
                                       Buffers: shared hit=3 read=418
                                 ->  Index Scan using ci_builds_101_user_id_idx on gitlab_partitions_dynamic.ci_builds_101 p_ci_builds_12  (cost=0.58..11409.57 rows=7535 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_12.user_id = 11165152)
                                 ->  Index Scan using ci_builds_102_user_id_idx on gitlab_partitions_dynamic.ci_builds_102 p_ci_builds_13  (cost=0.58..21471.83 rows=14277 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_13.user_id = 11165152)
                                 ->  Index Scan using ci_builds_103_user_id_idx on gitlab_partitions_dynamic.ci_builds_103 p_ci_builds_14  (cost=0.57..3824.57 rows=2520 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_14.user_id = 11165152)
                                 ->  Index Scan using ci_builds_104_user_id_idx on gitlab_partitions_dynamic.ci_builds_104 p_ci_builds_15  (cost=0.57..4373.80 rows=2898 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_15.user_id = 11165152)
                                 ->  Index Scan using ci_builds_105_user_id_idx on gitlab_partitions_dynamic.ci_builds_105 p_ci_builds_16  (cost=0.57..3904.40 rows=2567 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_16.user_id = 11165152)
                                 ->  Index Scan using ci_builds_106_user_id_idx on gitlab_partitions_dynamic.ci_builds_106 p_ci_builds_17  (cost=0.57..4140.94 rows=2738 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_17.user_id = 11165152)
                                 ->  Index Scan using ci_builds_107_user_id_idx on gitlab_partitions_dynamic.ci_builds_107 p_ci_builds_18  (cost=0.57..1203.77 rows=791 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_builds_18.user_id = 11165152)
                                 ->  Seq Scan on gitlab_partitions_dynamic.ci_builds_108 p_ci_builds_19  (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Filter: (p_ci_builds_19.user_id = 11165152)
                                       Rows Removed by Filter: 0
         ->  Append  (cost=0.58..28.78 rows=9 width=18) (actual time=0.626..0.770 rows=1 loops=500)
               Buffers: shared hit=16356 read=144
               ->  Index Scan using ci_builds_pkey on gitlab_partitions_dynamic.ci_builds p_ci_builds_1  (cost=0.58..3.60 rows=1 width=18) (actual time=0.625..0.627 rows=1 loops=500)
                     Index Cond: (p_ci_builds_1.id = "ANY_subquery".id)
                     Buffers: shared hit=2384 read=116
               ->  Index Scan using ci_builds_101_pkey on gitlab_partitions_dynamic.ci_builds_101 p_ci_builds_2  (cost=0.58..3.59 rows=1 width=18) (actual time=0.011..0.011 rows=0 loops=500)
                     Index Cond: (p_ci_builds_2.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_102_pkey on gitlab_partitions_dynamic.ci_builds_102 p_ci_builds_3  (cost=0.58..3.60 rows=1 width=18) (actual time=0.021..0.021 rows=0 loops=500)
                     Index Cond: (p_ci_builds_3.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_103_pkey on gitlab_partitions_dynamic.ci_builds_103 p_ci_builds_4  (cost=0.57..3.59 rows=1 width=18) (actual time=0.028..0.028 rows=0 loops=500)
                     Index Cond: (p_ci_builds_4.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_104_pkey on gitlab_partitions_dynamic.ci_builds_104 p_ci_builds_5  (cost=0.57..3.59 rows=1 width=18) (actual time=0.009..0.009 rows=0 loops=500)
                     Index Cond: (p_ci_builds_5.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_105_pkey on gitlab_partitions_dynamic.ci_builds_105 p_ci_builds_6  (cost=0.57..3.59 rows=1 width=18) (actual time=0.018..0.018 rows=0 loops=500)
                     Index Cond: (p_ci_builds_6.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_106_pkey on gitlab_partitions_dynamic.ci_builds_106 p_ci_builds_7  (cost=0.57..3.59 rows=1 width=18) (actual time=0.011..0.011 rows=0 loops=500)
                     Index Cond: (p_ci_builds_7.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_builds_107_pkey on gitlab_partitions_dynamic.ci_builds_107 p_ci_builds_8  (cost=0.57..3.58 rows=1 width=18) (actual time=0.025..0.025 rows=0 loops=500)
                     Index Cond: (p_ci_builds_8.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Seq Scan on gitlab_partitions_dynamic.ci_builds_108 p_ci_builds_9  (cost=0.00..0.00 rows=1 width=18) (actual time=0.001..0.001 rows=0 loops=500)
                     Filter: ("ANY_subquery".id = p_ci_builds_9.id)
                     Rows Removed by Filter: 0
Settings: effective_cache_size = '338688MB', random_page_cost = '1.5', jit = 'off', seq_page_cost = '4', work_mem = '100MB'
Time: 5.047 s  
  - planning: 49.849 ms  
  - execution: 4.998 s  
    - I/O read: N/A  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 48971 (~382.60 MiB) from the buffer pool  
  - reads: 3450 (~27.00 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2165 (~16.90 MiB)  
  - writes: 4 (~32.00 KiB)  
"p_ci_pipelines"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-ci/sessions/44644/commands/136983

UPDATE
    "p_ci_pipelines"
SET
    "user_id" = 1,
    "lock_version" = COALESCE("lock_version", 0) + 1
WHERE ("p_ci_pipelines"."id") IN (
    SELECT
        "p_ci_pipelines"."id"
    FROM
        "p_ci_pipelines"
    WHERE
        "p_ci_pipelines"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.p_ci_pipelines  (cost=729.15..11542.32 rows=0 width=0) (actual time=6562.250..6562.257 rows=0 loops=1)
   Buffers: shared hit=50900 read=6462 dirtied=4980 written=10
   WAL: records=10154 fpi=4771 bytes=34653078
   ->  Nested Loop  (cost=729.15..11542.32 rows=500 width=50) (actual time=950.529..1636.743 rows=500 loops=1)
         Buffers: shared hit=11800 read=1200 dirtied=2
         WAL: records=2 fpi=2 bytes=15690
         ->  HashAggregate  (cost=728.57..733.57 rows=500 width=40) (actual time=941.092..942.523 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=2 read=498 dirtied=2
               WAL: records=2 fpi=2 bytes=15690
               ->  Subquery Scan on ANY_subquery  (cost=0.58..727.32 rows=500 width=40) (actual time=10.644..939.496 rows=500 loops=1)
                     Buffers: shared hit=2 read=498 dirtied=2
                     WAL: records=2 fpi=2 bytes=15690
                     ->  Limit  (cost=0.58..722.32 rows=500 width=8) (actual time=10.622..938.163 rows=500 loops=1)
                           Buffers: shared hit=2 read=498 dirtied=2
                           WAL: records=2 fpi=2 bytes=15690
                           ->  Append  (cost=0.58..9299.50 rows=6442 width=8) (actual time=10.620..937.850 rows=500 loops=1)
                                 Buffers: shared hit=2 read=498 dirtied=2
                                 WAL: records=2 fpi=2 bytes=15690
                                 ->  Index Scan using index_ci_pipelines_on_user_id_and_created_at_and_source on gitlab_partitions_dynamic.ci_pipelines p_ci_pipelines_9  (cost=0.58..7117.92 rows=5032 width=8) (actual time=10.618..937.520 rows=500 loops=1)
                                       Index Cond: (p_ci_pipelines_9.user_id = 11165152)
                                       Buffers: shared hit=2 read=498 dirtied=2
                                       WAL: records=2 fpi=2 bytes=15690
                                 ->  Index Scan using ci_pipelines_103_user_id_created_at_source_idx on gitlab_partitions_dynamic.ci_pipelines_103 p_ci_pipelines_10  (cost=0.56..466.09 rows=305 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_pipelines_10.user_id = 11165152)
                                 ->  Index Scan using ci_pipelines_104_user_id_created_at_source_idx on gitlab_partitions_dynamic.ci_pipelines_104 p_ci_pipelines_11  (cost=0.57..549.21 rows=362 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_pipelines_11.user_id = 11165152)
                                 ->  Index Scan using ci_pipelines_105_user_id_created_at_source_idx on gitlab_partitions_dynamic.ci_pipelines_105 p_ci_pipelines_12  (cost=0.56..480.03 rows=314 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_pipelines_12.user_id = 11165152)
                                 ->  Index Scan using ci_pipelines_106_user_id_created_at_source_idx on gitlab_partitions_dynamic.ci_pipelines_106 p_ci_pipelines_13  (cost=0.57..510.85 rows=335 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_pipelines_13.user_id = 11165152)
                                 ->  Index Scan using ci_pipelines_107_user_id_created_at_source_idx on gitlab_partitions_dynamic.ci_pipelines_107 p_ci_pipelines_14  (cost=0.56..143.19 rows=93 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Index Cond: (p_ci_pipelines_14.user_id = 11165152)
                                 ->  Seq Scan on gitlab_partitions_dynamic.ci_pipelines_108 p_ci_pipelines_15  (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=0)
                                       Filter: (p_ci_pipelines_15.user_id = 11165152)
                                       Rows Removed by Filter: 0
         ->  Append  (cost=0.58..21.54 rows=7 width=22) (actual time=1.306..1.384 rows=1 loops=500)
               Buffers: shared hit=11798 read=702
               ->  Index Scan using ci_pipelines_pkey on gitlab_partitions_dynamic.ci_pipelines p_ci_pipelines_1  (cost=0.58..3.59 rows=1 width=22) (actual time=1.305..1.309 rows=1 loops=500)
                     Index Cond: (p_ci_pipelines_1.id = "ANY_subquery".id)
                     Buffers: shared hit=1818 read=682
               ->  Index Scan using ci_pipelines_103_pkey on gitlab_partitions_dynamic.ci_pipelines_103 p_ci_pipelines_2  (cost=0.56..3.58 rows=1 width=22) (actual time=0.014..0.014 rows=0 loops=500)
                     Index Cond: (p_ci_pipelines_2.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_pipelines_104_pkey on gitlab_partitions_dynamic.ci_pipelines_104 p_ci_pipelines_3  (cost=0.57..3.58 rows=1 width=22) (actual time=0.010..0.010 rows=0 loops=500)
                     Index Cond: (p_ci_pipelines_3.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_pipelines_105_pkey on gitlab_partitions_dynamic.ci_pipelines_105 p_ci_pipelines_4  (cost=0.56..3.58 rows=1 width=22) (actual time=0.010..0.010 rows=0 loops=500)
                     Index Cond: (p_ci_pipelines_4.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_pipelines_106_pkey on gitlab_partitions_dynamic.ci_pipelines_106 p_ci_pipelines_5  (cost=0.57..3.58 rows=1 width=22) (actual time=0.011..0.011 rows=0 loops=500)
                     Index Cond: (p_ci_pipelines_5.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Index Scan using ci_pipelines_107_pkey on gitlab_partitions_dynamic.ci_pipelines_107 p_ci_pipelines_6  (cost=0.56..3.58 rows=1 width=22) (actual time=0.017..0.017 rows=0 loops=500)
                     Index Cond: (p_ci_pipelines_6.id = "ANY_subquery".id)
                     Buffers: shared hit=1996 read=4
               ->  Seq Scan on gitlab_partitions_dynamic.ci_pipelines_108 p_ci_pipelines_7  (cost=0.00..0.00 rows=1 width=22) (actual time=0.001..0.001 rows=0 loops=500)
                     Filter: ("ANY_subquery".id = p_ci_pipelines_7.id)
                     Rows Removed by Filter: 0
Settings: jit = 'off', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '338688MB', random_page_cost = '1.5'
Time: 6.579 s  
  - planning: 16.045 ms  
  - execution: 6.563 s  
    - I/O read: N/A  
    - I/O write: N/A  
  
Shared buffers:  
  - hits: 50900 (~397.70 MiB) from the buffer pool  
  - reads: 6462 (~50.50 MiB) from the OS file cache, including disk I/O  
  - dirtied: 4980 (~38.90 MiB)  
  - writes: 10 (~80.00 KiB)  
"protected_branch_merge_access_levels"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136971

UPDATE
    "protected_branch_merge_access_levels"
SET
    "user_id" = 1
WHERE ("protected_branch_merge_access_levels"."id") IN (
    SELECT
        "protected_branch_merge_access_levels"."id"
    FROM
        "protected_branch_merge_access_levels"
    WHERE
        "protected_branch_merge_access_levels"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.protected_branch_merge_access_levels  (cost=71.88..286.87 rows=0 width=0) (actual time=66.229..66.232 rows=0 loops=1)
   Buffers: shared hit=255 read=61 dirtied=19
   WAL: records=26 fpi=18 bytes=124703
   I/O Timings: read=61.877 write=0.000
   ->  Nested Loop  (cost=71.88..286.87 rows=60 width=38) (actual time=11.953..13.881 rows=3 loops=1)
         Buffers: shared hit=10 read=12
         I/O Timings: read=13.495 write=0.000
         ->  HashAggregate  (cost=71.32..71.92 rows=60 width=32) (actual time=8.929..8.940 rows=3 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=7
               I/O Timings: read=8.687 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..71.17 rows=60 width=32) (actual time=6.570..8.910 rows=3 loops=1)
                     Buffers: shared read=7
                     I/O Timings: read=8.687 write=0.000
                     ->  Limit  (cost=0.56..70.57 rows=60 width=4) (actual time=6.537..8.869 rows=3 loops=1)
                           Buffers: shared read=7
                           I/O Timings: read=8.687 write=0.000
                           ->  Index Scan using index_protected_branch_merge_access_levels_on_user_id on public.protected_branch_merge_access_levels protected_branch_merge_access_levels_1  (cost=0.56..70.57 rows=60 width=4) (actual time=6.535..8.865 rows=3 loops=1)
                                 Index Cond: (protected_branch_merge_access_levels_1.user_id = 11165152)
                                 Buffers: shared read=7
                                 I/O Timings: read=8.687 write=0.000
         ->  Index Scan using protected_branch_merge_access_levels_pkey on public.protected_branch_merge_access_levels  (cost=0.56..3.58 rows=1 width=10) (actual time=1.642..1.642 rows=1 loops=3)
               Index Cond: (protected_branch_merge_access_levels.id = "ANY_subquery".id)
               Buffers: shared hit=10 read=5
               I/O Timings: read=4.807 write=0.000
Trigger RI_ConstraintTrigger_c_31210 for constraint fk_protected_branch_merge_access_levels_user_id: time=3.063 calls=3
Trigger trigger_05cc4448a8aa for constraint : time=21.354 calls=3
Trigger trigger_0aea02e5a699 for constraint : time=0.169 calls=3
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 70.631 ms  
  - planning: 1.188 ms  
  - execution: 69.443 ms  
    - I/O read: 61.877 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 255 (~2.00 MiB) from the buffer pool  
  - reads: 61 (~488.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 19 (~152.00 KiB)  
  - writes: 0
"protected_branch_push_access_levels"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136972

UPDATE
    "protected_branch_push_access_levels"
SET
    "user_id" = 1
WHERE ("protected_branch_push_access_levels"."id") IN (
    SELECT
        "protected_branch_push_access_levels"."id"
    FROM
        "protected_branch_push_access_levels"
    WHERE
        "protected_branch_push_access_levels"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.protected_branch_push_access_levels  (cost=108.31..441.84 rows=0 width=0) (actual time=48.463..48.466 rows=0 loops=1)
   Buffers: shared hit=324 read=45 dirtied=18
   WAL: records=30 fpi=17 bytes=106865
   I/O Timings: read=44.606 write=0.000
   ->  Nested Loop  (cost=108.31..441.84 rows=93 width=38) (actual time=17.051..18.422 rows=3 loops=1)
         Buffers: shared hit=11 read=11
         I/O Timings: read=18.091 write=0.000
         ->  HashAggregate  (cost=107.74..108.67 rows=93 width=32) (actual time=14.821..14.834 rows=3 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=7
               I/O Timings: read=14.638 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..107.51 rows=93 width=32) (actual time=11.285..14.807 rows=3 loops=1)
                     Buffers: shared read=7
                     I/O Timings: read=14.638 write=0.000
                     ->  Limit  (cost=0.56..106.58 rows=93 width=4) (actual time=11.267..14.780 rows=3 loops=1)
                           Buffers: shared read=7
                           I/O Timings: read=14.638 write=0.000
                           ->  Index Scan using index_protected_branch_push_access_levels_on_user_id on public.protected_branch_push_access_levels protected_branch_push_access_levels_1  (cost=0.56..106.58 rows=93 width=4) (actual time=11.265..14.775 rows=3 loops=1)
                                 Index Cond: (protected_branch_push_access_levels_1.user_id = 11165152)
                                 Buffers: shared read=7
                                 I/O Timings: read=14.638 write=0.000
         ->  Index Scan using protected_branch_push_access_levels_pkey on public.protected_branch_push_access_levels  (cost=0.56..3.58 rows=1 width=10) (actual time=1.189..1.189 rows=1 loops=3)
               Index Cond: (protected_branch_push_access_levels.id = "ANY_subquery".id)
               Buffers: shared hit=11 read=4
               I/O Timings: read=3.454 write=0.000
Trigger RI_ConstraintTrigger_c_31215 for constraint fk_protected_branch_push_access_levels_user_id: time=2.869 calls=3
Trigger trigger_009314eae986 for constraint : time=0.288 calls=3
Trigger trigger_62ad09879cf2 for constraint : time=0.488 calls=3
Trigger trigger_744ab45ee5ac for constraint : time=1.091 calls=3
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB'
Time: 52.932 ms  
  - planning: 1.442 ms  
  - execution: 51.490 ms  
    - I/O read: 44.606 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 324 (~2.50 MiB) from the buffer pool  
  - reads: 45 (~360.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 18 (~144.00 KiB)  
  - writes: 0
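
Every plan in this list exercises the same batched-update shape: repeatedly rewrite up to 500 rows selected by the old user ID, with a pause between batches. As a rough, self-contained sketch of that loop — using SQLite and a stand-in `notes` table, not the actual GitLab schema or reassignment service code:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, author_id INTEGER)")
# 1200 contributions by the placeholder user, 100 by an unrelated user
conn.executemany(
    "INSERT INTO notes (author_id) VALUES (?)",
    [(11165152,)] * 1200 + [(42,)] * 100,
)

PLACEHOLDER_USER_ID = 11165152
REASSIGNED_USER_ID = 1
BATCH_SIZE = 500

while True:
    # Same shape as the production queries above:
    # UPDATE ... WHERE id IN (SELECT id ... WHERE user col = old id LIMIT n)
    cur = conn.execute(
        """
        UPDATE notes SET author_id = ?
        WHERE id IN (SELECT id FROM notes WHERE author_id = ? LIMIT ?)
        """,
        (REASSIGNED_USER_ID, PLACEHOLDER_USER_ID, BATCH_SIZE),
    )
    conn.commit()
    if cur.rowcount == 0:
        break  # no rows left pointing at the placeholder user
    time.sleep(0.01)  # brief pause between batches to avoid saturating the DB
```

The `LIMIT`ed subquery keeps each statement's lock footprint and WAL volume bounded, which is why the per-batch plans above stay in the milliseconds-to-seconds range even on large tables.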
"protected_branch_unprotect_access_levels"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136973

UPDATE
    "protected_branch_unprotect_access_levels"
SET
    "user_id" = 1
WHERE ("protected_branch_unprotect_access_levels"."id") IN (
    SELECT
        "protected_branch_unprotect_access_levels"."id"
    FROM
        "protected_branch_unprotect_access_levels"
    WHERE
        "protected_branch_unprotect_access_levels"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.protected_branch_unprotect_access_levels  (cost=16.34..55.16 rows=0 width=0) (actual time=2.861..2.864 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=2.795 write=0.000
   ->  Nested Loop  (cost=16.34..55.16 rows=11 width=38) (actual time=2.860..2.862 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=2.795 write=0.000
         ->  HashAggregate  (cost=15.78..15.89 rows=11 width=32) (actual time=2.859..2.860 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=2.795 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.43..15.76 rows=11 width=32) (actual time=2.857..2.858 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=2.795 write=0.000
                     ->  Limit  (cost=0.43..15.65 rows=11 width=4) (actual time=2.856..2.856 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=2.795 write=0.000
                           ->  Index Scan using index_protected_branch_unprotect_access_levels_on_user_id on public.protected_branch_unprotect_access_levels protected_branch_unprotect_access_levels_1  (cost=0.43..15.65 rows=11 width=4) (actual time=2.854..2.854 rows=0 loops=1)
                                 Index Cond: (protected_branch_unprotect_access_levels_1.user_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=2.795 write=0.000
         ->  Index Scan using protected_branch_unprotect_access_levels_pkey on public.protected_branch_unprotect_access_levels  (cost=0.55..3.57 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (protected_branch_unprotect_access_levels.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 4.048 ms  
  - planning: 1.044 ms  
  - execution: 3.004 ms  
    - I/O read: 2.795 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 0 from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"protected_environment_deploy_access_levels"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136974

UPDATE
    "protected_environment_deploy_access_levels"
SET
    "user_id" = 1
WHERE ("protected_environment_deploy_access_levels"."id") IN (
    SELECT
        "protected_environment_deploy_access_levels"."id"
    FROM
        "protected_environment_deploy_access_levels"
    WHERE
        "protected_environment_deploy_access_levels"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.protected_environment_deploy_access_levels  (cost=8.16..21.54 rows=0 width=0) (actual time=1.857..1.860 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=1.759 write=0.000
   ->  Nested Loop  (cost=8.16..21.54 rows=4 width=38) (actual time=1.855..1.857 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=1.759 write=0.000
         ->  HashAggregate  (cost=7.74..7.78 rows=4 width=32) (actual time=1.853..1.855 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=1.759 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..7.73 rows=4 width=32) (actual time=1.850..1.851 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=1.759 write=0.000
                     ->  Limit  (cost=0.42..7.69 rows=4 width=4) (actual time=1.849..1.850 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=1.759 write=0.000
                           ->  Index Scan using index_protected_environment_deploy_access_levels_on_user_id on public.protected_environment_deploy_access_levels protected_environment_deploy_access_levels_1  (cost=0.42..7.69 rows=4 width=4) (actual time=1.847..1.847 rows=0 loops=1)
                                 Index Cond: (protected_environment_deploy_access_levels_1.user_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=1.759 write=0.000
         ->  Index Scan using protected_environment_deploy_access_levels_pkey on public.protected_environment_deploy_access_levels  (cost=0.42..3.44 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (protected_environment_deploy_access_levels.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 3.324 ms  
  - planning: 1.295 ms  
  - execution: 2.029 ms  
    - I/O read: 1.759 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 0 from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
  
"protected_tag_create_access_levels"."user_id"

UPDATE
    "protected_tag_create_access_levels"
SET
    "user_id" = 1
WHERE ("protected_tag_create_access_levels"."id") IN (
    SELECT
        "protected_tag_create_access_levels"."id"
    FROM
        "protected_tag_create_access_levels"
    WHERE
        "protected_tag_create_access_levels"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.protected_tag_create_access_levels  (cost=5.01..11.48 rows=0 width=0) (actual time=2.091..2.095 rows=0 loops=1)
   Buffers: shared hit=3 read=3
   I/O Timings: read=1.956 write=0.000
   ->  Nested Loop  (cost=5.01..11.48 rows=2 width=38) (actual time=2.089..2.091 rows=0 loops=1)
         Buffers: shared hit=3 read=3
         I/O Timings: read=1.956 write=0.000
         ->  Unique  (cost=4.59..4.60 rows=2 width=32) (actual time=2.087..2.089 rows=0 loops=1)
               Buffers: shared hit=3 read=3
               I/O Timings: read=1.956 write=0.000
               ->  Sort  (cost=4.59..4.59 rows=2 width=32) (actual time=2.086..2.088 rows=0 loops=1)
                     Sort Key: "ANY_subquery".id
                     Sort Method: quicksort  Memory: 25kB
                     Buffers: shared hit=3 read=3
                     I/O Timings: read=1.956 write=0.000
                     ->  Subquery Scan on ANY_subquery  (cost=0.42..4.58 rows=2 width=32) (actual time=2.042..2.043 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=1.956 write=0.000
                           ->  Limit  (cost=0.42..4.56 rows=2 width=4) (actual time=2.041..2.042 rows=0 loops=1)
                                 Buffers: shared read=3
                                 I/O Timings: read=1.956 write=0.000
                                 ->  Index Scan using index_protected_tag_create_access_levels_on_user_id on public.protected_tag_create_access_levels protected_tag_create_access_levels_1  (cost=0.42..4.56 rows=2 width=4) (actual time=2.037..2.038 rows=0 loops=1)
                                       Index Cond: (protected_tag_create_access_levels_1.user_id = 11165152)
                                       Buffers: shared read=3
                                       I/O Timings: read=1.956 write=0.000
         ->  Index Scan using protected_tag_create_access_levels_pkey on public.protected_tag_create_access_levels  (cost=0.42..3.44 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (protected_tag_create_access_levels.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5'
Time: 3.864 ms  
  - planning: 1.599 ms  
  - execution: 2.265 ms  
    - I/O read: 1.956 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 3 (~24.00 KiB) from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"releases"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136975

UPDATE
    "releases"
SET
    "author_id" = 1
WHERE ("releases"."id") IN (
    SELECT
        "releases"."id"
    FROM
        "releases"
    WHERE
        "releases"."author_id" = 11165152
    LIMIT 500)
"resource_iteration_events"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136976

UPDATE
    "resource_iteration_events"
SET
    "user_id" = 1
WHERE ("resource_iteration_events"."id") IN (
    SELECT
        "resource_iteration_events"."id"
    FROM
        "resource_iteration_events"
    WHERE
        "resource_iteration_events"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.resource_iteration_events  (cost=166.42..671.51 rows=0 width=0) (actual time=5.032..5.035 rows=0 loops=1)
   Buffers: shared hit=3 read=3
   I/O Timings: read=4.943 write=0.000
   ->  Nested Loop  (cost=166.42..671.51 rows=146 width=46) (actual time=5.030..5.033 rows=0 loops=1)
         Buffers: shared hit=3 read=3
         I/O Timings: read=4.943 write=0.000
         ->  HashAggregate  (cost=165.99..167.45 rows=146 width=40) (actual time=5.029..5.032 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=3 read=3
               I/O Timings: read=4.943 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.43..165.62 rows=146 width=40) (actual time=5.025..5.027 rows=0 loops=1)
                     Buffers: shared hit=3 read=3
                     I/O Timings: read=4.943 write=0.000
                     ->  Limit  (cost=0.43..164.16 rows=146 width=8) (actual time=5.024..5.024 rows=0 loops=1)
                           Buffers: shared hit=3 read=3
                           I/O Timings: read=4.943 write=0.000
                           ->  Index Scan using index_resource_iteration_events_on_user_id on public.resource_iteration_events resource_iteration_events_1  (cost=0.43..164.16 rows=146 width=8) (actual time=5.022..5.022 rows=0 loops=1)
                                 Index Cond: (resource_iteration_events_1.user_id = 11165152)
                                 Buffers: shared hit=3 read=3
                                 I/O Timings: read=4.943 write=0.000
         ->  Index Scan using resource_iteration_events_pkey on public.resource_iteration_events  (cost=0.43..3.45 rows=1 width=14) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (resource_iteration_events.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 6.411 ms  
  - planning: 1.242 ms  
  - execution: 5.169 ms  
    - I/O read: 4.943 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 3 (~24.00 KiB) from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0 
"resource_label_events"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136977

UPDATE
    "resource_label_events"
SET
    "user_id" = 1
WHERE ("resource_label_events"."id") IN (
    SELECT
        "resource_label_events"."id"
    FROM
        "resource_label_events"
    WHERE
        "resource_label_events"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.resource_label_events  (cost=716.95..2517.63 rows=0 width=0) (actual time=2404.069..2404.074 rows=0 loops=1)
   Buffers: shared hit=17728 read=1800 dirtied=1125 written=1
   WAL: records=4515 fpi=1104 bytes=8060151
   I/O Timings: read=2271.100 write=0.798
   ->  Nested Loop  (cost=716.95..2517.63 rows=500 width=42) (actual time=406.267..898.109 rows=500 loops=1)
         Buffers: shared hit=2058 read=728 dirtied=43
         WAL: records=48 fpi=43 bytes=347414
         I/O Timings: read=870.293 write=0.000
         ->  HashAggregate  (cost=716.38..721.38 rows=500 width=40) (actual time=401.771..403.107 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=286 dirtied=43
               WAL: records=48 fpi=43 bytes=347414
               I/O Timings: read=387.336 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..715.13 rows=500 width=40) (actual time=11.857..400.902 rows=500 loops=1)
                     Buffers: shared read=286 dirtied=43
                     WAL: records=48 fpi=43 bytes=347414
                     I/O Timings: read=387.336 write=0.000
                     ->  Limit  (cost=0.57..710.13 rows=500 width=8) (actual time=11.838..400.204 rows=500 loops=1)
                           Buffers: shared read=286 dirtied=43
                           WAL: records=48 fpi=43 bytes=347414
                           I/O Timings: read=387.336 write=0.000
                           ->  Index Scan using index_resource_label_events_on_user_id on public.resource_label_events resource_label_events_1  (cost=0.57..4159.97 rows=2931 width=8) (actual time=11.836..399.997 rows=500 loops=1)
                                 Index Cond: (resource_label_events_1.user_id = 11165152)
                                 Buffers: shared read=286 dirtied=43
                                 WAL: records=48 fpi=43 bytes=347414
                                 I/O Timings: read=387.336 write=0.000
         ->  Index Scan using resource_label_events_pkey on public.resource_label_events  (cost=0.57..3.59 rows=1 width=14) (actual time=0.986..0.986 rows=1 loops=500)
               Index Cond: (resource_label_events.id = "ANY_subquery".id)
               Buffers: shared hit=2058 read=442
               I/O Timings: read=482.957 write=0.000
Trigger RI_ConstraintTrigger_c_35213 for constraint fk_rails_fe91ece594: time=6.091 calls=500
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 2.413 s  
  - planning: 2.396 ms  
  - execution: 2.411 s  
    - I/O read: 2.271 s  
    - I/O write: 0.798 ms  
  
Shared buffers:  
  - hits: 17728 (~138.50 MiB) from the buffer pool  
  - reads: 1800 (~14.10 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1125 (~8.80 MiB)  
  - writes: 1 (~8.00 KiB)
"resource_milestone_events"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136978

UPDATE
    "resource_milestone_events"
SET
    "user_id" = 1
WHERE ("resource_milestone_events"."id") IN (
    SELECT
        "resource_milestone_events"."id"
    FROM
        "resource_milestone_events"
    WHERE
        "resource_milestone_events"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.resource_milestone_events  (cost=321.92..1142.69 rows=0 width=0) (actual time=2256.088..2256.093 rows=0 loops=1)
   Buffers: shared hit=15696 read=2377 dirtied=1916 written=7
   WAL: records=5004 fpi=1897 bytes=12823257
   I/O Timings: read=2132.222 write=1.112
   ->  Nested Loop  (cost=321.92..1142.69 rows=237 width=46) (actual time=507.894..932.998 rows=500 loops=1)
         Buffers: shared hit=1465 read=1035 dirtied=9
         WAL: records=11 fpi=9 bytes=72603
         I/O Timings: read=910.164 write=0.000
         ->  HashAggregate  (cost=321.49..323.86 rows=237 width=40) (actual time=505.625..507.266 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=5 read=495 dirtied=9
               WAL: records=11 fpi=9 bytes=72603
               I/O Timings: read=496.758 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.56..320.89 rows=237 width=40) (actual time=6.570..504.439 rows=500 loops=1)
                     Buffers: shared hit=5 read=495 dirtied=9
                     WAL: records=11 fpi=9 bytes=72603
                     I/O Timings: read=496.758 write=0.000
                     ->  Limit  (cost=0.56..318.52 rows=237 width=8) (actual time=6.550..503.492 rows=500 loops=1)
                           Buffers: shared hit=5 read=495 dirtied=9
                           WAL: records=11 fpi=9 bytes=72603
                           I/O Timings: read=496.758 write=0.000
                           ->  Index Scan using index_resource_milestone_events_on_user_id on public.resource_milestone_events resource_milestone_events_1  (cost=0.56..318.52 rows=237 width=8) (actual time=6.547..503.190 rows=500 loops=1)
                                 Index Cond: (resource_milestone_events_1.user_id = 11165152)
                                 Buffers: shared hit=5 read=495 dirtied=9
                                 WAL: records=11 fpi=9 bytes=72603
                                 I/O Timings: read=496.758 write=0.000
         ->  Index Scan using resource_milestone_events_pkey on public.resource_milestone_events  (cost=0.44..3.46 rows=1 width=14) (actual time=0.847..0.847 rows=1 loops=500)
               Index Cond: (resource_milestone_events.id = "ANY_subquery".id)
               Buffers: shared hit=1460 read=540
               I/O Timings: read=413.406 write=0.000
Trigger RI_ConstraintTrigger_c_34484 for constraint fk_rails_cedf8cce4d: time=6.368 calls=500
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 2.264 s  
  - planning: 1.523 ms  
  - execution: 2.263 s  
    - I/O read: 2.132 s  
    - I/O write: 1.112 ms  
  
Shared buffers:  
  - hits: 15696 (~122.60 MiB) from the buffer pool  
  - reads: 2377 (~18.60 MiB) from the OS file cache, including disk I/O  
  - dirtied: 1916 (~15.00 MiB)  
  - writes: 7 (~56.00 KiB)
"resource_state_events"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136979

UPDATE
    "resource_state_events"
SET
    "user_id" = 1
WHERE ("resource_state_events"."id") IN (
    SELECT
        "resource_state_events"."id"
    FROM
        "resource_state_events"
    WHERE
        "resource_state_events"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.resource_state_events  (cost=635.53..2434.96 rows=0 width=0) (actual time=3890.054..3890.058 rows=0 loops=1)
   Buffers: shared hit=15625 read=3234 dirtied=2042 written=2
   WAL: records=4465 fpi=1962 bytes=14929059
   I/O Timings: read=3719.445 write=0.107
   ->  Nested Loop  (cost=635.53..2434.96 rows=500 width=46) (actual time=790.672..1646.091 rows=500 loops=1)
         Buffers: shared hit=1693 read=1295 dirtied=5
         WAL: records=9 fpi=5 bytes=41103
         I/O Timings: read=1613.562 write=0.000
         ->  HashAggregate  (cost=634.96..639.96 rows=500 width=40) (actual time=786.424..788.194 rows=500 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=3 read=485 dirtied=5
               WAL: records=9 fpi=5 bytes=41103
               I/O Timings: read=771.572 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..633.71 rows=500 width=40) (actual time=13.366..784.115 rows=500 loops=1)
                     Buffers: shared hit=3 read=485 dirtied=5
                     WAL: records=9 fpi=5 bytes=41103
                     I/O Timings: read=771.572 write=0.000
                     ->  Limit  (cost=0.57..628.71 rows=500 width=8) (actual time=13.342..782.594 rows=500 loops=1)
                           Buffers: shared hit=3 read=485 dirtied=5
                           WAL: records=9 fpi=5 bytes=41103
                           I/O Timings: read=771.572 write=0.000
                           ->  Index Scan using index_resource_state_events_on_user_id on public.resource_state_events resource_state_events_1  (cost=0.57..1034.48 rows=823 width=8) (actual time=13.340..782.107 rows=500 loops=1)
                                 Index Cond: (resource_state_events_1.user_id = 11165152)
                                 Buffers: shared hit=3 read=485 dirtied=5
                                 WAL: records=9 fpi=5 bytes=41103
                                 I/O Timings: read=771.572 write=0.000
         ->  Index Scan using resource_state_events_pkey on public.resource_state_events  (cost=0.57..3.59 rows=1 width=14) (actual time=1.710..1.710 rows=1 loops=500)
               Index Cond: (resource_state_events.id = "ANY_subquery".id)
               Buffers: shared hit=1690 read=810
               I/O Timings: read=841.990 write=0.000
Trigger RI_ConstraintTrigger_c_35119 for constraint fk_rails_f5827a7ccd: time=10.594 calls=500
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off'
Time: 3.903 s  
  - planning: 1.778 ms  
  - execution: 3.901 s  
    - I/O read: 3.719 s  
    - I/O write: 0.107 ms  
  
Shared buffers:  
  - hits: 15625 (~122.10 MiB) from the buffer pool  
  - reads: 3234 (~25.30 MiB) from the OS file cache, including disk I/O  
  - dirtied: 2042 (~16.00 MiB)  
  - writes: 2 (~16.00 KiB)  
"snippets"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136980

UPDATE
    "snippets"
SET
    "author_id" = 1
WHERE ("snippets"."id") IN (
    SELECT
        "snippets"."id"
    FROM
        "snippets"
    WHERE
        "snippets"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.snippets  (cost=9.88..26.72 rows=0 width=0) (actual time=2.345..2.348 rows=0 loops=1)
   Buffers: shared read=3
   I/O Timings: read=2.270 write=0.000
   ->  Nested Loop  (cost=9.88..26.72 rows=5 width=38) (actual time=2.343..2.346 rows=0 loops=1)
         Buffers: shared read=3
         I/O Timings: read=2.270 write=0.000
         ->  HashAggregate  (cost=9.45..9.50 rows=5 width=32) (actual time=2.342..2.344 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=3
               I/O Timings: read=2.270 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.42..9.44 rows=5 width=32) (actual time=2.338..2.340 rows=0 loops=1)
                     Buffers: shared read=3
                     I/O Timings: read=2.270 write=0.000
                     ->  Limit  (cost=0.42..9.39 rows=5 width=4) (actual time=2.338..2.338 rows=0 loops=1)
                           Buffers: shared read=3
                           I/O Timings: read=2.270 write=0.000
                           ->  Index Scan using index_snippets_on_author_id on public.snippets snippets_1  (cost=0.42..9.39 rows=5 width=4) (actual time=2.336..2.336 rows=0 loops=1)
                                 Index Cond: (snippets_1.author_id = 11165152)
                                 Buffers: shared read=3
                                 I/O Timings: read=2.270 write=0.000
         ->  Index Scan using index_snippets_on_id_and_type on public.snippets  (cost=0.42..3.44 rows=1 width=10) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (snippets.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5'
Time: 4.296 ms  
  - planning: 1.792 ms  
  - execution: 2.504 ms  
    - I/O read: 2.270 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 0 from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"timelogs"."user_id"

https://postgres.ai/console/gitlab/gitlab-production-main/sessions/44640/commands/136981

UPDATE
    "timelogs"
SET
    "user_id" = 1
WHERE ("timelogs"."id") IN (
    SELECT
        "timelogs"."id"
    FROM
        "timelogs"
    WHERE
        "timelogs"."user_id" = 11165152
    LIMIT 500)
ModifyTable on public.timelogs  (cost=173.50..560.59 rows=0 width=0) (actual time=112.443..112.448 rows=0 loops=1)
   Buffers: shared hit=109 read=45 dirtied=19
   WAL: records=33 fpi=18 bytes=107720
   I/O Timings: read=106.223 write=0.000
   ->  Nested Loop  (cost=173.50..560.59 rows=112 width=38) (actual time=21.937..22.521 rows=3 loops=1)
         Buffers: shared hit=9 read=9
         I/O Timings: read=22.144 write=0.000
         ->  HashAggregate  (cost=173.07..174.19 rows=112 width=32) (actual time=15.496..15.509 rows=3 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared read=6
               I/O Timings: read=15.259 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.43..172.79 rows=112 width=32) (actual time=10.717..15.477 rows=3 loops=1)
                     Buffers: shared read=6
                     I/O Timings: read=15.259 write=0.000
                     ->  Limit  (cost=0.43..171.67 rows=112 width=4) (actual time=10.679..15.426 rows=3 loops=1)
                           Buffers: shared read=6
                           I/O Timings: read=15.259 write=0.000
                           ->  Index Scan using index_timelogs_on_user_id on public.timelogs timelogs_1  (cost=0.43..171.67 rows=112 width=4) (actual time=10.677..15.420 rows=3 loops=1)
                                 Index Cond: (timelogs_1.user_id = 11165152)
                                 Buffers: shared read=6
                                 I/O Timings: read=15.259 write=0.000
         ->  Index Scan using timelogs_pkey on public.timelogs  (cost=0.43..3.45 rows=1 width=10) (actual time=2.326..2.326 rows=1 loops=3)
               Index Cond: (timelogs.id = "ANY_subquery".id)
               Buffers: shared hit=9 read=3
               I/O Timings: read=6.885 write=0.000
Settings: effective_cache_size = '472585MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 113.982 ms  
  - planning: 1.357 ms  
  - execution: 112.625 ms  
    - I/O read: 106.223 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 109 (~872.00 KiB) from the buffer pool  
  - reads: 45 (~360.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 19 (~152.00 KiB)  
  - writes: 0 
"vulnerabilities"."author_id"

https://postgres.ai/console/gitlab/gitlab-production-sec/sessions/44647/commands/137004

UPDATE
    "vulnerabilities"
SET
    "author_id" = 1
WHERE ("vulnerabilities"."id") IN (
    SELECT
        "vulnerabilities"."id"
    FROM
        "vulnerabilities"
    WHERE
        "vulnerabilities"."author_id" = 11165152
    LIMIT 500)
ModifyTable on public.vulnerabilities  (cost=671.27..2469.45 rows=0 width=0) (actual time=38.699..38.702 rows=0 loops=1)
   Buffers: shared hit=94 read=60 dirtied=15
   WAL: records=26 fpi=14 bytes=67714
   I/O Timings: read=37.838 write=0.000
   ->  Nested Loop  (cost=671.27..2469.45 rows=500 width=46) (actual time=7.320..7.340 rows=2 loops=1)
         Buffers: shared hit=10 read=9
         I/O Timings: read=7.215 write=0.000
         ->  HashAggregate  (cost=670.70..675.70 rows=500 width=40) (actual time=5.624..5.630 rows=2 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=3 read=6
               I/O Timings: read=5.547 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..669.45 rows=500 width=40) (actual time=4.873..5.614 rows=2 loops=1)
                     Buffers: shared hit=3 read=6
                     I/O Timings: read=5.547 write=0.000
                     ->  Limit  (cost=0.57..664.45 rows=500 width=8) (actual time=4.859..5.598 rows=2 loops=1)
                           Buffers: shared hit=3 read=6
                           I/O Timings: read=5.547 write=0.000
                           ->  Index Scan using index_vulnerabilities_on_author_id on public.vulnerabilities vulnerabilities_1  (cost=0.57..3537.71 rows=2664 width=8) (actual time=4.857..5.595 rows=2 loops=1)
                                 Index Cond: (vulnerabilities_1.author_id = 11165152)
                                 Buffers: shared hit=3 read=6
                                 I/O Timings: read=5.547 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..3.59 rows=1 width=14) (actual time=0.850..0.850 rows=1 loops=2)
               Index Cond: (vulnerabilities.id = "ANY_subquery".id)
               Buffers: shared hit=7 read=3
               I/O Timings: read=1.669 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '338688MB', jit = 'off'
Time: 40.491 ms  
  - planning: 1.662 ms  
  - execution: 38.829 ms  
    - I/O read: 37.838 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 94 (~752.00 KiB) from the buffer pool  
  - reads: 60 (~480.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 15 (~120.00 KiB)  
  - writes: 0  
"vulnerabilities"."confirmed_by_id"

https://postgres.ai/console/gitlab/gitlab-production-sec/sessions/44647/commands/137005

UPDATE
    "vulnerabilities"
SET
    "confirmed_by_id" = 1
WHERE ("vulnerabilities"."id") IN (
    SELECT
        "vulnerabilities"."id"
    FROM
        "vulnerabilities"
    WHERE
        "vulnerabilities"."confirmed_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.vulnerabilities  (cost=597.03..2395.21 rows=0 width=0) (actual time=1.262..1.264 rows=0 loops=1)
   Buffers: shared hit=4 read=3
   I/O Timings: read=1.216 write=0.000
   ->  Nested Loop  (cost=597.03..2395.21 rows=500 width=46) (actual time=1.261..1.262 rows=0 loops=1)
         Buffers: shared hit=4 read=3
         I/O Timings: read=1.216 write=0.000
         ->  HashAggregate  (cost=596.46..601.46 rows=500 width=40) (actual time=1.260..1.261 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=4 read=3
               I/O Timings: read=1.216 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..595.21 rows=500 width=40) (actual time=1.254..1.255 rows=0 loops=1)
                     Buffers: shared hit=4 read=3
                     I/O Timings: read=1.216 write=0.000
                     ->  Limit  (cost=0.57..590.21 rows=500 width=8) (actual time=1.253..1.253 rows=0 loops=1)
                           Buffers: shared hit=4 read=3
                           I/O Timings: read=1.216 write=0.000
                           ->  Index Scan using index_vulnerabilities_on_confirmed_by_id on public.vulnerabilities vulnerabilities_1  (cost=0.57..1083.16 rows=918 width=8) (actual time=1.251..1.251 rows=0 loops=1)
                                 Index Cond: (vulnerabilities_1.confirmed_by_id = 11165152)
                                 Buffers: shared hit=4 read=3
                                 I/O Timings: read=1.216 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..3.59 rows=1 width=14) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (vulnerabilities.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: effective_cache_size = '338688MB', jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB'
Time: 3.040 ms  
  - planning: 1.655 ms  
  - execution: 1.385 ms  
    - I/O read: 1.216 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 4 (~32.00 KiB) from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0 
"vulnerabilities"."dismissed_by_id"

https://postgres.ai/console/gitlab/gitlab-production-sec/sessions/44647/commands/137006

UPDATE
    "vulnerabilities"
SET
    "dismissed_by_id" = 1
WHERE ("vulnerabilities"."id") IN (
    SELECT
        "vulnerabilities"."id"
    FROM
        "vulnerabilities"
    WHERE
        "vulnerabilities"."dismissed_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.vulnerabilities  (cost=763.71..2561.89 rows=0 width=0) (actual time=2.976..2.978 rows=0 loops=1)
   Buffers: shared hit=4 read=3
   I/O Timings: read=2.927 write=0.000
   ->  Nested Loop  (cost=763.71..2561.89 rows=500 width=46) (actual time=2.975..2.976 rows=0 loops=1)
         Buffers: shared hit=4 read=3
         I/O Timings: read=2.927 write=0.000
         ->  HashAggregate  (cost=763.14..768.14 rows=500 width=40) (actual time=2.974..2.975 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=4 read=3
               I/O Timings: read=2.927 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..761.89 rows=500 width=40) (actual time=2.969..2.970 rows=0 loops=1)
                     Buffers: shared hit=4 read=3
                     I/O Timings: read=2.927 write=0.000
                     ->  Limit  (cost=0.57..756.89 rows=500 width=8) (actual time=2.968..2.969 rows=0 loops=1)
                           Buffers: shared hit=4 read=3
                           I/O Timings: read=2.927 write=0.000
                           ->  Index Scan using index_vulnerabilities_on_dismissed_by_id on public.vulnerabilities vulnerabilities_1  (cost=0.57..1608.50 rows=1063 width=8) (actual time=2.967..2.967 rows=0 loops=1)
                                 Index Cond: (vulnerabilities_1.dismissed_by_id = 11165152)
                                 Buffers: shared hit=4 read=3
                                 I/O Timings: read=2.927 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..3.59 rows=1 width=14) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (vulnerabilities.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: jit = 'off', random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '338688MB'
Time: 4.749 ms  
  - planning: 1.653 ms  
  - execution: 3.096 ms  
    - I/O read: 2.927 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 4 (~32.00 KiB) from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0  
"vulnerabilities"."resolved_by_id"

https://postgres.ai/console/gitlab/gitlab-production-sec/sessions/44647/commands/137007

UPDATE
    "vulnerabilities"
SET
    "resolved_by_id" = 1
WHERE ("vulnerabilities"."id") IN (
    SELECT
        "vulnerabilities"."id"
    FROM
        "vulnerabilities"
    WHERE
        "vulnerabilities"."resolved_by_id" = 11165152
    LIMIT 500)
ModifyTable on public.vulnerabilities  (cost=431.81..2229.99 rows=0 width=0) (actual time=2.271..2.273 rows=0 loops=1)
   Buffers: shared hit=4 read=3
   I/O Timings: read=2.204 write=0.000
   ->  Nested Loop  (cost=431.81..2229.99 rows=500 width=46) (actual time=2.269..2.271 rows=0 loops=1)
         Buffers: shared hit=4 read=3
         I/O Timings: read=2.204 write=0.000
         ->  HashAggregate  (cost=431.24..436.24 rows=500 width=40) (actual time=2.268..2.270 rows=0 loops=1)
               Group Key: "ANY_subquery".id
               Buffers: shared hit=4 read=3
               I/O Timings: read=2.204 write=0.000
               ->  Subquery Scan on ANY_subquery  (cost=0.57..429.99 rows=500 width=40) (actual time=2.263..2.264 rows=0 loops=1)
                     Buffers: shared hit=4 read=3
                     I/O Timings: read=2.204 write=0.000
                     ->  Limit  (cost=0.57..424.99 rows=500 width=8) (actual time=2.262..2.262 rows=0 loops=1)
                           Buffers: shared hit=4 read=3
                           I/O Timings: read=2.204 write=0.000
                           ->  Index Scan using index_vulnerabilities_on_resolved_by_id on public.vulnerabilities vulnerabilities_1  (cost=0.57..935.15 rows=1101 width=8) (actual time=2.260..2.260 rows=0 loops=1)
                                 Index Cond: (vulnerabilities_1.resolved_by_id = 11165152)
                                 Buffers: shared hit=4 read=3
                                 I/O Timings: read=2.204 write=0.000
         ->  Index Scan using vulnerabilities_pkey on public.vulnerabilities  (cost=0.57..3.59 rows=1 width=14) (actual time=0.000..0.000 rows=0 loops=0)
               Index Cond: (vulnerabilities.id = "ANY_subquery".id)
               I/O Timings: read=0.000 write=0.000
Settings: random_page_cost = '1.5', seq_page_cost = '4', work_mem = '100MB', effective_cache_size = '338688MB', jit = 'off'
Time: 4.080 ms  
  - planning: 1.680 ms  
  - execution: 2.400 ms  
    - I/O read: 2.204 ms  
    - I/O write: 0.000 ms  
  
Shared buffers:  
  - hits: 4 (~32.00 KiB) from the buffer pool  
  - reads: 3 (~24.00 KiB) from the OS file cache, including disk I/O  
  - dirtied: 0  
  - writes: 0
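
The batched UPDATE pattern shown in the plans above can be sketched as a small SQL builder. This is a minimal illustration only; the `batched_reassign_sql` helper and its parameter names are hypothetical, not the actual service code:

```ruby
# Batch size used by the reassignment queries in this MR.
BATCH_SIZE = 500

# Builds one batched UPDATE statement mirroring the queries above:
# replace a placeholder user ID with the reassigned user ID, at most
# 500 rows at a time. Hypothetical helper, for illustration only.
def batched_reassign_sql(table, column, placeholder_user_id, reassigned_user_id)
  "UPDATE \"#{table}\" SET \"#{column}\" = #{reassigned_user_id} " \
    "WHERE (\"#{table}\".\"id\") IN " \
    "(SELECT \"#{table}\".\"id\" FROM \"#{table}\" " \
    "WHERE \"#{table}\".\"#{column}\" = #{placeholder_user_id} " \
    "LIMIT #{BATCH_SIZE})"
end

puts batched_reassign_sql("vulnerabilities", "confirmed_by_id", 11165152, 1)
```

In the real service, a statement like this would run in a loop until no rows remain, with a pause between batches to avoid database saturation.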

Notes:

This MR only introduces the new reassignment process. A subsequent MR will add throttling logic similar to the current reassign process.

Follow-up issues will modify importers to avoid creating placeholder references and will delete references that are no longer needed:

  1. Update importers to not create placeholder refe... (#575651)
  2. Delete placeholder references that are no longe... (#575649)

References

#521450

Screenshots or screen recordings


How to set up and validate locally

  1. Enable the user_mapping_direct_reassignment FF
  2. Stage a direct transfer migration of a group with projects on your localhost. Ensure the group and/or projects have contributions (MRs, issues, etc.).
  3. After the import, reassign some of a source user's contributions to a real user, and then, as that user, accept the contribution reassignment.
  4. During the contribution reassignment process, the user should become the owner of the contributions.
  5. Check log/importer.log and verify there are no occurrences of the log message Placeholder references used for model/attributes for models and attributes that the DirectReassign service should have reassigned directly.
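
For step 5, the log check can be scripted. A rough sketch (the path comes from the step above; matching only on the prefix of the log message is an assumption):

```ruby
# Scan the importer log for fallback-to-placeholder-reference messages.
# Returns the matching lines; an empty result means the DirectReassign
# service handled every model/attribute directly.
def placeholder_reference_lines(log_path)
  File.readlines(log_path).grep(/Placeholder references used for/)
end

# Only attempt the scan if the log exists in the current working directory.
if File.exist?("log/importer.log")
  matches = placeholder_reference_lines("log/importer.log")
  puts(matches.empty? ? "OK: no placeholder references used" : matches)
end
```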

MR acceptance checklist

Evaluate this MR against the MR acceptance checklist. It helps you analyze changes to reduce risks in quality, performance, reliability, security, and maintainability.

Edited by Rodrigo Tomonari
