
Split `Explore topics` page into two tabs

Jonas Wälter requested to merge siemens/gitlab:explore-topics-page-tabs into master

What does this MR do and why?

This MR is the last step of the issue Explore topics: improve counter consistency (#351115). It splits the Explore topics page into two tabs:

  • Tab "All": topics ordered by number of assigned non-private projects
  • Tab "Your projects": topics ordered by number of assigned projects of which the current user is a member

(This approach was discussed in #351115 (comment 831936981))
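The two orderings can be sketched in plain Ruby. This is a simplified in-memory model for illustration only, not GitLab's actual implementation; the `topics` array and its `member_projects_count` field are hypothetical:

```ruby
# Simplified model of the two tab orderings (illustrative, not GitLab code).
topics = [
  { id: 1, name: "ruby",  total_projects_count: 5, member_projects_count: 2 },
  { id: 2, name: "rails", total_projects_count: 5, member_projects_count: 0 },
  { id: 3, name: "ci",    total_projects_count: 9, member_projects_count: 1 },
]

# Tab "All": order by number of assigned non-private projects, ties broken by id.
all_tab = topics.sort_by { |t| [-t[:total_projects_count], t[:id]] }

# Tab "Your projects": only topics assigned to projects the current user is a
# member of, ordered by that membership count.
your_projects_tab = topics
  .select { |t| t[:member_projects_count] > 0 }
  .sort_by { |t| [-t[:member_projects_count], t[:id]] }
```

The descending-count, ascending-id tie-break mirrors the `ORDER BY "topics"."total_projects_count" DESC, "topics"."id" ASC` clause in the query below.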

🛠 with ❤️ at Siemens

/cc @bufferoverflow

Screenshots

Before and after screenshots (images attached to the MR).

How to set up and validate locally

  1. Assign some topics to some projects (project settings)
  2. Visit Explore topics page: http://localhost:3000/explore/projects/topics
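For step 1, topics can also be assigned via the REST API instead of the project settings UI. A minimal sketch using Ruby's stdlib `Net::HTTP`, assuming a local GDK instance on port 3000 and a hypothetical project id `1`; replace `<your-token>` with a personal access token (the request itself is left commented out):

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical local instance and project id; adjust to your setup.
uri = URI("http://localhost:3000/api/v4/projects/1")

req = Net::HTTP::Put.new(uri)
req["PRIVATE-TOKEN"] = "<your-token>"   # personal access token
req["Content-Type"]  = "application/json"
req.body = { topics: %w[ruby rails gitlab] }.to_json

# Uncomment to actually send the request:
# Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
```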

Database

New query for top topics for user (using user id=120073 @DylanGriffith)

1st page

SELECT DISTINCT "topics"."total_projects_count" AS alias_0,
                "topics"."id"                   AS alias_1,
                "topics"."id"
FROM   "topics"
       LEFT OUTER JOIN "project_topics"
                    ON "project_topics"."topic_id" = "topics"."id"
WHERE  "project_topics"."project_id" IN (SELECT DISTINCT project_id
                                         FROM   "project_authorizations"
                                         WHERE
       "project_authorizations"."user_id" = 120073)
ORDER  BY "topics"."total_projects_count" DESC,
          "topics"."id" ASC
LIMIT  21 OFFSET 0
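In plain Ruby terms, the query's semantics are roughly the following. This is a toy in-memory model with made-up rows, purely to illustrate the join and ordering, not the actual implementation:

```ruby
# Toy in-memory model of the query above (illustrative data only).
project_authorizations = [{ user_id: 120_073, project_id: 10 },
                          { user_id: 120_073, project_id: 11 }]
project_topics = [{ project_id: 10, topic_id: 1 },
                  { project_id: 11, topic_id: 1 },
                  { project_id: 11, topic_id: 2 }]
topics = [{ id: 1, total_projects_count: 7 },
          { id: 2, total_projects_count: 3 }]

# Inner subquery: distinct project ids the user is authorized for.
user_project_ids = project_authorizations
  .select { |a| a[:user_id] == 120_073 }
  .map { |a| a[:project_id] }
  .uniq

# Join topics to project_topics, keep topics on the user's projects,
# order by total_projects_count DESC then id ASC, first page of 21.
result = topics
  .select do |t|
    project_topics.any? do |pt|
      pt[:topic_id] == t[:id] && user_project_ids.include?(pt[:project_id])
    end
  end
  .sort_by { |t| [-t[:total_projects_count], t[:id]] }
  .first(21)
```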
Query plan
 Limit  (cost=16880.62..16880.67 rows=21 width=24) (actual time=3267.519..3267.528 rows=21 loops=1)
   Buffers: shared hit=23401 read=4346 dirtied=668
   I/O Timings: read=3101.519 write=0.000
   ->  Sort  (cost=16880.62..16939.19 rows=23431 width=24) (actual time=3267.516..3267.521 rows=21 loops=1)
         Sort Key: topics.total_projects_count DESC, topics.id
         Sort Method: top-N heapsort  Memory: 27kB
         Buffers: shared hit=23401 read=4346 dirtied=668
         I/O Timings: read=3101.519 write=0.000
         ->  HashAggregate  (cost=16014.57..16248.88 rows=23431 width=24) (actual time=3267.149..3267.364 rows=294 loops=1)
               Group Key: topics.total_projects_count, topics.id, topics.id
               Buffers: shared hit=23398 read=4346 dirtied=668
               I/O Timings: read=3101.519 write=0.000
               ->  Hash Join  (cost=6143.58..15838.84 rows=23431 width=24) (actual time=866.921..3266.190 rows=620 loops=1)
                     Hash Cond: (project_topics.topic_id = topics.id)
                     Buffers: shared hit=23398 read=4346 dirtied=668
                     I/O Timings: read=3101.519 write=0.000
                     ->  Nested Loop  (cost=1.00..9634.75 rows=23431 width=8) (actual time=7.478..2405.640 rows=620 loops=1)
                           Buffers: shared hit=21363 read=3035 dirtied=485
                           I/O Timings: read=2326.704 write=0.000
                           ->  Unique  (cost=0.57..906.41 rows=6791 width=4) (actual time=4.688..645.918 rows=6767 loops=1)
                                 Buffers: shared hit=3052 read=1013 dirtied=478
                                 I/O Timings: read=602.691 write=0.000
                                 ->  Index Only Scan using project_authorizations_pkey on public.project_authorizations  (cost=0.57..889.20 rows=6886 width=4) (actual time=4.686..640.754 rows=6767 loops=1)
                                       Index Cond: (project_authorizations.user_id = 1)
                                       Heap Fetches: 1158
                                       Buffers: shared hit=3052 read=1013 dirtied=478
                                       I/O Timings: read=602.691 write=0.000
                           ->  Index Only Scan using index_project_topics_on_project_id_and_topic_id on public.project_topics  (cost=0.42..1.25 rows=3 width=16) (actual time=0.258..0.259 rows=0 loops=6767)
                                 Index Cond: (project_topics.project_id = project_authorizations.project_id)
                                 Heap Fetches: 49
                                 Buffers: shared hit=18311 read=2022 dirtied=7
                                 I/O Timings: read=1724.013 write=0.000
                     ->  Hash  (cost=4395.76..4395.76 rows=139745 width=16) (actual time=857.812..857.813 rows=117679 loops=1)
                           Buckets: 262144  Batches: 1  Memory Usage: 7565kB
                           Buffers: shared hit=2035 read=1311 dirtied=183
                           I/O Timings: read=774.815 write=0.000
                           ->  Index Only Scan using index_topics_total_projects_count on public.topics  (cost=0.42..4395.76 rows=139745 width=16) (actual time=2.089..820.656 rows=117679 loops=1)
                                 Heap Fetches: 6405
                                 Buffers: shared hit=2035 read=1311 dirtied=183
                                 I/O Timings: read=774.815 write=0.000
Time: 3.273 s
  - planning: 4.055 ms
  - execution: 3.269 s
    - I/O read: 3.102 s
    - I/O write: 0.000 ms

Shared buffers:
  - hits: 23401 (~182.80 MiB) from the buffer pool
  - reads: 4346 (~34.00 MiB) from the OS file cache, including disk I/O
  - dirtied: 668 (~5.20 MiB)
  - writes: 0
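The MiB figures above follow directly from PostgreSQL's 8 KiB buffer page size; a quick Ruby check:

```ruby
# PostgreSQL buffer pages are 8 KiB; convert plan buffer counts to MiB.
PAGE_BYTES = 8 * 1024

def buffers_to_mib(pages)
  (pages * PAGE_BYTES) / (1024.0 * 1024)
end

hits    = buffers_to_mib(23_401) # ~182.8 MiB from the buffer pool
reads   = buffers_to_mib(4_346)  # ~34.0 MiB from the OS file cache / disk
dirtied = buffers_to_mib(668)    # ~5.2 MiB
```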

⚠️ The query processes too much data to return a relatively small number of rows. Reduce data cardinality as early as possible during execution, using one or several of the following techniques: new indexes, partitioning, query rewriting, denormalization.

MR acceptance checklist

This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.

Edited by Dylan Griffith
