
#issue-377831: Don't allow duplicate resource links in incidents

What does this MR do and why?


Fixes #377831. This is a rehash of !107310.

Raw SQL

> IncidentManagement::IssuableResourceLink.delete([4,5])
  IncidentManagement::IssuableResourceLink Delete All (0.9ms)  DELETE FROM "issuable_resource_links" WHERE "issuable_resource_links"."id" IN (4, 5)

Query plan

gitlabhq_development=# EXPLAIN DELETE FROM "issuable_resource_links" WHERE "issuable_resource_links"."id" IN (4, 5);
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Delete on issuable_resource_links  (cost=0.00..1.01 rows=0 width=0)
   ->  Seq Scan on issuable_resource_links  (cost=0.00..1.01 rows=1 width=6)
         Filter: (id = ANY ('{4,5}'::bigint[]))
(3 rows)
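
The MR doesn't show how IDs 4 and 5 were identified as duplicates. A minimal console sketch of that lookup, assuming a duplicate means a row sharing an (issue_id, link) pair with an older row (matching the unique index added in this MR), might look like this:

# Collect ids of duplicate rows, keeping the lowest id per (issue_id, link).
# The grouping criteria here are an assumption mirroring the unique index.
duplicate_ids = IncidentManagement::IssuableResourceLink
  .where.not(
    id: IncidentManagement::IssuableResourceLink
          .group(:issue_id, :link)
          .select('MIN(id)')
  )
  .pluck(:id)

# Same bulk delete as the raw SQL above.
IncidentManagement::IssuableResourceLink.delete(duplicate_ids)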

Notes

First attempt failed in ./spec/db/schema_spec.rb:204:
Duplicate index: index_unique_issue_id_and_link with ["index_issuable_resource_links_on_issue_id"]
index_unique_issue_id_and_link : [{:name=>"issue_id", :order=>:asc}, {:name=>"link", :order=>:asc}]
index_issuable_resource_links_on_issue_id : [{:name=>"issue_id", :order=>:asc}].
Consider dropping the indexes index_issuable_resource_links_on_issue_id

The index index_issuable_resource_links_on_issue_id is removed in favour of the more comprehensive index_unique_issue_id_and_link: the old single-column index is a prefix of the new composite one, so the query planner can reuse the composite index for issue_id-only lookups. See the examples pasted below:

gitlabhq_development=# \d issuable_resource_links
                                       Table "public.issuable_resource_links"
   Column   |           Type           | Collation | Nullable |                       Default
------------+--------------------------+-----------+----------+-----------------------------------------------------
 id         | bigint                   |           | not null | nextval('issuable_resource_links_id_seq'::regclass)
 issue_id   | bigint                   |           | not null |
 link_text  | text                     |           |          |
 link       | text                     |           | not null |
 link_type  | smallint                 |           | not null | 0
 created_at | timestamp with time zone |           | not null |
 updated_at | timestamp with time zone |           | not null |
Indexes:
    "issuable_resource_links_pkey" PRIMARY KEY, btree (id)
    "index_unique_issue_id_and_link" UNIQUE, btree (issue_id, link)
Check constraints:
    "check_67be6729db" CHECK (char_length(link) <= 2200)
    "check_b137147e0b" CHECK (char_length(link_text) <= 255)
Foreign-key constraints:
    "fk_rails_3f0ec6b1cf" FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE

gitlabhq_development=# EXPLAIN select * from issuable_resource_links where issue_id = 1 AND link = 'foo';
                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Index Scan using index_unique_issue_id_and_link on issuable_resource_links  (cost=0.15..2.17 rows=1 width=98)
   Index Cond: ((issue_id = 1) AND (link = 'foo'::text))
(2 rows)

gitlabhq_development=# EXPLAIN select * from issuable_resource_links where issue_id = 1;
                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Index Scan using index_unique_issue_id_and_link on issuable_resource_links  (cost=0.15..4.20 rows=3 width=98)
   Index Cond: (issue_id = 1)
(2 rows)
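
For reference, a migration performing this index swap with GitLab's non-blocking helpers could be sketched as below. The class name and migration version are illustrative; only the table and index names come from this MR, and the add_concurrent_index / remove_concurrent_index_by_name calls follow the standard helper pattern rather than the exact diff:

# frozen_string_literal: true

class AddUniqueIndexOnIssuableResourceLinks < Gitlab::Database::Migration[2.1]
  disable_ddl_transaction!

  UNIQUE_INDEX_NAME = 'index_unique_issue_id_and_link'
  OLD_INDEX_NAME = 'index_issuable_resource_links_on_issue_id'

  def up
    # Built concurrently, so the table is not locked against writes.
    add_concurrent_index :issuable_resource_links, [:issue_id, :link],
      unique: true, name: UNIQUE_INDEX_NAME

    # Redundant: the composite index above covers issue_id-only lookups.
    remove_concurrent_index_by_name :issuable_resource_links, OLD_INDEX_NAME
  end

  def down
    add_concurrent_index :issuable_resource_links, :issue_id, name: OLD_INDEX_NAME
    remove_concurrent_index_by_name :issuable_resource_links, UNIQUE_INDEX_NAME
  end
end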

Questions

  • I'm wondering whether the index creation in this MR's migration needs to be scheduled for a low-impact time.
  • The work implemented verifies that the URL is not a duplicate when the record is persisted (i.e. on #save). A unique index also exists at the database layer, so the database raises an exception if an insert violates the constraint; that exception propagates up through the application, where it is caught and handled gracefully. It would be more user-friendly to iterate through the provided URLs within the application and flag duplicates while the incident is not yet persisted, avoiding the case where a record passes validation and then fails on creation. I'm wondering whether that is a worthwhile endeavour, i.e. whether a record can be validated through the service independently of being created. It looks like it cannot, so it should be fine to rely on the persistence path to detect duplicates in both cases. A sketch of the two layers follows below.
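
A minimal sketch of the two layers just described, assuming the application-layer check is a standard Rails uniqueness validation scoped to the issue; the service method and its success/error helpers are illustrative names, not the actual code in this MR:

# app/models/incident_management/issuable_resource_link.rb
class IncidentManagement::IssuableResourceLink < ApplicationRecord
  # Application-layer check, backed by the unique DB index on (issue_id, link).
  validates :link, uniqueness: { scope: :issue_id }
end

# Hypothetical service-layer creation: the validation rejects most duplicates
# at #save, and rescuing ActiveRecord::RecordNotUnique covers concurrent
# inserts that race past the validation.
def create_resource_link(issue, params)
  link = IncidentManagement::IssuableResourceLink.new(params.merge(issue: issue))

  if link.save
    success(link)
  else
    error(link.errors.full_messages)
  end
rescue ActiveRecord::RecordNotUnique
  error(['Link already exists for this incident'])
end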

MR acceptance checklist

This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.

