
Add all necessary bigint indexes on int to bigint conversion

What does this MR do and why?

The reasoning for this MR can be found in #430139 (closed).

TL;DR -> Adds a helper method that creates bigint indexes matching the existing indexes on a given integer column; this will be used during the PK bigint conversion.
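
The sketch below illustrates the general idea only; it is not the helper added in this MR. The digest-based name generation and the regex-based column substitution are assumptions made for illustration, loosely modelled on the bigint_idx_* names visible in the validation steps further down.

    # Rough Ruby sketch (hypothetical, not GitLab's implementation): take an existing
    # index definition on the integer column, rewrite it to target the bigint column,
    # and give it a short digest-based name that stays within PostgreSQL's 63-char limit.
    require 'digest'

    def bigint_index_name(table, indexdef)
      # 20 hex characters, mirroring the bigint_idx_* names shown in the example below
      "bigint_idx_#{Digest::SHA256.hexdigest("#{table}\n#{indexdef}")[0, 20]}"
    end

    def bigint_index_definition(indexdef, int_column, bigint_column, new_name)
      # Swap the indexed column, then swap in the generated index name.
      rewritten = indexdef.gsub(/\b#{Regexp.escape(int_column)}\b/, bigint_column)
      rewritten.sub(/INDEX\s+\S+\s+ON/i, "INDEX #{new_name} ON")
    end

    indexdef = 'CREATE INDEX idx_temp_dummy_partial ON temp_dummy USING btree (token) WHERE token IS NOT NULL'
    name = bigint_index_name('temp_dummy', indexdef)
    puts bigint_index_definition(indexdef, 'token', 'token_convert_to_bigint', name)
    # => CREATE INDEX bigint_idx_<digest> ON temp_dummy USING btree (token_convert_to_bigint)
    #    WHERE token_convert_to_bigint IS NOT NULL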

How to set up and validate locally

  1. Create a new test table and add indexes.

    CREATE TABLE temp_dummy (name varchar(40), token integer, token_type text);
    
    CREATE INDEX "hash_idx_temp_dummy" on temp_dummy using hash (token);
    CREATE INDEX "idx_temp_dummy" on temp_dummy (token);
    CREATE INDEX "idx_temp_dummy_token_type" on temp_dummy (lower(token_type));
    CREATE INDEX "idx_temp_dummy_combined" on temp_dummy (token, lower(name::text)) WHERE token IS NOT NULL;
    CREATE INDEX "idx_temp_dummy_functional" on temp_dummy (token, lower(name::text));
    CREATE INDEX "idx_temp_dummy_ordered" on temp_dummy (token DESC NULLS LAST);
    CREATE INDEX "idx_temp_dummy_ordered_multiple" on temp_dummy (token DESC NULLS LAST, name DESC);
    CREATE INDEX "idx_temp_dummy_partial" on temp_dummy (token) WHERE token IS NOT NULL;
    CREATE UNIQUE INDEX "uniq_idx_temp_dummy" on temp_dummy (token);
  2. Add a bigint column for which the new indexes will be created.

    ALTER TABLE temp_dummy ADD COLUMN token_convert_to_bigint bigint;
  3. The table will now look like:

    gitlabhq_development=# \d temp_dummy
                                Table "public.temp_dummy"
             Column          |         Type          | Collation | Nullable | Default 
    -------------------------+-----------------------+-----------+----------+---------
     name                    | character varying(40) |           |          | 
     token                   | integer               |           |          | 
     token_type              | text                  |           |          | 
     token_convert_to_bigint | bigint                |           |          | 
    Indexes:
        "hash_idx_temp_dummy" hash (token)
        "idx_temp_dummy" btree (token)
        "idx_temp_dummy_combined" btree (token, lower(name::text)) WHERE token IS NOT NULL
        "idx_temp_dummy_functional" btree (token, lower(name::text))
        "idx_temp_dummy_ordered" btree (token DESC NULLS LAST)
        "idx_temp_dummy_ordered_multiple" btree (token DESC NULLS LAST, name DESC)
        "idx_temp_dummy_partial" btree (token) WHERE token IS NOT NULL
        "idx_temp_dummy_token_type" btree (lower(token_type))
        "uniq_idx_temp_dummy" UNIQUE, btree (token)
  4. Create a migration to add the bigint indexes.

    bundle exec rails g migration AddBigIntIndexesOnTempDummy

    # frozen_string_literal: true
    
    class AddBigIntIndexesOnTempDummy < Gitlab::Database::Migration[2.2]
      include Gitlab::Database::MigrationHelpers::ConvertToBigint
    
      milestone '16.6'
    
      disable_ddl_transaction!

      def change
        add_bigint_column_indexes('temp_dummy', 'token')
      end
    end
  5. Run the above migration, for example with bundle exec rails db:migrate.

  6. New indexes should have been created, matching the definitions of the existing indexes that use the specified column (a quick rails console check is sketched after this list).

     gitlabhq_development=# \d temp_dummy
                                 Table "public.temp_dummy"
              Column          |         Type          | Collation | Nullable | Default 
     -------------------------+-----------------------+-----------+----------+---------
      name                    | character varying(40) |           |          | 
      token                   | integer               |           |          | 
      token_type              | text                  |           |          | 
      token_convert_to_bigint | bigint                |           |          | 
     Indexes:
         "bigint_idx_019cc0d4609dd912802a" btree (token_convert_to_bigint)
         "bigint_idx_376c82f5bbdc8a631ebc" btree (token_convert_to_bigint, lower(name::text)) WHERE token_convert_to_bigint IS NOT NULL
         "bigint_idx_4f5a76c4d27335ae2d16" btree (token_convert_to_bigint DESC NULLS LAST, name DESC)
         "bigint_idx_5a5fccb9f0b8fa34bf52" btree (token_convert_to_bigint DESC NULLS LAST)
         "bigint_idx_5ecef3aedbb6694161bc" btree (token_convert_to_bigint, lower(name::text))
         "bigint_idx_67d42b8f12a9034be687" btree (token_convert_to_bigint) WHERE token_convert_to_bigint IS NOT NULL
         "bigint_idx_97093422f3543a89f4df" hash (token_convert_to_bigint)
         "bigint_idx_b1fd1671e1b3a8d38b27" UNIQUE, btree (token_convert_to_bigint)
         "hash_idx_temp_dummy" hash (token)
         "idx_temp_dummy" btree (token)
         "idx_temp_dummy_combined" btree (token, lower(name::text)) WHERE token IS NOT NULL
         "idx_temp_dummy_functional" btree (token, lower(name::text))
         "idx_temp_dummy_ordered" btree (token DESC NULLS LAST)
         "idx_temp_dummy_ordered_multiple" btree (token DESC NULLS LAST, name DESC)
         "idx_temp_dummy_partial" btree (token) WHERE token IS NOT NULL
         "idx_temp_dummy_token_type" btree (lower(token_type))
         "uniq_idx_temp_dummy" UNIQUE, btree (token)

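As a quick sanity check (hypothetical, not part of this MR), the counts of integer-column and bigint-column indexes can be compared from a rails console; the snippet assumes ActiveRecord::Base.connection is available there and that the table was set up as in the steps above.

    # bundle exec rails console
    conn = ActiveRecord::Base.connection
    defs = conn.select_rows("SELECT indexdef FROM pg_indexes WHERE tablename = 'temp_dummy'").flatten

    puts defs.grep(/\btoken\b/).size                    # indexes on the integer column (8 in this example)
    puts defs.grep(/\btoken_convert_to_bigint\b/).size  # bigint counterparts created by the migration (also 8)
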
MR acceptance checklist

This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.

Related to #430139 (closed)
