rspec-ee migration pg13 single-db-ci-connection 2/2
Passed. Started by @fabiopitino (Fabio Pitino)
Running with gitlab-runner 16.1.0~beta.5.gf131a6a2 (f131a6a2)
 on blue-2.shared-gitlab-org.runners-manager.gitlab.com/default NL4gfoBe, system ID: s_74c3e1316164
 feature flags: FF_NETWORK_PER_BUILD:true, FF_USE_FASTZIP:true, FF_USE_IMPROVED_URL_MASKING:true
Using Docker executor with image registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13:rubygems-3.4-git-2.36-lfs-2.9-chrome-113-yarn-1.22-graphicsmagick-1.3.36 ...
Starting service registry.gitlab.com/gitlab-org/gitlab-build-images:postgres-13-pgvector-0.4.1 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlab-org/gitlab-build-images:postgres-13-pgvector-0.4.1 ...
Using docker image sha256:73740c557807c4bc5d692f263c0e35454270600da4b22bbe952331411426c8b5 for registry.gitlab.com/gitlab-org/gitlab-build-images:postgres-13-pgvector-0.4.1 with digest registry.gitlab.com/gitlab-org/gitlab-build-images@sha256:3174001f839c42e299ac06a42f8ded446edfcb33b0eb820874749a3f53eb799c ...
WARNING: Service registry.gitlab.com/gitlab-org/gitlab-build-images:redis-cluster-6.2.12 is already created. Ignoring.
WARNING: Service registry.gitlab.com/gitlab-org/gitlab-build-images:redis-cluster-6.2.12 is already created. Ignoring.
Starting service registry.gitlab.com/gitlab-org/gitlab-build-images:redis-cluster-6.2.12 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlab-org/gitlab-build-images:redis-cluster-6.2.12 ...
Using docker image sha256:a9a90ece30d9630d694ab1997cd103ea8ec729789451b983a75c7b58b0062d45 for registry.gitlab.com/gitlab-org/gitlab-build-images:redis-cluster-6.2.12 with digest registry.gitlab.com/gitlab-org/gitlab-build-images@sha256:7ef36177d5d0bc554fbb63d8210ae751bcc538bea7905b51d078d9ab90a755fa ...
Starting service redis:6.2-alpine ...
Pulling docker image redis:6.2-alpine ...
Using docker image sha256:85fd7bd884b6493c8eb6f4dffbe5406d97cce56aff84f1580a5eb5b9d841f158 for redis:6.2-alpine with digest redis@sha256:87c44d5d9f472e767c8737f4130c765d77bdc95c7472d6427cfc9d4632f12da6 ...
Starting service elasticsearch:7.17.6 ...
Pulling docker image elasticsearch:7.17.6 ...
Using docker image sha256:5fad10241ffd65d817ed0ddfaf6e87eee1f7dc2a7db33db1047835560ea71fda for elasticsearch:7.17.6 with digest elasticsearch@sha256:6c128de5d01c0c130a806022d6bd99b3e4c27a9af5bfc33b6b81861ae117d028 ...
WARNING: Service registry.gitlab.com/gitlab-org/gitlab-build-images:zoekt-ci-image-1.0 is already created. Ignoring.
WARNING: Service registry.gitlab.com/gitlab-org/gitlab-build-images:zoekt-ci-image-1.0 is already created. Ignoring.
Starting service registry.gitlab.com/gitlab-org/gitlab-build-images:zoekt-ci-image-1.0 ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlab-org/gitlab-build-images:zoekt-ci-image-1.0 ...
Using docker image sha256:4777ec1fa89def7d692d4979d05cb05234df25da1c6a3f67a564a433ec5ba1c8 for registry.gitlab.com/gitlab-org/gitlab-build-images:zoekt-ci-image-1.0 with digest registry.gitlab.com/gitlab-org/gitlab-build-images@sha256:80c0cee4566aefe4f1f287e1091263e08b0ebc41ed3dc4e76930df3634ccb9aa ...
Waiting for services to be up and running (timeout 30 seconds)...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13:rubygems-3.4-git-2.36-lfs-2.9-chrome-113-yarn-1.22-graphicsmagick-1.3.36 ...
Using docker image sha256:61b59025d0d646cd177f654d8f81df859675be528f37dcc2ce6f39a49c7a5dd9 for registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13:rubygems-3.4-git-2.36-lfs-2.9-chrome-113-yarn-1.22-graphicsmagick-1.3.36 with digest registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13@sha256:25367d41b1034f1ecacfc9cb8eebc70cb30c6fdade3781cf295488255bf61614 ...
Running on runner-nl4gfobe-project-278964-concurrent-0 via runner-nl4gfobe-shared-gitlab-org-1685686947-e3808aa9...
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/gitlab-org/gitlab/.git/
Created fresh repository.
remote: Enumerating objects: 139970, done.
remote: Counting objects: 100% (139970/139970), done.
remote: Compressing objects: 100% (94736/94736), done.
remote: Total 139970 (delta 61223), reused 92035 (delta 39894), pack-reused 0
Receiving objects: 100% (139970/139970), 123.36 MiB | 30.63 MiB/s, done.
Resolving deltas: 100% (61223/61223), done.
 * [new ref]         refs/pipelines/887306308 -> refs/pipelines/887306308
Checking out 95754c79 as detached HEAD (ref is refs/merge-requests/122015/merge)...
Skipping Git submodules setup
$ git remote set-url origin "${CI_REPOSITORY_URL}"
Checking cache for ruby-gems-debian-bullseye-ruby-3.0-16...
Downloading cache.zip from https://storage.googleapis.com/gitlab-com-runners-cache/project/278964/ruby-gems-debian-bullseye-ruby-3.0-16
Successfully extracted cache
Downloading artifacts for compile-test-assets (4400964016)...
Downloading artifacts from coordinator... ok host=storage.googleapis.com id=4400964016 responseStatus=200 OK token=64_Dwg5s
Downloading artifacts for detect-tests (4400964025)...
Downloading artifacts from coordinator... ok host=storage.googleapis.com id=4400964025 responseStatus=200 OK token=64_Dwg5s
Downloading artifacts for retrieve-tests-metadata (4400964028)...
Downloading artifacts from coordinator... ok host=storage.googleapis.com id=4400964028 responseStatus=200 OK token=64_Dwg5s
Downloading artifacts for setup-test-env (4400964019)...
Downloading artifacts from coordinator... ok host=storage.googleapis.com id=4400964019 responseStatus=200 OK token=64_Dwg5s
Using docker image sha256:61b59025d0d646cd177f654d8f81df859675be528f37dcc2ce6f39a49c7a5dd9 for registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13:rubygems-3.4-git-2.36-lfs-2.9-chrome-113-yarn-1.22-graphicsmagick-1.3.36 with digest registry.gitlab.com/gitlab-org/gitlab-build-images/debian-bullseye-ruby-3.0.patched-golang-1.19-rust-1.65-node-18.16-postgresql-13@sha256:25367d41b1034f1ecacfc9cb8eebc70cb30c6fdade3781cf295488255bf61614 ...
$ echo $FOSS_ONLY
$ [ "$FOSS_ONLY" = "1" ] && rm -rf ee/ qa/spec/ee/ qa/qa/specs/features/ee/ qa/qa/ee/ qa/qa/ee.rb
$ export GOPATH=$CI_PROJECT_DIR/.go
$ mkdir -p $GOPATH
$ source scripts/utils.sh
$ source scripts/prepare_build.sh
Using decomposed database config (config/database.yml.postgresql)
Geo DB will be set up.
Embedding DB will be set up.
$ source ./scripts/rspec_helpers.sh
$ run_timed_command "gem install knapsack --no-document"
$ gem install knapsack --no-document
Successfully installed knapsack-4.0.0
1 gem installed
$ echo -e "\e[0Ksection_start:`date +%s`:gitaly-test-spawn[collapsed=true]\r\e[0KStarting Gitaly"
==> 'gem install knapsack --no-document' succeeded in 1 seconds.
$ section_start "gitaly-test-spawn" "Spawning Gitaly"; scripts/gitaly-test-spawn; section_end "gitaly-test-spawn"
$ echo -e "\e[0Ksection_end:`date +%s`:gitaly-test-spawn\r\e[0K"
$ rspec_paralellized_job "--tag ~quarantine --tag ~zoekt"
SKIP_FLAKY_TESTS_AUTOMATICALLY: 
RETRY_FAILED_TESTS_IN_NEW_PROCESS: true
KNAPSACK_GENERATE_REPORT: true
FLAKY_RSPEC_GENERATE_REPORT: true
KNAPSACK_TEST_FILE_PATTERN: {ee/}spec/{migrations}{,/**/}*_spec.rb
KNAPSACK_LOG_LEVEL: debug
KNAPSACK_REPORT_PATH: knapsack/rspec-ee_migration_pg13_single-db-ci-connection_2_2_report.json
FLAKY_RSPEC_SUITE_REPORT_PATH: rspec/flaky/report-suite.json
FLAKY_RSPEC_REPORT_PATH: rspec/flaky/all_rspec-ee_migration_pg13_single-db-ci-connection_2_2_report.json
NEW_FLAKY_RSPEC_REPORT_PATH: rspec/flaky/new_rspec-ee_migration_pg13_single-db-ci-connection_2_2_report.json
SKIPPED_TESTS_REPORT_PATH: rspec/skipped_tests_rspec-ee_migration_pg13_single-db-ci-connection_2_2.txt
CRYSTALBALL: 
RSPEC_TESTS_MAPPING_ENABLED: 
RSPEC_TESTS_FILTER_FILE: 
Knapsack report generator started!
warning: parser/current is loading parser/ruby30, which recognizes 3.0.5-compliant syntax, but you are running 3.0.6.
Run options: exclude {:quarantine=>true, :zoekt=>true}
Test environment set up in 0.500745983 seconds
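[Note] The KNAPSACK_* variables above control how this job, node 2/2, selects its share of the ee/spec/migrations files using previously recorded timings. The following Ruby sketch only illustrates that timing-based split; it is not the knapsack gem's actual implementation, and the report path is a placeholder (the real timing report is fetched as an artifact of retrieve-tests-metadata).

  require 'json'

  # Illustration only: split spec files across parallel nodes using a recorded
  # timing report of the form { "spec file" => seconds }. Placeholder path below.
  report     = JSON.parse(File.read('knapsack/report.json'))
  node_total = Integer(ENV.fetch('CI_NODE_TOTAL', '2'))
  node_index = Integer(ENV.fetch('CI_NODE_INDEX', '2')) - 1 # GitLab CI node index is 1-based

  # Greedy balancing: hand the slowest remaining file to the least-loaded node.
  nodes = Array.new(node_total) { { time: 0.0, files: [] } }
  report.sort_by { |_file, seconds| -seconds }.each do |file, seconds|
    target = nodes.min_by { |n| n[:time] }
    target[:files] << file
    target[:time] += seconds
  end

  puts nodes[node_index][:files] # the spec files this node runs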
UpdateCanCreateGroupApplicationSetting
 # order random
 when the setting currently is set to `true` in the configuration file
 behaves like runs the migration successfully
main: == [advisory_lock_connection] object_id: 7191440, pg_backend_pid: 170
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrating ===========
main: -- execute("UPDATE application_settings SET can_create_group = true")
main: -> 0.0025s
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrated (0.0034s) ==
main: == [advisory_lock_connection] object_id: 7191440, pg_backend_pid: 170
 runs the migration successfully
 when the setting is not present in the configuration file
 behaves like runs the migration successfully
main: == [advisory_lock_connection] object_id: 7582760, pg_backend_pid: 174
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrating ===========
main: -- execute("UPDATE application_settings SET can_create_group = true")
main: -> 0.0025s
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrated (0.0031s) ==
main: == [advisory_lock_connection] object_id: 7582760, pg_backend_pid: 174
 runs the migration successfully
 when the setting currently is set to a non-boolean value in the configuration file
 behaves like runs the migration successfully
main: == [advisory_lock_connection] object_id: 8016000, pg_backend_pid: 177
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrating ===========
main: -- execute("UPDATE application_settings SET can_create_group = true")
main: -> 0.0041s
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrated (0.0050s) ==
main: == [advisory_lock_connection] object_id: 8016000, pg_backend_pid: 177
 runs the migration successfully
 when the setting currently is set to `false` in the configuration file
 behaves like runs the migration successfully
main: == [advisory_lock_connection] object_id: 8472080, pg_backend_pid: 180
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrating ===========
main: -- execute("UPDATE application_settings SET can_create_group = false")
main: -> 0.0025s
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrated (0.0031s) ==
main: == [advisory_lock_connection] object_id: 8472080, pg_backend_pid: 180
 runs the migration successfully
 when the setting currently is set to `nil` in the configuration file
 behaves like runs the migration successfully
main: == [advisory_lock_connection] object_id: 8659340, pg_backend_pid: 183
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrating ===========
main: -- execute("UPDATE application_settings SET can_create_group = true")
main: -> 0.0025s
main: == 20220901092853 UpdateCanCreateGroupApplicationSetting: migrated (0.0031s) ==
main: == [advisory_lock_connection] object_id: 8659340, pg_backend_pid: 183
 runs the migration successfully
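[Note] The doc-formatted output above has the shape of a GitLab migration spec: each context stubs a configuration value, then a shared example calls migrate!, which prints the "migrating ... migrated" lines. A condensed sketch of that shape follows; it is not the real ee/spec/migrations file, and the stubbed setting name and the shared-example body are assumptions.

  # frozen_string_literal: true
  # Condensed sketch only; details inside the shared example are assumptions.
  require 'spec_helper'
  require_migration!

  RSpec.describe UpdateCanCreateGroupApplicationSetting do
    let(:application_settings) { table(:application_settings) }

    shared_examples 'runs the migration successfully' do |expected|
      it 'runs the migration successfully' do
        application_settings.create!

        migrate! # emits the "migrating ... migrated" lines seen in the log

        expect(application_settings.first.can_create_group).to eq(expected)
      end
    end

    context 'when the setting currently is set to `true` in the configuration file' do
      before do
        # Hypothetical stub; the real spec may read the gitlab.yml default differently.
        allow(Gitlab.config.gitlab).to receive(:default_can_create_group).and_return(true)
      end

      it_behaves_like 'runs the migration successfully', true
    end
  end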
CleanupOrphansApprovalProjectRules
main: == [advisory_lock_connection] object_id: 29203320, pg_backend_pid: 214
main: == 20220411173544 CleanupOrphansApprovalProjectRules: migrating ===============
main: == 20220411173544 CleanupOrphansApprovalProjectRules: migrated (0.0530s) ======
main: == [advisory_lock_connection] object_id: 29203320, pg_backend_pid: 214
 deletes only scan_finding rule from orphan project
 with an existing security orchestration project
main: == [advisory_lock_connection] object_id: 29649380, pg_backend_pid: 216
main: == 20220411173544 CleanupOrphansApprovalProjectRules: migrating ===============
main: == 20220411173544 CleanupOrphansApprovalProjectRules: migrated (0.0103s) ======
main: == [advisory_lock_connection] object_id: 29649380, pg_backend_pid: 216
 does not delete scan_finding rules
ScheduleDeleteInvalidEpicIssuesRevised
 # order random
 #down
main: == [advisory_lock_connection] object_id: 54306840, pg_backend_pid: 246
main: == 20220128103042 ScheduleDeleteInvalidEpicIssuesRevised: migrating ===========
main: == 20220128103042 ScheduleDeleteInvalidEpicIssuesRevised: migrated (0.0512s) ==
main: == [advisory_lock_connection] object_id: 54306840, pg_backend_pid: 246
 deletes all batched migration records
 #up
main: == [advisory_lock_connection] object_id: 54462640, pg_backend_pid: 249
main: == 20220128103042 ScheduleDeleteInvalidEpicIssuesRevised: migrating ===========
main: == 20220128103042 ScheduleDeleteInvalidEpicIssuesRevised: migrated (0.0474s) ==
main: == [advisory_lock_connection] object_id: 54462640, pg_backend_pid: 249
 schedules background jobs for each batch of epics
AsyncBuildTraceExpireAtIndex
 # order random
 #up
main: == [advisory_lock_connection] object_id: 79139120, pg_backend_pid: 280
main: == 20220224000000 AsyncBuildTraceExpireAtIndex: migrating =====================
main: == 20220224000000 AsyncBuildTraceExpireAtIndex: migrated (0.0003s) ============
main: == [advisory_lock_connection] object_id: 79139120, pg_backend_pid: 280
 sets up a delayed concurrent index creation
 #down
 removes an index
AddIdColumnToPackageMetadataJoinTable
 # order random
 when table is up to date
main: == [advisory_lock_connection] object_id: 97261700, pg_backend_pid: 311
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrating ============
main: -- quote_table_name(:pm_package_version_licenses)
main: -> 0.0000s
main: -- quote_column_name(:pm_package_version_licenses_pkey)
main: -> 0.0000s
main: -- execute("ALTER TABLE \"pm_package_version_licenses\" DROP CONSTRAINT \"pm_package_version_licenses_pkey\" CASCADE\n")
main: -> 0.0018s
main: -- add_column(:pm_package_version_licenses, :id, :primary_key)
main: -> 0.0034s
main: -- view_exists?(:postgres_partitions)
main: -> 0.0016s
main: -- index_exists?(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0048s
main: -- add_index(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0018s
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrated (0.0475s) ===
main: == [advisory_lock_connection] object_id: 97261700, pg_backend_pid: 311
 updates the primary key of the table
 when table is still partitioned
main: == [advisory_lock_connection] object_id: 97642720, pg_backend_pid: 313
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrating ============
main: -- drop_table(:pm_package_version_licenses, {:force=>:cascade})
main: -> 0.0037s
main: -- drop_table(:pm_package_versions, {:force=>:cascade})
main: -> 0.0024s
main: -- drop_table(:pm_packages, {:force=>:cascade})
main: -> 0.0023s
main: -- create_table(:pm_packages)
main: -- quote_column_name(:name)
main: -> 0.0000s
main: -> 0.0050s
main: -- create_table(:pm_package_versions)
main: -- quote_column_name(:version)
main: -> 0.0000s
main: -> 0.0062s
main: -- create_table(:pm_package_version_licenses, {:primary_key=>[:pm_package_version_id, :pm_license_id]})
main: -> 0.0052s
main: -- quote_table_name(:pm_package_version_licenses)
main: -> 0.0000s
main: -- quote_column_name(:pm_package_version_licenses_pkey)
main: -> 0.0000s
main: -- execute("ALTER TABLE \"pm_package_version_licenses\" DROP CONSTRAINT \"pm_package_version_licenses_pkey\" CASCADE\n")
main: -> 0.0012s
main: -- add_column(:pm_package_version_licenses, :id, :primary_key)
main: -> 0.0028s
main: -- view_exists?(:postgres_partitions)
main: -> 0.0019s
main: -- index_exists?(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0049s
main: -- add_index(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0016s
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrated (0.0715s) ===
main: == [advisory_lock_connection] object_id: 97642720, pg_backend_pid: 313
 unpartitions the table
main: == [advisory_lock_connection] object_id: 98097560, pg_backend_pid: 315
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrating ============
main: -- drop_table(:pm_package_version_licenses, {:force=>:cascade})
main: -> 0.0035s
main: -- drop_table(:pm_package_versions, {:force=>:cascade})
main: -> 0.0022s
main: -- drop_table(:pm_packages, {:force=>:cascade})
main: -> 0.0021s
main: -- create_table(:pm_packages)
main: -- quote_column_name(:name)
main: -> 0.0000s
main: -> 0.0065s
main: -- create_table(:pm_package_versions)
main: -- quote_column_name(:version)
main: -> 0.0000s
main: -> 0.0084s
main: -- create_table(:pm_package_version_licenses, {:primary_key=>[:pm_package_version_id, :pm_license_id]})
main: -> 0.0062s
main: -- quote_table_name(:pm_package_version_licenses)
main: -> 0.0000s
main: -- quote_column_name(:pm_package_version_licenses_pkey)
main: -> 0.0000s
main: -- execute("ALTER TABLE \"pm_package_version_licenses\" DROP CONSTRAINT \"pm_package_version_licenses_pkey\" CASCADE\n")
main: -> 0.0015s
main: -- add_column(:pm_package_version_licenses, :id, :primary_key)
main: -> 0.0034s
main: -- view_exists?(:postgres_partitions)
main: -> 0.0021s
main: -- index_exists?(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0054s
main: -- add_index(:pm_package_version_licenses, [:pm_package_version_id, :pm_license_id], {:unique=>true, :name=>:i_pm_package_version_licenses_join_ids, :algorithm=>:concurrently})
main: -> 0.0022s
main: == 20230127155217 AddIdColumnToPackageMetadataJoinTable: migrated (0.0791s) ===
main: == [advisory_lock_connection] object_id: 98097560, pg_backend_pid: 315
 updates the primary key of the table
FixApprovalProjectRulesWithoutProtectedBranches
 # order random
 #up
main: == [advisory_lock_connection] object_id: 109076040, pg_backend_pid: 343
main: == 20221130192239 FixApprovalProjectRulesWithoutProtectedBranches: migrating ==
main: == 20221130192239 FixApprovalProjectRulesWithoutProtectedBranches: migrated (0.0506s) 
main: == [advisory_lock_connection] object_id: 109076040, pg_backend_pid: 343
 schedules background migration for project approval rules
BackfillNamespaceLdapSettings
 # order random
 #down
 does not schedule background migration
 #up
main: == [advisory_lock_connection] object_id: 121508100, pg_backend_pid: 373
main: == 20230113201308 BackfillNamespaceLdapSettings: migrating ====================
main: == 20230113201308 BackfillNamespaceLdapSettings: migrated (0.0566s) ===========
main: == [advisory_lock_connection] object_id: 121508100, pg_backend_pid: 373
 schedules background migration
The application_settings (main) table has 1261 columns.
Recreating the database
Dropped database 'gitlabhq_test'
Dropped database 'gitlabhq_geo_test'
Dropped database 'gitlabhq_embedding_test'
Created database 'gitlabhq_test'
Created database 'gitlabhq_geo_test'
Created database 'gitlabhq_embedding_test'
main: == [advisory_lock_connection] object_id: 121770720, pg_backend_pid: 388
main: == [advisory_lock_connection] object_id: 121770720, pg_backend_pid: 388
ci: == [advisory_lock_connection] object_id: 121863740, pg_backend_pid: 390
ci: == [advisory_lock_connection] object_id: 121863740, pg_backend_pid: 390
embedding: == [advisory_lock_connection] object_id: 121870620, pg_backend_pid: 392
embedding: == [advisory_lock_connection] object_id: 121870620, pg_backend_pid: 392
geo: == [advisory_lock_connection] object_id: 121878080, pg_backend_pid: 394
geo: == [advisory_lock_connection] object_id: 121878080, pg_backend_pid: 394
ci: == [advisory_lock_connection] object_id: 121955500, pg_backend_pid: 396
ci: == [advisory_lock_connection] object_id: 121955500, pg_backend_pid: 396
Databases re-creation done in 6.434500712000045
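[Note] Migration specs deliberately move the schema up and down, so at this point the harness drops and recreates every configured test database (main, geo, embedding) instead of rolling each change back. A minimal sketch of that step, assuming plain ActiveRecord database tasks rather than GitLab's actual helper:

  require 'active_record'
  require 'benchmark'

  elapsed = Benchmark.realtime do
    # One entry per configured test database.
    ActiveRecord::Base.configurations.configs_for(env_name: 'test').each do |db_config|
      ActiveRecord::Tasks::DatabaseTasks.drop(db_config)    # "Dropped database 'gitlabhq_test'"
      ActiveRecord::Tasks::DatabaseTasks.create(db_config)  # "Created database 'gitlabhq_test'"
      ActiveRecord::Tasks::DatabaseTasks.load_schema(db_config) # assumed: schema is reloaded afterwards
    end
  end

  puts "Databases re-creation done in #{elapsed}"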
SyncSecurityPolicyRuleSchedulesThatMayHaveBeenDeletedByABug
 # order random
 #up
main: == [advisory_lock_connection] object_id: 124572240, pg_backend_pid: 403
main: == 20230310213308 SyncSecurityPolicyRuleSchedulesThatMayHaveBeenDeletedByABug: migrating 
main: == 20230310213308 SyncSecurityPolicyRuleSchedulesThatMayHaveBeenDeletedByABug: migrated (0.0095s) 
main: == [advisory_lock_connection] object_id: 124572240, pg_backend_pid: 403
 bulk enqueues one SyncScanPoliciesWorker for each unique policy configuration id
MigrateCiJobArtifactsToSeparateRegistry
 #up
geo: == [advisory_lock_connection] object_id: 128957040, pg_backend_pid: 433
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrating ==========
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrated (0.0228s) =
geo: == [advisory_lock_connection] object_id: 128957040, pg_backend_pid: 433
 migrates all job artifacts to its own data table
geo: == [advisory_lock_connection] object_id: 129024220, pg_backend_pid: 439
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrating ==========
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrated (0.0262s) =
geo: == [advisory_lock_connection] object_id: 129024220, pg_backend_pid: 439
 creates a new artifact with the trigger
geo: == [advisory_lock_connection] object_id: 129432820, pg_backend_pid: 446
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrating ==========
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrated (0.0273s) =
geo: == [advisory_lock_connection] object_id: 129432820, pg_backend_pid: 446
 updates a new artifact with the trigger
geo: == [advisory_lock_connection] object_id: 129850820, pg_backend_pid: 452
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrating ==========
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrated (0.0268s) =
geo: == [advisory_lock_connection] object_id: 129850820, pg_backend_pid: 452
 creates a new artifact using the next ID
 #down
geo: == [advisory_lock_connection] object_id: 130271320, pg_backend_pid: 458
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrating ==========
== 20180322062741 MigrateCiJobArtifactsToSeparateRegistry: migrated (0.0262s) =
geo: == [advisory_lock_connection] object_id: 130271320, pg_backend_pid: 458
 rolls back data properly
MigrateLfsObjectsToSeparateRegistry
 #up
geo: == [advisory_lock_connection] object_id: 131205060, pg_backend_pid: 499
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrating ==============
-- execute("LOCK TABLE file_registry IN EXCLUSIVE MODE")
 -> 0.0020s
-- execute("INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\nSELECT created_at, retry_at, file_id, bytes, retry_count, missing_on_primary, success, sha256::bytea\nFROM file_registry WHERE file_type = 'lfs'\n")
 -> 0.0027s
-- execute("CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()\nRETURNS trigger AS\n$BODY$\nBEGIN\n IF (TG_OP = 'UPDATE') THEN\n UPDATE lfs_object_registry\n SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =\n (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)\n WHERE lfs_object_id = NEW.file_id;\n ELSEIF (TG_OP = 'INSERT') THEN\n INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\n VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);\nEND IF;\nRETURN NEW;\nEND;\n$BODY$\nLANGUAGE 'plpgsql'\nVOLATILE;\n")
 -> 0.0019s
-- execute("CREATE TRIGGER replicate_lfs_object_registry\nAFTER INSERT OR UPDATE ON file_registry\nFOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();\n")
 -> 0.0012s
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrated (0.0187s) =====
geo: == [advisory_lock_connection] object_id: 131205060, pg_backend_pid: 499
 migrates all file registries for LFS objects to its own data table
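[Note] For readability, the trigger installed by the escaped execute("...") calls above is reproduced here with the \n escapes expanded. The SQL is taken verbatim from the log output; wrapping it in migration-style execute(<<~SQL) heredocs is an assumption about the original source layout.

  execute(<<~SQL)
    CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()
    RETURNS trigger AS
    $BODY$
    BEGIN
      IF (TG_OP = 'UPDATE') THEN
        UPDATE lfs_object_registry
        SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =
            (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)
        WHERE lfs_object_id = NEW.file_id;
      ELSEIF (TG_OP = 'INSERT') THEN
        INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)
        VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);
      END IF;
      RETURN NEW;
    END;
    $BODY$
    LANGUAGE 'plpgsql'
    VOLATILE;
  SQL

  execute(<<~SQL)
    CREATE TRIGGER replicate_lfs_object_registry
    AFTER INSERT OR UPDATE ON file_registry
    FOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();
  SQL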
geo: == [advisory_lock_connection] object_id: 131272280, pg_backend_pid: 505
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrating ==============
-- execute("LOCK TABLE file_registry IN EXCLUSIVE MODE")
 -> 0.0019s
-- execute("INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\nSELECT created_at, retry_at, file_id, bytes, retry_count, missing_on_primary, success, sha256::bytea\nFROM file_registry WHERE file_type = 'lfs'\n")
 -> 0.0027s
-- execute("CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()\nRETURNS trigger AS\n$BODY$\nBEGIN\n IF (TG_OP = 'UPDATE') THEN\n UPDATE lfs_object_registry\n SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =\n (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)\n WHERE lfs_object_id = NEW.file_id;\n ELSEIF (TG_OP = 'INSERT') THEN\n INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\n VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);\nEND IF;\nRETURN NEW;\nEND;\n$BODY$\nLANGUAGE 'plpgsql'\nVOLATILE;\n")
 -> 0.0020s
-- execute("CREATE TRIGGER replicate_lfs_object_registry\nAFTER INSERT OR UPDATE ON file_registry\nFOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();\n")
 -> 0.0012s
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrated (0.0184s) =====
geo: == [advisory_lock_connection] object_id: 131272280, pg_backend_pid: 505
 creates a new lfs object registry with the trigger
geo: == [advisory_lock_connection] object_id: 131692020, pg_backend_pid: 511
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrating ==============
-- execute("LOCK TABLE file_registry IN EXCLUSIVE MODE")
 -> 0.0020s
-- execute("INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\nSELECT created_at, retry_at, file_id, bytes, retry_count, missing_on_primary, success, sha256::bytea\nFROM file_registry WHERE file_type = 'lfs'\n")
 -> 0.0030s
-- execute("CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()\nRETURNS trigger AS\n$BODY$\nBEGIN\n IF (TG_OP = 'UPDATE') THEN\n UPDATE lfs_object_registry\n SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =\n (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)\n WHERE lfs_object_id = NEW.file_id;\n ELSEIF (TG_OP = 'INSERT') THEN\n INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\n VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);\nEND IF;\nRETURN NEW;\nEND;\n$BODY$\nLANGUAGE 'plpgsql'\nVOLATILE;\n")
 -> 0.0022s
-- execute("CREATE TRIGGER replicate_lfs_object_registry\nAFTER INSERT OR UPDATE ON file_registry\nFOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();\n")
 -> 0.0014s
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrated (0.0197s) =====
geo: == [advisory_lock_connection] object_id: 131692020, pg_backend_pid: 511
 updates a new lfs object with the trigger
geo: == [advisory_lock_connection] object_id: 132109600, pg_backend_pid: 517
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrating ==============
-- execute("LOCK TABLE file_registry IN EXCLUSIVE MODE")
 -> 0.0019s
-- execute("INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\nSELECT created_at, retry_at, file_id, bytes, retry_count, missing_on_primary, success, sha256::bytea\nFROM file_registry WHERE file_type = 'lfs'\n")
 -> 0.0024s
-- execute("CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()\nRETURNS trigger AS\n$BODY$\nBEGIN\n IF (TG_OP = 'UPDATE') THEN\n UPDATE lfs_object_registry\n SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =\n (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)\n WHERE lfs_object_id = NEW.file_id;\n ELSEIF (TG_OP = 'INSERT') THEN\n INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\n VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);\nEND IF;\nRETURN NEW;\nEND;\n$BODY$\nLANGUAGE 'plpgsql'\nVOLATILE;\n")
 -> 0.0020s
-- execute("CREATE TRIGGER replicate_lfs_object_registry\nAFTER INSERT OR UPDATE ON file_registry\nFOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();\n")
 -> 0.0012s
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrated (0.0175s) =====
geo: == [advisory_lock_connection] object_id: 132109600, pg_backend_pid: 517
 creates a new lfs object using the next ID
 #down
geo: == [advisory_lock_connection] object_id: 132526300, pg_backend_pid: 523
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrating ==============
-- execute("LOCK TABLE file_registry IN EXCLUSIVE MODE")
 -> 0.0018s
-- execute("INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\nSELECT created_at, retry_at, file_id, bytes, retry_count, missing_on_primary, success, sha256::bytea\nFROM file_registry WHERE file_type = 'lfs'\n")
 -> 0.0024s
-- execute("CREATE OR REPLACE FUNCTION replicate_lfs_object_registry()\nRETURNS trigger AS\n$BODY$\nBEGIN\n IF (TG_OP = 'UPDATE') THEN\n UPDATE lfs_object_registry\n SET (retry_at, bytes, retry_count, missing_on_primary, success, sha256) =\n (NEW.retry_at, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea)\n WHERE lfs_object_id = NEW.file_id;\n ELSEIF (TG_OP = 'INSERT') THEN\n INSERT INTO lfs_object_registry (created_at, retry_at, lfs_object_id, bytes, retry_count, missing_on_primary, success, sha256)\n VALUES (NEW.created_at, NEW.retry_at, NEW.file_id, NEW.bytes, NEW.retry_count, NEW.missing_on_primary, NEW.success, NEW.sha256::bytea);\nEND IF;\nRETURN NEW;\nEND;\n$BODY$\nLANGUAGE 'plpgsql'\nVOLATILE;\n")
 -> 0.0020s
-- execute("CREATE TRIGGER replicate_lfs_object_registry\nAFTER INSERT OR UPDATE ON file_registry\nFOR EACH ROW WHEN (NEW.file_type = 'lfs') EXECUTE PROCEDURE replicate_lfs_object_registry();\n")
 -> 0.0014s
== 20191010204941 MigrateLfsObjectsToSeparateRegistry: migrated (0.0172s) =====
geo: == [advisory_lock_connection] object_id: 132526300, pg_backend_pid: 523
 rolls back data properly
Knapsack report was generated. Preview:
{
 "ee/spec/migrations/update_can_create_group_application_setting_spec.rb": 54.67271028500005,
 "ee/spec/migrations/20220411173544_cleanup_orphans_approval_project_rules_spec.rb": 43.62545811399997,
 "ee/spec/migrations/schedule_delete_invalid_epic_issues_revised_spec.rb": 46.000359977000016,
 "ee/spec/migrations/async_build_trace_expire_at_index_spec.rb": 46.81068309799991,
 "ee/spec/migrations/20230127155217_add_id_column_to_package_metadata_join_table_spec.rb": 30.606416373000002,
 "ee/spec/migrations/20221130192239_fix_approval_project_rules_without_protected_branches_spec.rb": 30.62142455000003,
 "ee/spec/migrations/20230113201308_backfill_namespace_ldap_settings_spec.rb": 29.353513514999918,
 "ee/spec/migrations/20230310213308_sync_security_policy_rule_schedules_that_may_have_been_deleted_by_a_bug_spec.rb": 19.27944900099999,
 "ee/spec/migrations/geo/migrate_ci_job_artifacts_to_separate_registry_spec.rb": 12.928486810000095,
 "ee/spec/migrations/geo/migrate_lfs_objects_to_separate_registry_spec.rb": 12.458778561000145
}
Knapsack global time execution for tests: 05m 26s
Finished in 12 minutes 1 second (files took 1 minute 23.08 seconds to load)
28 examples, 0 failures
Randomized with seed 57154
[TEST PROF INFO] Time spent in factories: 00:00.092 (0.01% of total time)
RSpec exited with 0.
No examples to retry, congrats!
Not uploading cache ruby-gems-debian-bullseye-ruby-3.0-16 due to policy
Uploading artifacts...
coverage/: found 5 matching artifact files and directories
crystalball/: found 2 matching artifact files and directories
WARNING: deprecations/: no matching files. Ensure that the artifact path is relative to the working directory (/builds/gitlab-org/gitlab)
knapsack/: found 4 matching artifact files and directories
WARNING: query_recorder/: no matching files. Ensure that the artifact path is relative to the working directory (/builds/gitlab-org/gitlab)
rspec/: found 16 matching artifact files and directories
WARNING: tmp/capybara/: no matching files. Ensure that the artifact path is relative to the working directory (/builds/gitlab-org/gitlab)
log/*.log: found 13 matching artifact files and directories
WARNING: Upload request redirected location=https://gitlab.com/api/v4/jobs/4400964797/artifacts?artifact_format=zip&artifact_type=archive&expire_in=31d new-url=https://gitlab.com
WARNING: Retrying... context=artifacts-uploader error=request redirected
Uploading artifacts as "archive" to coordinator... 201 Created id=4400964797 responseStatus=201 Created token=64_Dwg5s
Uploading artifacts...
rspec/rspec-*.xml: found 1 matching artifact files and directories
WARNING: Upload request redirected location=https://gitlab.com/api/v4/jobs/4400964797/artifacts?artifact_format=gzip&artifact_type=junit&expire_in=31d new-url=https://gitlab.com
WARNING: Retrying... context=artifacts-uploader error=request redirected
Uploading artifacts as "junit" to coordinator... 201 Created id=4400964797 responseStatus=201 Created token=64_Dwg5s
Job succeeded