Flaky test: Release DB proxy's host when clearing configuration
Fix the flaky test in #325786 (comment 536276808)
When running the database load balancing tests in order, the second test fails:

```shell
$ bundle exec rspec ee/spec/lib/gitlab/database/load_balancing_spec.rb ee/spec/lib/gitlab/database/load_balancing/connection_proxy_spec.rb

All examples were filtered out; ignoring {:focus=>true}

....................................-- create_table(:load_balancing_test, {:force=>true})
   -> 0.0084s
.........................-- drop_table(:load_balancing_test, {:force=>true})
   -> 0.0025s
.............-- create_table(:connection_proxy_bulk_insert, {:force=>true})
   -> 0.0050s
-- drop_table(:connection_proxy_bulk_insert, {:force=>true})
   -> 0.0014s
F............
```
```plaintext
Failures:

  1) Gitlab::Database::LoadBalancing::ConnectionProxy.insert_all! inserts data in bulk
     Failure/Error:
       expect do
         model_class.insert_all! [
           { name: "item1" },
           { name: "item2" }
         ]
       end.to change { model_class.count }.by(2)

     ActiveRecord::StatementInvalid:
       PG::UndefinedTable: ERROR:  relation "connection_proxy_bulk_insert" does not exist
       LINE 1: SELECT COUNT(*) FROM "connection_proxy_bulk_insert" /*applic...
                                    ^
     # ./ee/lib/gitlab/database/load_balancing/connection_proxy.rb:90:in `block in read_using_load_balancer'
     # ./ee/lib/gitlab/database/load_balancing/load_balancer.rb:40:in `read'
     # ./ee/lib/gitlab/database/load_balancing/connection_proxy.rb:89:in `read_using_load_balancer'
     # ./ee/lib/gitlab/database/load_balancing/connection_proxy.rb:46:in `select_all'
     # ./ee/spec/lib/gitlab/database/load_balancing/connection_proxy_spec.rb:104:in `block (4 levels) in <main>'
     # ./ee/spec/lib/gitlab/database/load_balancing/connection_proxy_spec.rb:99:in `block (3 levels) in <main>'
     # ./spec/spec_helper.rb:365:in `block (3 levels) in <top (required)>'
     # ./spec/support/sidekiq_middleware.rb:9:in `with_sidekiq_server_middleware'
     # ./spec/spec_helper.rb:356:in `block (2 levels) in <top (required)>'
     # ./spec/spec_helper.rb:350:in `block (2 levels) in <top (required)>'
     # ------------------
     # --- Caused by: ---
     # PG::UndefinedTable:
     #   ERROR:  relation "connection_proxy_bulk_insert" does not exist
     #   LINE 1: SELECT COUNT(*) FROM "connection_proxy_bulk_insert" /*applic...
     #                                ^
     #   ./ee/lib/gitlab/database/load_balancing/connection_proxy.rb:90:in `block in read_using_load_balancer'

Finished in 19.73 seconds (files took 46.22 seconds to load)
87 examples, 1 failure

Failed examples:

rspec ./ee/spec/lib/gitlab/database/load_balancing/connection_proxy_spec.rb:90 # Gitlab::Database::LoadBalancing::ConnectionProxy.insert_all! inserts data in bulk
```
Here is the root cause of the failure:
- The default database cleaner strategy is `transaction`: all SQL queries inside a test are wrapped in one big transaction, which is rolled back after the test finishes. That transaction is fetched from `ActiveRecord::Base.connection`. The state and results of those queries are invisible to other connections, even ones that connect to the same database in the same process.
- In `ee/spec/lib/gitlab/database/load_balancing_spec.rb`, which is an integration test, the database load balancer is configured for real, with:

  ```ruby
  let(:hosts) { [ActiveRecord::Base.configurations["development"]['host']] }

  before do
    subject.configure_proxy(::Gitlab::Database::LoadBalancing::ConnectionProxy.new(hosts))
  end
  ```

  The load balancer creates a new connection pool for each host. Unfortunately, when picking the host to fetch a connection from, the load balancer caches the host in the request store:

  ```ruby
  def host
    RequestStore[CACHE_KEY] ||= @host_list.next
  end
  ```

- In the next spec file (`ee/spec/lib/gitlab/database/load_balancing/connection_proxy_spec.rb`), the load balancer returns a connection from the host cached by the previous test. That connection is different from the connection the database cleaner used to define the test table. Hence the "relation does not exist" error.
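The visibility issue in the first point can be illustrated with a toy model (no real database; the classes below are stand-ins, not GitLab or ActiveRecord APIs): each connection sees committed state plus only its own uncommitted changes, so a table created inside the never-committed test transaction is invisible to any other connection.

```ruby
# Toy model of transactional test isolation. A connection sees the
# committed tables plus whatever it created inside its own (still open,
# never committed) transaction -- and nothing another connection created.
class ToyDatabase
  def initialize
    @committed_tables = []
  end

  def connection
    ToyConnection.new(@committed_tables)
  end
end

class ToyConnection
  def initialize(committed_tables)
    @committed = committed_tables
    @uncommitted = [] # per-connection, rolled back after the test
  end

  # DDL issued inside the test transaction stays uncommitted
  def create_table(name)
    @uncommitted << name
  end

  def table_exists?(name)
    (@committed + @uncommitted).include?(name)
  end
end

db = ToyDatabase.new
cleaner_conn  = db.connection # stands in for ActiveRecord::Base.connection
balancer_conn = db.connection # stands in for the load balancer's pool

cleaner_conn.create_table("connection_proxy_bulk_insert")

puts cleaner_conn.table_exists?("connection_proxy_bulk_insert")  # true
puts balancer_conn.table_exists?("connection_proxy_bulk_insert") # false: PG::UndefinedTable
```

This is why the query through the load balancer's connection raises `PG::UndefinedTable` even though the table "exists" from the cleaner connection's point of view.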
The solution is straightforward: clean up and release the cached host when clearing the database load balancing configuration.
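A minimal sketch of that idea, with the `RequestStore` gem replaced by a plain `Hash` stand-in so the example is self-contained (`release_host` and the class below are illustrative names, not the exact GitLab API):

```ruby
# Hash stand-in for the RequestStore gem's per-request cache.
RequestStore = {}

class LoadBalancer
  CACHE_KEY = :load_balancer_host

  def initialize(hosts)
    @host_list = hosts.cycle
  end

  # The bug: this cached host can outlive the configuration that
  # produced it, handing out connections from a stale pool.
  def host
    RequestStore[CACHE_KEY] ||= @host_list.next
  end

  # The fix: drop the cached host when the load balancing
  # configuration is cleared, so the next caller picks a fresh one.
  def release_host
    RequestStore.delete(CACHE_KEY)
  end
end

balancer = LoadBalancer.new(%w[host-a host-b])
first = balancer.host   # => "host-a" (now cached)
balancer.release_host   # clear the cache, as the fix does on teardown
second = balancer.host  # => "host-b", not the stale cached host

puts first
puts second
```

With `release_host` called from the teardown that clears the proxy configuration, the next test can no longer inherit a connection pool from a previous test's host.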