`revert-pg-upgrade` fails to downgrade a Geo secondary’s standalone tracking DB’s PG data
After running `revert-pg-upgrade` on the Geo secondary's standalone tracking DB PG node, the PG data is not downgraded, so the PG server doesn't start:

```
2020-04-10_00:21:01.27720 FATAL: database files are incompatible with server
2020-04-10_00:21:01.27722 DETAIL: The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 10.12.
```

### Further consequences

The DB seems to be left in a fairly difficult-to-recover state, since `pg-upgrade` doesn't work from this point either:

```
root@mkozono-omnibus4838b-secondary-geo:~# gitlab-ctl pg-upgrade --target-version=11
Checking for an omnibus managed postgresql: OK
Checking if postgresql['version'] is set: OK
Checking if we already upgraded: NOT OK
Checking for a newer version of PostgreSQL to install
Upgrading PostgreSQL to 11.7
Checking if PostgreSQL bin files are symlinked to the expected location: OK
Starting the geo database
Waiting 30 seconds to ensure tasks complete before PostgreSQL upgrade.
See https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server for details
If you do not want to upgrade the PostgreSQL server at this time, enter Ctrl-C and see the documentation for details
Please hit Ctrl-C now if you want to cancel the operation.
Toggling deploy page:cp /opt/gitlab/embedded/service/gitlab-rails/public/deploy.html /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: down: crond: 0s, normally up
ok: down: grafana: 1s, normally up
ok: down: logrotate: 0s, normally up
Toggling services: OK
There was an error fetching locale and encoding information from the database
Please ensure the database is running and functional before running pg-upgrade
STDOUT:
STDERR: psql: could not connect to server: No such file or directory
        Is the server running locally and accepting connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
== Fatal error ==
Please check error logs
== Reverting ==
Symlink correct version of binaries: OK
== Reverted ==
== Reverted to 10.12. Please check output for what went wrong ==
Toggling deploy page:rm -f /opt/gitlab/embedded/service/gitlab-rails/public/index.html
Toggling deploy page: OK
Toggling services:ok: run: crond: (pid 15140) 1s
ok: run: grafana: (pid 15148) 0s
ok: run: logrotate: (pid 15158) 1s
Toggling services: OK
```

Losing the tracking DB data isn't the absolute worst outcome, since you can wipe it and resync everything, but for a large instance that could take a long time, and if this Geo secondary is used for DR purposes, you are unprotected during that time. Setting gitlab~3713902 for that reason, though this is IMO a low-priority issue.
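For reference, the startup failure above is PostgreSQL's own major-version check: the `PG_VERSION` file in the data directory records the major version that initialized it, and the server refuses to start when that doesn't match its own binaries. A minimal sketch of that comparison, using a temporary directory as a stand-in for the real data directory (on an Omnibus node the tracking DB data directory path would differ; the paths and values here are illustrative only):

```shell
#!/bin/sh
# Simulate the state after the failed revert: a data directory still
# initialized by PG 11, while the reverted server binaries are 10.x.
DATA_DIR="$(mktemp -d)"
echo "11" > "${DATA_DIR}/PG_VERSION"   # major version that initialized the data dir
SERVER_MAJOR="10"                      # major version of the (reverted) server binaries

DATA_MAJOR="$(cat "${DATA_DIR}/PG_VERSION")"
if [ "${DATA_MAJOR}" != "${SERVER_MAJOR}" ]; then
  # Mirrors the FATAL "database files are incompatible with server" condition
  echo "incompatible: data dir is PG ${DATA_MAJOR}, server is PG ${SERVER_MAJOR}"
else
  echo "compatible"
fi
rm -rf "${DATA_DIR}"
```

This is why neither starting the old server nor re-running `pg-upgrade` gets past this point: the upgrade task needs the database running to fetch locale/encoding information, but the server can't start against the newer data directory.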