DLE: Release 3.4.0 testing
Release notes:
New features

- (test 1, 2, 3) Enable Standard Edition - !693 (merged), !723 (merged)
- (test) Add an ability to ignore errors that occurred during logical data dump/restore - !699 (merged), !700 (merged)
- (test) Improved Logs view and added log filters - !702 (merged)

Improvements and fixes

- (test) Use aurora Docker images for RDS Aurora instances - !668 (merged)
- (test) Polish UI of the Log tab - !672 (merged)
- (test) Allow viewing instance configuration without retrieving jobs and in physical mode - !674 (merged)
- (internal) Modify UI pipeline rules - !676 (merged)
- (test) Collapse/expand for the right-side panel in DLE UI - !677 (merged), !681 (merged)
- (test) Prevent crash UI when project id doesn't exist - !684 (merged)
- (test) Report instanceID in API and CLI output - !686 (merged)
- (test) Enable debug mode by default - !687 (merged)
- (test) Add system stats to the EngineStarted telemetry event - !688 (merged)
- (test) Add hint about Postgres logs - !685 (merged)
- (internal, platform) Fix/correct links for Platform UI - !689 (merged)
- (internal, platform) Remove sentry initialization - !690 (merged)
- (test) Add button to sign out users from UI - !678 (merged)
- (test) Parse timezone of the lastActivity date - !692 (merged)
- (test) Add alerts about refreshing errors to the state of the instance - !696 (merged)
- (test) Mask sensitive data in the DLE logs output - !705 (merged)
- (internal, platform) Oauth checkboxes for Platform UI instances - !703 (merged)
- (test) Improve UI configuration form - !698 (merged)
- (related to MR !674 (merged)) Check in UI if retrieval mode is unknown - !669 (merged)
- (internal, platform) Disable token field on edit - !706 (merged), !707 (merged)
- (test) Clean up data during full data refresh in logical mode. New config option to skip full refresh during startup time: skipStartRefresh - !704 (merged)
- (test) Choose the last snapshot when reset to the latest snapshot is requested - !708 (merged)
- (internal, platform) Project label enhancements - !709 (merged), !710 (merged)
- (test) Get rid of experimental estimator - !716 (merged)
- (test) Create client binaries for Apple silicon - !719 (merged)
- (test) Soft limits for billing - !718 (merged)
Platform UI and SE
Activity
- Vitaliy Kukharik assigned to @vitabaks
Test 1. Enable Standard Edition (activate billing)
- DLE image: registry.gitlab.com/postgres-ai/database-lab/dblab-server:master
- UI image: registry.gitlab.com/postgres-ai/database-lab/ce-ui:se-billing-ui
Prepare: change platform.url:

grep "^ url:" roles/dle/templates/server.yml.j2
url: "https://v2.postgres.ai/api/general"
Test result: not passed

- Failed to post telemetry request: failed to perform request: unsuccessful status given: 424. Once resolved, re-activate DLE

Re-test with new organization
Run Ansible playbook:
docker run --rm -it --env GCP_SERVICE_ACCOUNT_CONTENTS=${GCP_SERVICE_ACCOUNT_CONTENTS} \
  postgresai/dle-se-ansible:latest \
  ansible-playbook deploy_dle.yml --extra-vars \
  "provision='gcp' \
  server_name='dle-test' \
  server_type='n2-standard-4' \
  server_image='projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts' \
  server_location='europe-west4' \
  volume_size='100' \
  dle_verification_token='pDhQFeZIlA6w3jf3o4PwvTncVUpydh5D' \
  dle_platform_org_key='L5zs1x9HRnhJ4InEOfFnJeDv' \
  ssh_public_keys='ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAgEA0TU9YoE5MwvOK*******whzcMINzKKCc7AVGbk= vitaliy@postgres.ai' \
  dle_platform_project_name='dle-test' \
  dle_platform_url='https://v2.postgres.ai/api/general' \
  dle_ui_image='registry.gitlab.com/postgres-ai/database-lab/ce-ui:501-se-bug-fixes'"
Ansible log:
PLAY [Perform pre-checks] ************************************************************************************************************************************************************************************** PLAY [Provision of cloud resources (virtual machine + disk) for DLE (on GCP)] ********************************************************************************************************************************** TASK [Gathering Facts] ***************************************************************************************************************************************************************************************** ok: [localhost] TASK [provision : Generate a new temporary ssh key to access the server when deploying DLE] ******************************************************************************************************************** changed: [localhost] TASK [provision : set_fact: ssh_key_name and ssh_key_content] ************************************************************************************************************************************************** ok: [localhost] TASK [provision : Make sure the python3-pip package are present on controlling host] *************************************************************************************************************************** ok: [localhost -> 127.0.0.1] TASK [provision : Install google-auth dependency on controlling host] ****************************************************************************************************************************************** ok: [localhost -> 127.0.0.1] TASK [provision : GCP: Gather information about project] ******************************************************************************************************************************************************* ok: [localhost] TASK [provision : GCP: Create or modify a instance dle-test] *************************************************************************************************************************************************** changed: [localhost] TASK [provision : Show GCP instance info] ********************************************************************************************************************************************************************** ok: [localhost] => { "msg": [ "instance ID is 7780057935256493633", "instance Name is dle-test", "instance Image is ubuntu-2204-lts", "instance Type is n2-standard-4", "Block Storage Size is 100 gigabytes", "Public IP is 34.141.210.224", "Private IP is 10.164.0.2" ] } TASK [provision : set_fact: dle_host (for deploy DLE)] ********************************************************************************************************************************************************* ok: [localhost] TASK [provision : Wait for the root@34.141.210.224 to be available via ssh] ************************************************************************************************************************************ ok: [localhost] PLAY [add_host to dle_group] *********************************************************************************************************************************************************************************** TASK [Add root@34.141.210.224 to dle_group] ******************************************************************************************************************************************************************** ok: [localhost] PLAY [set_fact for dle_group] 
********************************************************************************************************************************************************************************** TASK [set_fact: ansible_ssh_private_key_file] ****************************************************************************************************************************************************************** ok: [root@34.141.210.224] PLAY [Deploy DLE] ********************************************************************************************************************************************************************************************** TASK [Gathering Facts] ***************************************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [Include OS-specific variables] *************************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [Update apt cache] **************************************************************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [Make sure the gnupg and apt-transport-https packages are present] **************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [add-repository : Add repository apt-key] ***************************************************************************************************************************************************************** changed: [root@34.141.210.224] => (item={'key': 'https://download.docker.com/linux/ubuntu/gpg'}) changed: [root@34.141.210.224] => (item={'key': 'https://www.postgresql.org/media/keys/ACCC4CF8.asc'}) [WARNING]: Module remote_tmp /tmp/root/ansible did not exist and was created with a mode of 0700, this may cause issues when running as another user. 
To avoid this, create the remote_tmp dir with the correct permissions manually TASK [add-repository : Add repository] ************************************************************************************************************************************************************************* changed: [root@34.141.210.224] => (item={'repo': 'deb https://apt.postgresql.org/pub/repos/apt/ jammy-pgdg main'}) changed: [root@34.141.210.224] => (item={'repo': 'deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable'}) TASK [packages : Install system packages] ********************************************************************************************************************************************************************** ok: [root@34.141.210.224] => (item=apt-transport-https) ok: [root@34.141.210.224] => (item=ca-certificates) changed: [root@34.141.210.224] => (item=gnupg-agent) ok: [root@34.141.210.224] => (item=python3-software-properties) changed: [root@34.141.210.224] => (item=python3-docker) ok: [root@34.141.210.224] => (item=software-properties-common) ok: [root@34.141.210.224] => (item=curl) changed: [root@34.141.210.224] => (item=gnupg2) changed: [root@34.141.210.224] => (item=zfsutils-linux) changed: [root@34.141.210.224] => (item=docker-ce) ok: [root@34.141.210.224] => (item=docker-ce-cli) ok: [root@34.141.210.224] => (item=containerd.io) changed: [root@34.141.210.224] => (item=postgresql-client) changed: [root@34.141.210.224] => (item=s3fs) changed: [root@34.141.210.224] => (item=jq) TASK [zpool : Check if ZFS pool is already exist] ************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [zpool : Detect empty volume] ***************************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [zpool : Create zpool (use /dev/sdb)] ********************************************************************************************************************************************************************* changed: [root@34.141.210.224] TASK [zpool : Check the number of datasets for the dblab_pool] ************************************************************************************************************************************************* ok: [root@34.141.210.224] TASK [zpool : Create datasets] ********************************************************************************************************************************************************************************* changed: [root@34.141.210.224] => (item=dataset_1) changed: [root@34.141.210.224] => (item=dataset_2) TASK [dle : Create configuration and logs directories] ********************************************************************************************************************************************************* changed: [root@34.141.210.224] => (item=/root/.dblab/engine/configs) changed: [root@34.141.210.224] => (item=/root/.dblab/engine/meta) changed: [root@34.141.210.224] => (item=/root/.dblab/engine/logs) changed: [root@34.141.210.224] => (item=/var/lib/dblab/dblab_pool/dataset_1/dump) TASK [dle : Generate DLE configuration file "/root/.dblab/engine/configs/server.yml"] ************************************************************************************************************************** changed: [root@34.141.210.224] TASK [dle : Create pending.retrieval] 
************************************************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [dle : Make sure that the DLE container is running] ******************************************************************************************************************************************************* changed: [root@34.141.210.224] TASK [dle : Wait for DLE port to be available] ***************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [dle : Check that DLE is healthy] ************************************************************************************************************************************************************************* ok: [root@34.141.210.224] TASK [dle : Check that DLE UI is available] ******************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [cli : Download dblab CLI binary] ************************************************************************************************************************************************************************* changed: [root@34.141.210.224] TASK [cli : Copy dblab CLI file to /usr/local/bin/] ************************************************************************************************************************************************************ changed: [root@34.141.210.224] TASK [cli : Create CLI configuration directory] **************************************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [cli : Generate CLI conf file "/root/.dblab/cli/cli.yml"] ************************************************************************************************************************************************* changed: [root@34.141.210.224] TASK [netdata : Copy DLE plugin configuration for Netdata] ***************************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [netdata : Start Netdata container] *********************************************************************************************************************************************************************** changed: [root@34.141.210.224] TASK [netdata : Check Netdata is running] ********************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [authorized-keys : Get the system username] *************************************************************************************************************************************************************** ok: [root@34.141.210.224] TASK [authorized-keys : Add public keys to ~root/.ssh/authorized_keys] ***************************************************************************************************************************************** changed: [root@34.141.210.224] => (item=None) changed: [root@34.141.210.224] TASK [authorized-keys : Remove a temporary ssh public key from the server] ************************************************************************************************************************************* 
changed: [root@34.141.210.224] TASK [deploy-finish : DLE connection info] ********************************************************************************************************************************************************************* ok: [root@34.141.210.224] => { "msg": [ "DLE Server local URL: http://127.0.0.1:2345", "DLE UI local URL: http://127.0.0.1:2346", "DLE verification token: pDhQFeZIlA6w3jf3o4PwvTncVUpydh5D", "Monitoring (Netdata) local URL: http://127.0.0.1:19999", "DLE public URL: Not available. The proxy server is not installed. Use SSH port forwarding to access DLE.", "SSH-port-forwarding:", " - For DLE API: 'ssh -N -L 2345:127.0.0.1:2345 root@34.141.210.224 -i YOUR_PRIVATE_KEY'", " - For DLE UI: 'ssh -N -L 2346:127.0.0.1:2346 root@34.141.210.224 -i YOUR_PRIVATE_KEY'", " - To monitoring tool (Netdata): 'ssh -N -L 19999:127.0.0.1:19999 root@34.141.210.224 -i YOUR_PRIVATE_KEY'" ] } RUNNING HANDLER [dle : Perform handler tasks] ****************************************************************************************************************************************************************** included: /dle-se-ansible/roles/dle/tasks/restart_dle.yml for root@34.141.210.224 RUNNING HANDLER [dle : Restart DLE container] ****************************************************************************************************************************************************************** changed: [root@34.141.210.224] RUNNING HANDLER [dle : Wait for DLE port to be available] ****************************************************************************************************************************************************** ok: [root@34.141.210.224] RUNNING HANDLER [dle : Check that DLE is healthy] ************************************************************************************************************************************************************** ok: [root@34.141.210.224] RUNNING HANDLER [dle : Check that DLE UI is available] ********************************************************************************************************************************************************* ok: [root@34.141.210.224] PLAY RECAP ***************************************************************************************************************************************************************************************************** localhost : ok=11 changed=2 unreachable=0 failed=0 skipped=76 rescued=0 ignored=0 root@34.141.210.224 : ok=36 changed=20 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0
Result:
- Subscription is active.
DLE UI - click the "re-activate DLE" button
Test result: passed
Try to deploy the Platform UI image from !691 (closed)
upd: https://gitlab.com/postgres-ai/database-lab/-/jobs/4132548120

My org key has changed: it was TUzA6HSq0Hyts6lrYnZTseWl, now it is pdNQAHWitjCwgkmhToece5Vp

Re-try with the new org_key:
ansible-playbook deploy_dle.yml --extra-vars \
  "provision='hetzner' \
  server_name='vitaliy-dle-test' \
  server_type='CX21' \
  server_image='ubuntu-22.04' \
  server_location='fsn1' \
  volume_size=30 \
  dle_verification_token='7LGH7jYt14nthvMhA8BdqtDNxleBQL2t' \
  dle_platform_project_name='vitaliy-dle-test' \
  dle_platform_org_key='pdNQAHWitjCwgkmhToece5Vp' \
  dle_image='registry.gitlab.com/postgres-ai/database-lab/dblab-server:master' \
  dle_ui_image='registry.gitlab.com/postgres-ai/database-lab/ce-ui:se-billing-ui'"
result:
- No active payment methods are found for your organization on the Postgres.ai Platform; please, visit the billing page. Once resolved, re-activate DLE
Test result: not passed

Try to activate billing on Stripe (manually):
vitabaks@MacBook-Pro-Vitaliy ~ % curl --location --request POST 'https://v2.postgres.ai/api/general/rpc/billing_portal_start_session' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer ***********' \
  --header 'Accept: application/vnd.pgrst.object+json' \
  --data-raw '{ "org_id": 55, "return_url": "https://console-dev.postgres.ai/demo/billing" }'

{"res":{"result": "OK", "details": {"content": {"id": "*******", "url": "https://billing.stripe.com/p/session/******", "flow": null, "locale": null, "object": "billing_portal.session", "created": 1681844106, "customer": "****", "livemode": false, "return_url": "https://console-dev.postgres.ai/demo/billing", "on_behalf_of": null, "configuration": "*******"}, "message": "OK", "status_code": 200}}}
Add a payment method.
DLE is now activated.

Re-test with the new organization: passed.
- Vitaliy Kukharik mentioned in merge request !723 (merged)
- Vitaliy Kukharik changed the description
Test 2. Enable Standard Edition (active billing)
- DLE image: registry.gitlab.com/postgres-ai/database-lab/dblab-server:master
- UI image: registry.gitlab.com/postgres-ai/database-lab/ce-ui:master
Prepare: change platform.url:

grep "^ url:" roles/dle/templates/server.yml.j2
url: "https://v2.postgres.ai/api/general"
Test result: passed (after fixes)
Run Ansible playbook:
export HCLOUD_API_TOKEN=********
ansible-playbook deploy_dle.yml --extra-vars \
  "provision='hetzner' \
  server_name='vitaliy-dle-test' \
  server_type='CCX21' \
  server_image='ubuntu-22.04' \
  server_location='hel1' \
  volume_size='100' \
  dle_verification_token='pZxSg5veeEnItytXbZER0SEButIYvicL' \
  dle_platform_org_key='pdNQAHWitjCwgkmhToece5Vp' \
  dle_platform_project_name='vitaliy-dle-test' \
  dle_image='registry.gitlab.com/postgres-ai/database-lab/dblab-server:master' \
  dle_ui_image='registry.gitlab.com/postgres-ai/database-lab/ce-ui:master'"
Ansible log:
PLAY [Perform pre-checks] ************************************************************************************************************************************************************************************** PLAY [Provision of cloud resources (virtual machine + disk) for DLE (on HETZNER)] ****************************************************************************************************************************** TASK [Gathering Facts] ***************************************************************************************************************************************************************************************** ok: [localhost] TASK [provision : Generate a new temporary ssh key to access the server when deploying DLE] ******************************************************************************************************************** changed: [localhost] TASK [provision : set_fact: ssh_key_name and ssh_key_content] ************************************************************************************************************************************************** ok: [localhost] TASK [provision : Install hcloud dependency on controlling host] *********************************************************************************************************************************************** ok: [localhost -> 127.0.0.1] TASK [provision : Hetzner Cloud: Add ssh key dle_key_tmp to the cloud] ***************************************************************************************************************************************** ok: [localhost] TASK [provision : Hetzner Cloud: Gather information about SSH key dle_key_tmp] ********************************************************************************************************************************* ok: [localhost] TASK [provision : Hetzner Cloud: set_fact ssh_key_names] ******************************************************************************************************************************************************* ok: [localhost] => (item=None) ok: [localhost] TASK [provision : Hetzner Cloud: Gather information about SSH keys] ******************************************************************************************************************************************** ok: [localhost] TASK [provision : Hetzner Cloud: Get the names of all SSH keys] ************************************************************************************************************************************************ ok: [localhost] => (item=nik) ok: [localhost] => (item=agneum@gmail.com) ok: [localhost] => (item=vitaliy) ok: [localhost] => (item=dmitry) ok: [localhost] => (item=dle_key_tmp) TASK [provision : Hetzner Cloud: Create or modify server vitaliy-dle-test] ************************************************************************************************************************************* changed: [localhost] TASK [provision : Hetzner Cloud: Create or modify volume for server vitaliy-dle-test] ************************************************************************************************************************** changed: [localhost] TASK [provision : Show Server info] **************************************************************************************************************************************************************************** ok: [localhost] => { "server_result.hcloud_server": { "backup_window": null, "datacenter": "hel1-dc2", "delete_protection": false, "id": "31460164", "image": "ubuntu-22.04", "ipv4_address": "95.216.139.61", "ipv6": null, "labels": 
{}, "location": "hel1", "name": "vitaliy-dle-test", "placement_group": null, "private_networks": [], "rebuild_protection": false, "rescue_enabled": false, "server_type": "ccx21", "status": "running" } } TASK [provision : set_fact: dle_host (for deploy DLE)] ********************************************************************************************************************************************************* ok: [localhost] TASK [provision : Wait for the root@95.216.139.61 to be available via ssh] ************************************************************************************************************************************* ok: [localhost] TASK [provision : Hetzner Cloud: Remove a temporary ssh key dle_key_tmp from the cloud] ************************************************************************************************************************ changed: [localhost] PLAY [add_host to dle_group] *********************************************************************************************************************************************************************************** TASK [Add root@95.216.139.61 to dle_group] ********************************************************************************************************************************************************************* ok: [localhost] PLAY [set_fact for dle_group] ********************************************************************************************************************************************************************************** TASK [set_fact: ansible_ssh_private_key_file] ****************************************************************************************************************************************************************** ok: [root@95.216.139.61] PLAY [Deploy DLE] ********************************************************************************************************************************************************************************************** TASK [Gathering Facts] ***************************************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [Include OS-specific variables] *************************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [Update apt cache] **************************************************************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [Make sure the gnupg and apt-transport-https packages are present] **************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [add-repository : Add repository apt-key] ***************************************************************************************************************************************************************** changed: [root@95.216.139.61] => (item={'key': 'https://download.docker.com/linux/ubuntu/gpg'}) changed: [root@95.216.139.61] => (item={'key': 'https://www.postgresql.org/media/keys/ACCC4CF8.asc'}) [WARNING]: Module remote_tmp /tmp/root/ansible did not exist and was created with a mode of 0700, this may cause issues when running as another user. 
To avoid this, create the remote_tmp dir with the correct permissions manually TASK [add-repository : Add repository] ************************************************************************************************************************************************************************* changed: [root@95.216.139.61] => (item={'repo': 'deb https://apt.postgresql.org/pub/repos/apt/ jammy-pgdg main'}) changed: [root@95.216.139.61] => (item={'repo': 'deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable'}) TASK [packages : Install system packages] ********************************************************************************************************************************************************************** ok: [root@95.216.139.61] => (item=apt-transport-https) ok: [root@95.216.139.61] => (item=ca-certificates) changed: [root@95.216.139.61] => (item=gnupg-agent) ok: [root@95.216.139.61] => (item=python3-software-properties) changed: [root@95.216.139.61] => (item=python3-docker) ok: [root@95.216.139.61] => (item=software-properties-common) ok: [root@95.216.139.61] => (item=curl) ok: [root@95.216.139.61] => (item=gnupg2) changed: [root@95.216.139.61] => (item=zfsutils-linux) changed: [root@95.216.139.61] => (item=docker-ce) ok: [root@95.216.139.61] => (item=docker-ce-cli) ok: [root@95.216.139.61] => (item=containerd.io) changed: [root@95.216.139.61] => (item=postgresql-client) changed: [root@95.216.139.61] => (item=s3fs) changed: [root@95.216.139.61] => (item=jq) TASK [zpool : Check if ZFS pool is already exist] ************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [zpool : Detect empty volume] ***************************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [zpool : Create zpool (use /dev/sdb)] ********************************************************************************************************************************************************************* changed: [root@95.216.139.61] TASK [zpool : Check the number of datasets for the dblab_pool] ************************************************************************************************************************************************* ok: [root@95.216.139.61] TASK [zpool : Create datasets] ********************************************************************************************************************************************************************************* changed: [root@95.216.139.61] => (item=dataset_1) changed: [root@95.216.139.61] => (item=dataset_2) TASK [dle : Create configuration and logs directories] ********************************************************************************************************************************************************* changed: [root@95.216.139.61] => (item=/root/.dblab/engine/configs) changed: [root@95.216.139.61] => (item=/root/.dblab/engine/meta) changed: [root@95.216.139.61] => (item=/root/.dblab/engine/logs) changed: [root@95.216.139.61] => (item=/var/lib/dblab/dblab_pool/dataset_1/dump) TASK [dle : Generate DLE configuration file "/root/.dblab/engine/configs/server.yml"] ************************************************************************************************************************** changed: [root@95.216.139.61] TASK [dle : Create pending.retrieval] 
************************************************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [dle : Make sure that the DLE container is running] ******************************************************************************************************************************************************* changed: [root@95.216.139.61] TASK [dle : Wait for DLE port to be available] ***************************************************************************************************************************************************************** ok: [root@95.216.139.61] FAILED - RETRYING: [root@95.216.139.61]: Check that DLE is healthy (10 retries left). TASK [dle : Check that DLE is healthy] ************************************************************************************************************************************************************************* ok: [root@95.216.139.61] TASK [dle : Check that DLE UI is available] ******************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [cli : Download dblab CLI binary] ************************************************************************************************************************************************************************* changed: [root@95.216.139.61] TASK [cli : Copy dblab CLI file to /usr/local/bin/] ************************************************************************************************************************************************************ changed: [root@95.216.139.61] TASK [cli : Create CLI configuration directory] **************************************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [cli : Generate CLI conf file "/root/.dblab/cli/cli.yml"] ************************************************************************************************************************************************* changed: [root@95.216.139.61] TASK [netdata : Copy DLE plugin configuration for Netdata] ***************************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [netdata : Start Netdata container] *********************************************************************************************************************************************************************** changed: [root@95.216.139.61] TASK [netdata : Check Netdata is running] ********************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [authorized-keys : Get the system username] *************************************************************************************************************************************************************** ok: [root@95.216.139.61] TASK [authorized-keys : Remove a temporary ssh public key from the server] ************************************************************************************************************************************* ok: [root@95.216.139.61] TASK [deploy-finish : DLE connection info] 
********************************************************************************************************************************************************************* ok: [root@95.216.139.61] => { "msg": [ "DLE Server local URL: http://127.0.0.1:2345", "DLE UI local URL: http://127.0.0.1:2346", "DLE verification token: pZxSg5veeEnItytXbZER0SEButIYvicL", "Monitoring (Netdata) local URL: http://127.0.0.1:19999", "DLE public URL: Not available. The proxy server is not installed. Use SSH port forwarding to access DLE.", "SSH-port-forwarding:", " - For DLE API: 'ssh -N -L 2345:127.0.0.1:2345 root@95.216.139.61 -i YOUR_PRIVATE_KEY'", " - For DLE UI: 'ssh -N -L 2346:127.0.0.1:2346 root@95.216.139.61 -i YOUR_PRIVATE_KEY'", " - To monitoring tool (Netdata): 'ssh -N -L 19999:127.0.0.1:19999 root@95.216.139.61 -i YOUR_PRIVATE_KEY'" ] } RUNNING HANDLER [dle : Perform handler tasks] ****************************************************************************************************************************************************************** included: /Users/vitabaks/dle-se-ansible/roles/dle/tasks/restart_dle.yml for root@95.216.139.61 RUNNING HANDLER [dle : Restart DLE container] ****************************************************************************************************************************************************************** changed: [root@95.216.139.61] RUNNING HANDLER [dle : Wait for DLE port to be available] ****************************************************************************************************************************************************** ok: [root@95.216.139.61] RUNNING HANDLER [dle : Check that DLE is healthy] ************************************************************************************************************************************************************** ok: [root@95.216.139.61] RUNNING HANDLER [dle : Check that DLE UI is available] ********************************************************************************************************************************************************* ok: [root@95.216.139.61] PLAY RECAP ***************************************************************************************************************************************************************************************************** localhost : ok=16 changed=4 unreachable=0 failed=0 skipped=71 rescued=0 ignored=0 root@95.216.139.61 : ok=35 changed=18 unreachable=0 failed=0 skipped=20 rescued=0 ignored=0
- Platform UI: after deploying DLE SE, the instances page is not displayed (white screen): https://console-dev.postgres.ai/vitaliy-test/instances
Observed in Safari and in Chrome (Version 112.0.5615.137 (Official Build) (arm64)).
There is no problem with the main page, and the rest of the pages also work; the problem is only with the "instances" page.
- SE UI:
- When I click on the "Test connection" button, nothing happens and I don't see any messages.
- The password is displayed in plain text. Is this how it should be when you first enter it?
- Maintainer:
Thank you for reporting.
Fixed in !725 (merged)

- Maintainer:
Regarding the problem with the "instances" page: the issue with the blank page has also been solved; it was caused by an instance with an empty state object. I think until now this was a mandatory object.

- Author:
Re-test with UI image: registry.gitlab.com/postgres-ai/database-lab/ce-ui:501-se-bug-fixes
Platform UI (instances page) - test result: passed
SE UI - test result: passed
Thanks @Adrinlol

- Author:
@Adrinlol I didn't notice right away, but the "Databases" field was specified twice
- Maintainer:
Good catch! Should be resolved now.
Test 3. Enable Standard Edition (inactive billing)
Test result: not passed (there are comments)
- Change the orgKey parameter:

root@vitaliy-dle-test:~# nano .dblab/engine/configs/server.yml
root@vitaliy-dle-test:~# grep orgKey .dblab/engine/configs/server.yml
orgKey: "pdNQAHWitjCwgkmhToece5Vp___"
root@vitaliy-dle-test:~# sudo docker restart dblab_server
dblab_server
- skip snapshotting: billing is not active
- expected behavior
- Comment out the orgKey parameter:

root@vitaliy-dle-test:~# nano .dblab/engine/configs/server.yml
root@vitaliy-dle-test:~# grep orgKey .dblab/engine/configs/server.yml
#orgKey: "pdNQAHWitjCwgkmhToece5Vp"
root@vitaliy-dle-test:~# sudo docker restart dblab_server
dblab_server
- skip snapshotting: billing is not active
- No organization key provided. Once resolved, re-activate DLE
- expected behavior
Try to create a clone:
- ERROR: {"code":"BAD_REQUEST","message":"billing is not active"}
- expected behavior
- Comment out the platform.url parameter:
result:

2023/04/19 21:42:23 main.go:82: [INFO] Database Lab Instance ID: ch05o0blf3bs73c6phdg
2023/04/19 21:42:23 main.go:83: [INFO] Database Lab Engine version: v3.4.0-rc.2-20230419-2038
2023/04/19 21:42:23 main.go:88: [ERROR] failed to create new Platform Client: platform.url in config must not be empty
2023/04/19 21:42:23 main.go:72: [INFO] If you have problems or questions, please contact Postgres.ai: https://postgres.ai/contact
Test result: passed
In this test, it was important to understand whether the functionality would be blocked with inactive billing (for example, due to an incorrect configuration).
There are additional comments.
@Adrinlol @NikolayS @akartasov
Failed to post telemetry request: failed to perform request: unsuccessful status given: 424. Once resolved, re-activate DLE
I think we need a more meaningful error message (at the top of the page) when the orgKey value is incorrect.
Something like:
"An incorrect organization key was provided. Once resolved, re-activate DLE"

- Owner:
@vitabaks – it's on me – I'll fix telemetry_usage() to do it.

- Owner:
@vitabaks fixed on staging/-dev (commit: https://gitlab.com/postgres-ai/platform/-/commit/0ec67c8ce7514500e80051660b00fa008cc9d1aa?merge_request_iid=240) – please test again

- Author:
root@vitaliy-dle-test:~# nano .dblab/engine/configs/server.yml
root@vitaliy-dle-test:~# grep orgKey .dblab/engine/configs/server.yml
orgKey: "pdNQAHWitjCwgkmhToece5Vp___"
root@vitaliy-dle-test:~# sudo docker restart dblab_server
dblab_server
- Failed to post telemetry request: failed to perform request: unsuccessful status given: 400. Once resolved, re-activate DLE
- Author:
@NikolayS as I understand it, the error text has not been corrected yet for the case of an incorrect organization key.

- Author:
@akartasov @NikolayS Is this the expected behavior, that DLE and the UI don't start unless the "platform.url" parameter is set? I think it would be convenient to see the corresponding error in the UI instead of not launching the container at all.
P.S. The same applies when the entire platform section is commented out.

- Maintainer:
@vitabaks Yes, DLE is expected not to start if platform.url is not set AND either of the keys (orgKey or accessToken) is defined.
If there is no Platform section at all, DLE will still start
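To restate that rule as a config sketch (my illustration of the behavior described above, not taken from an actual server.yml; values are placeholders):

# Fails to start: a key is defined while platform.url is commented out
platform:
  # url: "https://v2.postgres.ai/api/general"
  orgKey: "<org key>"

# Starts: the whole platform section is absent or fully commented out
# platform:
#   url: "https://v2.postgres.ai/api/general"
#   orgKey: "<org key>"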
- Author:
If org_key is specified incorrectly, I get an error:
Failed to post telemetry request: failed to perform request: unsuccessful status given: 400.

- Author:
"Yes, DLE is expected not to start if platform.url is not set AND either of the keys (orgKey or accessToken) is defined."
If possible, I think this should be changed so that the UI launches in any case and I, as a DLE user, can see an error message that makes it immediately clear in which part of the configuration the problem is.

- Author:
"If there is no Platform section at all, DLE will still start"
I checked it: I commented out the entire platform section, the UI started, and I see a quite informative error.
I would expect something similar when no url is given:
#platform:
#  # Platform API URL. To work with Postgres.ai SaaS, keep it default
#  # ("https://postgres.ai/api/general").
#  url: "https://v2.postgres.ai/api/general"
#  # Telemetry: anonymous statistics sent to Postgres.ai.
#  # Used to analyze DLE usage, it helps the DLE maintainers make decisions on product development.
#  # Please leave it enabled if possible – this will contribute to DLE development.
#  # The full list of data points being collected: https://postgres.ai/docs/database-lab/telemetry
#  enableTelemetry: true
#  # Project name
#  projectName: "vitaliy-dle-test"
#  # Organization key
#  orgKey: "pdNQAHWitjCwgkmhToece5Vp"
#  # Token for authorization in Platform API. This token can be obtained on
#  # the Postgres.ai Console: https://postgres.ai/console/YOUR_ORG_NAME/tokens
#  # This token needs to be kept in secret, known only to the administrator.
#  accessToken: "platform_access_token"
#
#  # Enable authorization with personal tokens of the organization's members.
#  # If false: all users must use "verificationToken" value for any API request
#  # If true: "verificationToken" is known only to admin, users use their own tokens,
#  # and any token can be revoked not affecting others
#  enablePersonalTokens: true
#  # CI Observer configuration.

root@vitaliy-dle-test:~# docker restart dblab_server
dblab_server
root@vitaliy-dle-test:~# docker ps
CONTAINER ID   IMAGE                                                                 COMMAND                  CREATED          STATUS                    PORTS                                                 NAMES
ce36ba97cf39   registry.gitlab.com/postgres-ai/database-lab/ce-ui:501-se-bug-fixes   "/docker-entrypoint.…"   27 seconds ago   Up 25 seconds             2346/tcp, 127.0.0.1:2346->80/tcp                      dblab_embedded_ui_ch0oo56krj3c738n1br0
6dda943f7fe5   postgresai/extended-postgres:15                                       "docker-entrypoint.s…"   9 minutes ago    Up 9 minutes              5432/tcp, 0.0.0.0:6001->6001/tcp, :::6001->6001/tcp   dblab_clone_6001
4ada13ca47c1   postgresai/extended-postgres:15                                       "docker-entrypoint.s…"   25 minutes ago   Up 25 minutes             5432/tcp, 0.0.0.0:6000->6000/tcp, :::6000->6000/tcp   dblab_clone_6000
f0b9a5a6794d   postgresai/netdata-for-dle:v1.37.1                                    "/usr/sbin/run.sh"       35 minutes ago   Up 35 minutes (healthy)                                                         netdata
1f03248a7420   postgresai/dblab-server:3.4.0-rc.2                                    "docker-entrypoint.s…"   36 minutes ago   Up 26 seconds             127.0.0.1:2345->2345/tcp                              dblab_server
- Author:
@Adrinlol the "Reload info" button is not active. Why?

- Maintainer:
Fixed in !725 (merged)
- Owner:
@akartasov @vitabaks I couldn't find skipStartRefresh in the list of changes here. Let's make sure we don't lose it in terms of testing, docs, etc.

- Maintainer:
@NikolayS it is here:
Clean up data during full data refresh in logical mode - !704 (merged)
Description: !704 (merged)
Docs: docs!556 (merged)
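For quick reference while testing, a sketch of how the new option might be set in server.yml; the exact placement inside the retrieval/refresh configuration is my assumption, so treat docs!556 and !704 as the authoritative source:

retrieval:
  refresh:
    # skip the full data refresh that normally runs at DLE startup (assumed key location)
    skipStartRefresh: true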
- Owner:
Thanks!

- Owner:
I adjusted its title so we don't forget about skipStartRefresh.
- Lasha Kakabadze created branch 501-se-bug-fixes to address this issue
- Lasha Kakabadze mentioned in merge request !725 (merged)
- Nikolay Samokhvalov changed the description
- Vitaliy Kukharik changed the description
- Author:
Platform UI and SE

1. (done) I think this part of the interface should be worked out in the near future. It should not be possible for SE to open these pages: right now there's just endless loading, since SE is not actually connected to the Platform. Related issue: #493 (closed)
2. (fixed) When I click on my second (inactive) organization, I get a white page: https://console-dev.postgres.ai/vitaliy-test2
3. (fixed) The billing page doesn't seem to work for my organization: https://console-dev.postgres.ai/vitaliy-test
4. (fixed) The billing page (for a new organization): as far as I understand, the billing page still has a design for EE.
5. (done) Add a note about connecting to the DLE under the snippet: "Note: Run the snippet above. Once its execution is finished, it will print details on how to connect to the DLE UI/API/CLI."
6. (done) Change the "Go back to the form" button to "Go to the list of instances". When I click the button, I want to go back to the list of DLE instances.
7. (done) If I specify more than 2 snapshots (default), we have to pass a zpool_datasets_number variable with the specified value (see the sketch after this list).
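A sketch of how that variable could be passed to the same playbook used above (illustrative values only; zpool_datasets_number is the one addition compared to the earlier invocations, and 5 matches the re-test further down):

ansible-playbook deploy_dle.yml --extra-vars \
  "provision='hetzner' \
  server_name='vitaliy-dle-test' \
  server_type='CCX21' \
  server_image='ubuntu-22.04' \
  server_location='hel1' \
  volume_size='100' \
  zpool_datasets_number='5' \
  dle_verification_token='<token>' \
  dle_platform_org_key='<org key>' \
  dle_platform_project_name='vitaliy-dle-test'"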
- Author:
TODO: describe how to connect to the UI (after deployment).
I think we should describe a basic example of connecting to the local UI after deployment, something like:

Open UI
After deployment, to connect to your DLE:
- First, set up SSH port forwarding for port 2346:
ssh -N -L 2346:127.0.0.1:2346 user@server-ip-address # if needed, use -i to specify the private key
- Now UI should be available at http://127.0.0.1:2346
- DLE verification token: tFDO3vkKg5nkFrnGvW1GdHByYOnyR871
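A quick way to verify the tunnel once it is up (my addition, not part of the proposed note; the ports come from the Ansible output above, while forwarding 2345 and using /healthz as the API health endpoint are assumptions):

ssh -N -L 2346:127.0.0.1:2346 -L 2345:127.0.0.1:2345 user@server-ip-address
# UI through the forwarded port
curl -sI http://127.0.0.1:2346 | head -n 1
# DLE API health check (assumed endpoint)
curl -s http://127.0.0.1:2345/healthz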
- Author:
When discussing with @NikolayS, we came to the conclusion that it is not necessary to describe the details of the UI connection, but to rely on the Ansible output, in which all of this is described by the end of the deployment.

- Author:
But it might be worth adding a comment:
"Note: Run the snippet above. Once its execution is finished, it will print details on how to connect to the DLE UI/API/CLI."
This text is added below the snippet.

- Author:
"I think this part of the interface should be worked out in the near future. It should not be possible for SE to open these pages."
For discussion: perhaps here, after clicking on an instance, there could be a page with a brief example of how to connect to DLE for SE, and any other useful information.

- Author:
@Adrinlol could you implement points 5 and 6? Thanks!
- Maintainer:
I completely missed those points, thank you for reminding me.
Will implement them ASAP.

- Maintainer:
Done!

- Author:
@Adrinlol Thank you!
P.S. It's a matter of taste, but maybe we could tweak the font of this note a little. For example, I like this option.

- Author:
@Adrinlol Could you implement step 1?
I think for now we can just disable the ability to open pages for SE, it's enough just to display the instance in the list. And from the items (on the right), leave only "remove"
- Maintainer:
@vitabaks sorry for the delay.
Just pushed the change for steps 1 and 7.

- Author:
Step 7 - tested; checked for all clouds.
result:
TASK [zpool : Create datasets] ********************************************************************************************************************************************************************************** changed: [root@198.199.75.182] => (item=dataset_1) changed: [root@198.199.75.182] => (item=dataset_2) changed: [root@198.199.75.182] => (item=dataset_3) changed: [root@198.199.75.182] => (item=dataset_4) changed: [root@198.199.75.182] => (item=dataset_5)
ZFS:

root@vitaliy-dle-do:~# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
dblab_pool  49.5G   294K  49.5G        -         -     0%     0%  1.00x    ONLINE  -
root@vitaliy-dle-do:~# zfs list
NAME                   USED  AVAIL     REFER  MOUNTPOINT
dblab_pool             294K  48.0G       25K  /var/lib/dblab/dblab_pool
dblab_pool/dataset_1    24K  48.0G       24K  /var/lib/dblab/dblab_pool/dataset_1
dblab_pool/dataset_2    24K  48.0G       24K  /var/lib/dblab/dblab_pool/dataset_2
dblab_pool/dataset_3    24K  48.0G       24K  /var/lib/dblab/dblab_pool/dataset_3
dblab_pool/dataset_4    24K  48.0G       24K  /var/lib/dblab/dblab_pool/dataset_4
dblab_pool/dataset_5    24K  48.0G       24K  /var/lib/dblab/dblab_pool/dataset_5
- Author:
Provision: GCP - invalid image (fixed)
TASK [provision : GCP: Create or modify a instance dle-test] *************************************************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": false, "msg": "GCP returned error: {'error': {'code': 400, 'message': \"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'ubuntu-2204-jammy-v20230214'. The URL is malformed.\", 'errors': [{'message': \"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'ubuntu-2204-jammy-v20230214'. The URL is malformed.\", 'domain': 'global', 'reason': 'invalid'}]}}", "request": {"body": "{\"kind\": \"compute#instance\", \"disks\": [{\"autoDelete\": true, \"boot\": true, \"deviceName\": \"dle-test-system\", \"initializeParams\": {\"diskName\": \"dle-test-system\", \"diskSizeGb\": 40, \"sourceImage\": \"ubuntu-2204-jammy-v20230214\"}, \"type\": \"\"}, {\"autoDelete\": true, \"deviceName\": \"dle-test-storage\", \"initializeParams\": {\"diskName\": \"dle-test-storage\", \"diskSizeGb\": 100}, \"type\": \"\"}], \"metadata\": {\"items\": [{\"key\": \"ssh-keys\", \"value\": \"root:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClY/l7OZCh4xS+JghNNGLHPvuro40vGYSf7+/DnGO9feCaLXYETsaLII6DV3yZ91hQEeMfduG+10bGtqq9i/wP4RyPRZyTv2DG723p+ohdL+K76xfJI/kUfqBQjlfRms9ZiSsSoHZxWKx2ktWGw4fkoohYVbZRM0AFCiYaTAh6sxhPIOY3JgnRksvzEdTS4j5uwLUux9urGcHmKbFz00GlbNiHITtTXtIubwFHyqug+EB/5QOjqydLjv+GP4UgZRIeGKZ9GWK14Ys+P3XpXAxkjdRkY/g4KsP8IHWT7Hv63znwIqd5ALCqQj0dhr1nJi2HXT+WZ+et2tN9OH4xzchJ dle_key_tmp\"}]}, \"machineType\": \"https://www.googleapis.com/compute/v1/projects/475944144529/zones/europe-west4-b/machineTypes/n2-standard-4\", \"name\": \"dle-test\", \"networkInterfaces\": [{\"accessConfigs\": [{\"name\": \"External NAT\", \"type\": \"ONE_TO_ONE_NAT\"}], \"network\": \"global/networks/default\"}]}", "method": "POST", "url": "https://compute.googleapis.com/compute/v1/projects/475944144529/zones/europe-west4-b/instances"}}
@NikolayS please change the image name:
ubuntu-2204-jammy-v20230214 -> projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts

- Owner:
@vitabaks this is fixed, please check

- Author:
Tested
result:
PLAY [Provision of cloud resources (virtual machine + disk) for DLE (on GCP)] *********************************************************************************************************************************** TASK [Gathering Facts] ****************************************************************************************************************************************************************************************** ok: [localhost] TASK [provision : Generate a new temporary ssh key to access the server when deploying DLE] ********************************************************************************************************************* changed: [localhost] TASK [provision : set_fact: ssh_key_name and ssh_key_content] *************************************************************************************************************************************************** ok: [localhost] TASK [provision : Make sure the python3-pip package are present on controlling host] **************************************************************************************************************************** ok: [localhost -> 127.0.0.1] TASK [provision : Install google-auth dependency on controlling host] ******************************************************************************************************************************************* ok: [localhost -> 127.0.0.1] TASK [provision : GCP: Gather information about project] ******************************************************************************************************************************************************** ok: [localhost] TASK [provision : GCP: Create or modify a instance vitally-dle-gcp] ********************************************************************************************************************************************* changed: [localhost] TASK [provision : Show GCP instance info] *********************************************************************************************************************************************************************** ok: [localhost] => { "msg": [ "instance ID is 48479171611347152", "instance Name is vitally-dle-gcp", "instance Image is ubuntu-2204-lts", "instance Type is n2-highmem-2", "Block Storage Size is 100 gigabytes", "Public IP is ********", "Private IP is 10.208.0.2" ] }
Thanks @NikolayS
Edited by Vitaliy Kukharik
- Vitaliy Kukharik changed the description
- Vitaliy Kukharik marked the checklist item (test 1, 2, 3) Enable Standard Edition - !693 (merged), !723 (merged) as completed
- Author Contributor
Add an ability to ignore errors that occurred during logical data dump/restore
Test result: not passed (upd: fixed)
DLE log
...
2023/04/21 19:47:32 dump.go:553: [DEBUG] Postgres has been started
2023/04/21 19:47:33 dump.go:474: [INFO] Running cleanup command: [rm -rf /var/lib/dblab/dblab_pool/dataset_1/dump/test2 /var/lib/dblab/dblab_pool/dataset_1/dump/postgres /var/lib/dblab/dblab_pool/dataset_1/dump/test /var/lib/dblab/dblab_pool/dataset_1/dump/test3]
2023/04/21 19:47:33 dump.go:503: [INFO] Running dump command: [pg_dump --create --host db.scshghslgzunteqlufuy.supabase.co --port 5432 --username postgres --dbname test3 --jobs 4 --format directory --file /var/lib/dblab/dblab_pool/dataset_1/dump/test3]
2023/04/21 19:47:43 dump.go:517: [INFO] Dumping job for the database "test3" has been finished
2023/04/21 19:47:43 dump.go:503: [INFO] Running dump command: [pg_dump --create --host db.scshghslgzunteqlufuy.supabase.co --port 5432 --username postgres --dbname test2 --jobs 4 --format directory --file /var/lib/dblab/dblab_pool/dataset_1/dump/test2]
2023/04/21 19:47:51 dump.go:517: [INFO] Dumping job for the database "test2" has been finished
2023/04/21 19:47:51 dump.go:503: [INFO] Running dump command: [pg_dump --create --host db.scshghslgzunteqlufuy.supabase.co --port 5432 --username postgres --dbname postgres --jobs 4 --format directory --file /var/lib/dblab/dblab_pool/dataset_1/dump/postgres]
2023/04/21 19:48:00 util.go:37: [DEBUG] Response: &{0xc00012aac0 {v3.4.0-rc.2-20230419-2038 standard 0xc0000e7180 2023-04-21 19:43:41 +0000 UTC 0xc0000e7181 0xc0000e7182} [{dblab_pool/dataset_1 zfs 0001-01-01 00:00:00 +0000 UTC empty [] {zfs 30685528064 30685299712 228352 24064 0 202752 1.06}} {dblab_pool/dataset_2 zfs 0001-01-01 00:00:00 +0000 UTC refreshing [] {zfs 30685528064 30685299712 228352 12288 0 202752 1.06}}] {0 0 []} {logical refreshing 2023-04-21 19:47:09 +0000 UTC 2023-04-21 20:00:00 +0000 UTC map[] <nil>} {postgresai/extended-postgres:15 map[shm-size:1gb]} 0xc000590cc0}
2023/04/21 19:48:00 mode_local.go:309: [INFO] no snapshots for pool dblab_pool/dataset_2
2023/04/21 19:48:00 util.go:37: [DEBUG] Response: []
2023/04/21 19:48:17 dump.go:517: [INFO] Dumping job for the database "postgres" has been finished
2023/04/21 19:48:17 dump.go:503: [INFO] Running dump command: [pg_dump --create --host db.scshghslgzunteqlufuy.supabase.co --port 5432 --username postgres --dbname test --jobs 4 --format directory --file /var/lib/dblab/dblab_pool/dataset_1/dump/test]
2023/04/21 19:48:36 dump.go:517: [INFO] Dumping job for the database "test" has been finished
2023/04/21 19:48:36 tools.go:455: [INFO] Removing container ID: 1221f906d13d62809c0905681f85333e63693d0fb510469b57abd78f25e03d7a
...
2023/04/21 19:49:28 dump.go:553: [DEBUG] Postgres has been started
2023/04/21 19:49:28 restore.go:478: [INFO] TOC file not found: stat /var/lib/dblab/dblab_pool/dataset_1/dump/toc.dat: no such file or directory.
Skip directory: /var/lib/dblab/dblab_pool/dataset_1/dump
2023/04/21 19:49:28 restore.go:429: [INFO] Directory dump not found in "/var/lib/dblab/dblab_pool/dataset_1/dump"
2023/04/21 19:49:28 restore.go:437: [DEBUG] Explore: postgres
2023/04/21 19:49:28 restore.go:482: [INFO] TOC file has been found: "/var/lib/dblab/dblab_pool/dataset_1/dump/postgres/toc.dat"
2023/04/21 19:49:28 restore.go:365: [INFO] Extract database name: pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/postgres | grep dbname: | tr -d '[;]'
2023/04/21 19:49:28 restore.go:450: [INFO] Found the directory dump: postgres
2023/04/21 19:49:28 restore.go:437: [DEBUG] Explore: test
2023/04/21 19:49:28 restore.go:482: [INFO] TOC file has been found: "/var/lib/dblab/dblab_pool/dataset_1/dump/test/toc.dat"
2023/04/21 19:49:28 restore.go:365: [INFO] Extract database name: pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test | grep dbname: | tr -d '[;]'
2023/04/21 19:49:28 restore.go:450: [INFO] Found the directory dump: test
2023/04/21 19:49:28 restore.go:437: [DEBUG] Explore: test2
2023/04/21 19:49:28 restore.go:482: [INFO] TOC file has been found: "/var/lib/dblab/dblab_pool/dataset_1/dump/test2/toc.dat"
2023/04/21 19:49:28 restore.go:365: [INFO] Extract database name: pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test2 | grep dbname: | tr -d '[;]'
2023/04/21 19:49:28 restore.go:450: [INFO] Found the directory dump: test2
2023/04/21 19:49:28 restore.go:437: [DEBUG] Explore: test3
2023/04/21 19:49:28 restore.go:482: [INFO] TOC file has been found: "/var/lib/dblab/dblab_pool/dataset_1/dump/test3/toc.dat"
2023/04/21 19:49:28 restore.go:365: [INFO] Extract database name: pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test3 | grep dbname: | tr -d '[;]'
2023/04/21 19:49:28 restore.go:450: [INFO] Found the directory dump: test3
2023/04/21 19:49:28 restore.go:260: [DEBUG] Database List to restore: map[postgres:{[] [] directory postgres} test:{[] [] directory test} test2:{[] [] directory test2} test3:{[] [] directory test3}]
2023/04/21 19:49:28 restore.go:505: [INFO] Running restore command for test3 [pg_restore --username postgres --dbname postgres --create --jobs 4 /var/lib/dblab/dblab_pool/dataset_1/dump/test3 --no-tablespaces --no-privileges --no-owner --exit-on-error]
2023/04/21 19:49:29 restore.go:514: [ERROR] Restore command failed: pg_restore: error: could not execute query: ERROR: extension "pg_graphql" is not available
2023/04/21 19:49:29 logs.go:171: [DEBUG] Collecting postgres logs from container dblab_lr_ch1eeb40q69s73cfdh00 /var/lib/dblab/dblab_pool/dataset_2/data
2023/04/21 19:49:29 tools.go:399: [INFO] Container logs:
2023/04/21 19:49:29 tools.go:426: [INFO] Postgres logs are not present here; to troubleshoot, check (/root/.dblab/engine/logs) on DLE machine.
2023/04/21 19:49:29 tools.go:455: [INFO] Removing container ID: 17a981d40b2415e67218e80217da64967f52a7b159c449b459e6d8bd0c6a29e2
2023/04/21 19:49:59 tools.go:461: [INFO] Container "17a981d40b2415e67218e80217da64967f52a7b159c449b459e6d8bd0c6a29e2" has been stopped
2023/04/21 19:49:59 tools.go:472: [INFO] Container "17a981d40b2415e67218e80217da64967f52a7b159c449b459e6d8bd0c6a29e2" has been removed
2023/04/21 19:49:59 config.go:91: [ERROR] failed to refresh data: failed to restore a database: failed to exec restore command: exit code: 1.
Output: pg_restore: error: could not execute query: ERROR: extension "pg_graphql" is not available
2023/04/21 19:50:00 util.go:37: [DEBUG] Response: &{0xc0001baba0 {v3.4.0-rc.2-20230419-2038 standard 0xc000478620 2023-04-21 19:43:41 +0000 UTC 0xc000478621 0xc000478622} [{dblab_pool/dataset_1 zfs 0001-01-01 00:00:00 +0000 UTC empty [] {zfs 30685528064 30676452864 9075200 5949440 0 9049600 4.54}} {dblab_pool/dataset_2 zfs 0001-01-01 00:00:00 +0000 UTC empty [] {zfs 30685528064 30676452864 9075200 34007040 0 9049600 4.54}}] {0 0 []} {logical failed 2023-04-21 19:47:09 +0000 UTC 2023-04-21 20:00:00 +0000 UTC map[refresh_failed:{error failed to restore a database: failed to exec restore command: exit code: 1. Output: pg_restore: error: could not execute query: ERROR: extension "pg_graphql" is not available
2023/04/21 19:50:00 util.go:37: [DEBUG] Response: []
- "--exit-on-error" option in pg_restore command
- not expected behavior
DLE config:
spec:
  logicalDump:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      source:
        type: remote
        connection:
          dbname: postgres
          host: db.scshghslgzunteqlufuy.supabase.co
          port: 5432
          username: postgres
          password: postgres-pass
      databases: {}
      parallelJobs: 4
      ignoreErrors: false
      customOptions: []
  logicalRestore:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      parallelJobs: 4
      ignoreErrors: false
      !!merge <<: *db_configs
      queryPreprocessing:
        queryPath: ""
        maxParallelWorkers: 2
        inline: ""
      customOptions:
        - --no-tablespaces
        - --no-privileges
        - --no-owner
        - --exit-on-error
- ignoreErrors: false, even though the "ignore errors" option was selected in the UI
- not expected behavior
root@dle-test:~# docker ps
CONTAINER ID   IMAGE                                                                 COMMAND                  CREATED          STATUS                    PORTS                              NAMES
d07518d9b4ff   registry.gitlab.com/postgres-ai/database-lab/ce-ui:501-se-bug-fixes   "/docker-entrypoint.…"   10 minutes ago   Up 10 minutes             2346/tcp, 127.0.0.1:2346->80/tcp   dblab_embedded_ui_ch1eeb40q69s73cfdh00
76e42eb27ce2   postgresai/netdata-for-dle:v1.37.1                                    "/usr/sbin/run.sh"       10 minutes ago   Up 10 minutes (healthy)                                      netdata
8ff524664120   postgresai/dblab-server:3.4.0-rc.2
Edited by Vitaliy Kukharik - Author Contributor
@Adrinlol please check
I'm sure I checked the boxes, but the configuration has not changed; when I visit the configuration page again, I see that they are not marked.
- Author Contributor
Re-try
result:
spec:
  logicalDump:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      source:
        type: remote
        connection:
          dbname: postgres
          host: db.scshghslgzunteqlufuy.supabase.co
          port: 5432
          username: postgres
          password: postgres-pass
      databases: {}
      parallelJobs: 4
      ignoreErrors: false
      customOptions: []
  logicalRestore:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      parallelJobs: 4
      ignoreErrors: false
Edited by Vitaliy Kukharik - Maintainer
- Author Contributor
cc @akartasov
- Maintainer
@vitabaks these properties are not supported for configuration via the UI; only editing the config file is available.
- Author Contributor
In this case, the test cannot be considered passed: the UI offers a checkbox to ignore errors, but in fact it does not change the DLE configuration.
@akartasov @Adrinlol Perhaps we could refine this feature? I think this is a pretty important parameter.
cc @NikolayS
- Maintainer
@vitabaks Fixed: !730 (merged)
- Author Contributor
Re-test
- DLE image:
registry.gitlab.com/postgres-ai/database-lab/dblab-server:ignore-errors-ui-config
- UI image:
postgresai/ce-ui:v3.4.0-rc.3
Test result: passed (with a comment, see below)
DLE config
spec:
  logicalDump:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      source:
        type: remote
        connection:
          dbname: postgres
          host: db.scshghslgzunteqlufuy.supabase.co
          port: 5432
          username: postgres
          password: postgres-pass
      databases: {}
      parallelJobs: 4
      ignoreErrors: true
      customOptions: []
  logicalRestore:
    options:
      !!merge <<: *db_container
      dumpLocation: "/var/lib/dblab/dblab_pool/dataset_1/dump"
      parallelJobs: 4
      ignoreErrors: true
      !!merge <<: *db_configs
      queryPreprocessing:
        queryPath: ""
        maxParallelWorkers: 2
        inline: ""
      customOptions:
        - --no-tablespaces
        - --no-privileges
        - --no-owner
        - --exit-on-error
- ignoreErrors: true
DLE log
2023/04/27 18:04:07 restore.go:505: [INFO] Running restore command for postgres [pg_restore --username postgres --dbname postgres --jobs 4 /var/lib/dblab/dblab_pool/dataset_1/dump/postgres --no-tablespaces --no-privileges --no-owner --exit-on-error]
2023/04/27 18:04:07 restore.go:520: [DEBUG] Output of the restore command: pg_restore: error: could not execute query: ERROR: extension "pgsodium" is not available
2023/04/27 18:04:07 restore.go:651: [DEBUG] Running a restore metadata command: [sh -c pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/postgres | head -n 10]
2023/04/27 18:04:07 restore.go:642: [INFO] Data state at: 20230427180240
2023/04/27 18:04:07 restore.go:505: [INFO] Running restore command for test [pg_restore --username postgres --dbname postgres --create --jobs 4 /var/lib/dblab/dblab_pool/dataset_1/dump/test --no-tablespaces --no-privileges --no-owner --exit-on-error]
2023/04/27 18:04:13 restore.go:651: [DEBUG] Running a restore metadata command: [sh -c pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test | head -n 10]
2023/04/27 18:04:13 restore.go:642: [INFO] Data state at: 20230427180041
2023/04/27 18:04:13 restore.go:505: [INFO] Running restore command for test2 [pg_restore --username postgres --dbname postgres --create --jobs 4 /var/lib/dblab/dblab_pool/dataset_1/dump/test2 --no-tablespaces --no-privileges --no-owner --exit-on-error]
2023/04/27 18:04:13 restore.go:651: [DEBUG] Running a restore metadata command: [sh -c pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test2 | head -n 10]
2023/04/27 18:04:14 restore.go:642: [INFO] Data state at: 20230427180229
2023/04/27 18:04:14 restore.go:505: [INFO] Running restore command for test3 [pg_restore --username postgres --dbname postgres --create --jobs 4 /var/lib/dblab/dblab_pool/dataset_1/dump/test3 --no-tablespaces --no-privileges --no-owner --exit-on-error]
2023/04/27 18:04:14 restore.go:520: [DEBUG] Output of the restore command: pg_restore: error: could not execute query: ERROR: extension "pg_graphql" is not available
2023/04/27 18:04:14 restore.go:651: [DEBUG] Running a restore metadata command: [sh -c pg_restore --list /var/lib/dblab/dblab_pool/dataset_1/dump/test3 | head -n 10]
2023/04/27 18:04:14 restore.go:642: [INFO] Data state at: 20230427180218
2023/04/27 18:04:14 restore.go:273: [INFO] Running analyze command: [vacuumdb --analyze --jobs 4 --username postgres --all]
2023/04/27 18:04:16 tools.go:285: [INFO] Run checkpoint command [psql -U postgres -d postgres -XAtc checkpoint]
2023/04/27 18:04:20 tools.go:297: [INFO] Checkpoint result: CHECKPOINT
2023/04/27 18:04:20 tools.go:312: [INFO] Stopping PostgreSQL instance [/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/dblab/dblab_pool/dataset_2/data -w --timeout 600 stop]
2023/04/27 18:04:20 restore.go:290: [INFO] Restoring job has been finished
- the "exit-on-error" option is present
- not expected behavior
- the snapshot was created
Check clone:
root@vitaliy-dle-test:~# psql "host=localhost port=6000 user=test dbname=postgres"
Password for user test:
psql (15.2 (Ubuntu 15.2-1.pgdg22.04+1))
Type "help" for help.

postgres=# \l+
                                                                   List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    | ICU Locale | Locale Provider |   Access privileges   |  Size   | Tablespace |                 Description
-----------+----------+----------+------------+------------+------------+-----------------+-----------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |            | libc            |                       | 7621 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 |            | libc            | =c/postgres          +| 7297 kB | pg_default | unmodifiable empty database
           |          |          |            |            |            |                 | postgres=CTc/postgres |         |            |
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 |            | libc            | =c/postgres          +| 7473 kB | pg_default | default template for new databases
           |          |          |            |            |            |                 | postgres=CTc/postgres |         |            |
 test      | postgres | UTF8     | C.UTF-8    | C.UTF-8    |            | libc            |                       | 307 MB  | pg_default |
 test2     | postgres | UTF8     | C.UTF-8    | C.UTF-8    |            | libc            |                       | 7465 kB | pg_default |
 test3     | postgres | UTF8     | C.UTF-8    | C.UTF-8    |            | libc            |                       | 7465 kB | pg_default |
(6 rows)

postgres=# \c test
You are now connected to database "test" as user "test".
test=# \dt
          List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(4 rows)

test=# \c postgres
You are now connected to database "postgres" as user "test".
Edited by Vitaliy Kukharik
- Author Contributor
the "exit-on-error" option is present
Could you remove this option from the pg_restore command when the "ignore errors" option is set?
This is needed so that the restore process is not interrupted at the first error but continues restoring data; that way there is a better chance of restoring as much data as possible.
For example, if an extension is missing from the Docker image but the data does not depend on that extension, we would still get the restored data (just without that extension in the database). A rough sketch of the idea is below.
cc @NikolayS
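A minimal sketch of the suggested behavior (the helper name and signature are made up for illustration; this is not the actual restore.go code): when ignoreErrors is enabled, a leftover --exit-on-error in customOptions would simply be dropped before the pg_restore command is assembled.

package main

import (
	"fmt"
	"strings"
)

// buildRestoreOptions is a hypothetical helper illustrating the proposal:
// if ignoreErrors is enabled, drop a user-supplied --exit-on-error from
// customOptions so that pg_restore keeps going after the first failed
// statement instead of aborting the whole restore.
func buildRestoreOptions(ignoreErrors bool, customOptions []string) []string {
	if !ignoreErrors {
		return customOptions
	}
	filtered := make([]string, 0, len(customOptions))
	for _, opt := range customOptions {
		// Skip the conflicting flag; every other option is passed through unchanged.
		if strings.TrimSpace(opt) == "--exit-on-error" {
			continue
		}
		filtered = append(filtered, opt)
	}
	return filtered
}

func main() {
	opts := []string{"--no-tablespaces", "--no-privileges", "--no-owner", "--exit-on-error"}
	fmt.Println(buildRestoreOptions(true, opts))  // [--no-tablespaces --no-privileges --no-owner]
	fmt.Println(buildRestoreOptions(false, opts)) // options unchanged, --exit-on-error kept
}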
- Maintainer
@vitabaks The code doesn't contain this option. Remove --exit-on-error from your config file.
- Author Contributor
Can't we handle this case and not add the option, even if someone forgot to remove it from the configuration, when "ignore errors" is chosen?
- Maintainer
Why not vice versa? Why should a boolean flag in the configuration take precedence over a user-entered option (--exit-on-error)?
- Author Contributor
Because it's logical: if we chose to ignore errors, then the --exit-on-error parameter is superfluous.
- Vitaliy Kukharik changed the description