# Logical, pool rotation – unblock the refresh process

## Goal

The DLE admin should be able to choose one of the following policies for the refresh process when all pools are busy with clones:
- (default) "skip" – do nothing if all pools are busy (have clones): don't perform the refresh, emit an error to logs / monitoring.
- "force-destroy" – force-destroy the existing clones on the active pool candidate. Users have to be notified about this (this can be implemented in another issue if that makes more sense).
- "force-reset" – force-reset the clones of the active pool candidate, switching them to the newest available database version (it will be the "previous" one once the refresh is complete). The delta is lost. Optionally, users are notified about this.
## TODO / How to implement

Add a new parameter `unblockingPolicy` to the `refresh` section:

```yaml
retrieval:
  refresh:
    timetable: "0 0 * * 1"
    unblockingPolicy: "skip"  # Available options: "skip", "force-destroy", "force-reset"
```
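To fail fast on a typo in the YAML instead of silently falling back, the new parameter could be validated at config load time. A minimal Go sketch; the constant names and the `validateUnblockingPolicy` helper are hypothetical, not the actual DLE code:

```go
package main

import "fmt"

// Allowed values for the new unblockingPolicy parameter
// (illustrative constants, not the actual DLE identifiers).
const (
	policySkip         = "skip"
	policyForceDestroy = "force-destroy"
	policyForceReset   = "force-reset"
)

// validateUnblockingPolicy rejects unknown policy values at config
// load time, so misconfiguration surfaces immediately.
func validateUnblockingPolicy(p string) error {
	switch p {
	case policySkip, policyForceDestroy, policyForceReset:
		return nil
	default:
		return fmt.Errorf("unknown unblockingPolicy: %q", p)
	}
}

func main() {
	fmt.Println(validateUnblockingPolicy("skip"))       // accepted
	fmt.Println(validateUnblockingPolicy("force-drop")) // rejected
}
```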
Update the logic in the `retrieval` component, function `preparePoolToRefresh` (https://gitlab.com/postgres-ai/database-lab/-/blob/v3.1.0/engine/internal/retrieval/retrieval.go).

There is no option to reset the clone state using `pool.FSManager`, as this process requires additional actions to adjust clone data. A possible way to reset a clone to the latest snapshot is to inject the `Cloning` service into the `Retrieval` service; this approach may require significant refactoring. Also, take into account the issue about the global DLE redesign: #362
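One way to limit the refactoring surface might be to inject a narrow interface rather than the whole `Cloning` service. A sketch under that assumption; the interface name and the fake implementation are hypothetical, only the `DestroyClone`/`ResetClone` method names come from the issue text:

```go
package main

import "fmt"

// cloneOperator is a hypothetical narrow interface that the Cloning
// service could satisfy. Depending on it (instead of the concrete
// Cloning type) avoids a hard Retrieval -> Cloning dependency.
type cloneOperator interface {
	DestroyClone(cloneID string) error
	ResetClone(cloneID string) error
}

// retrieval holds the injected dependency.
type retrieval struct {
	clones cloneOperator
}

// fakeCloning is a stand-in implementation used here only to show
// that the dependency can be swapped (e.g. for tests).
type fakeCloning struct {
	destroyed, reset []string
}

func (f *fakeCloning) DestroyClone(id string) error {
	f.destroyed = append(f.destroyed, id)
	return nil
}

func (f *fakeCloning) ResetClone(id string) error {
	f.reset = append(f.reset, id)
	return nil
}

func main() {
	r := retrieval{clones: &fakeCloning{}}
	_ = r.clones.DestroyClone("clone-1")
	fmt.Println("cloning dependency injected via interface")
}
```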
- "skip" - the default option without changes in the current logic - skip refreshing.
- "force-destroy" - destroy all clones (use
DestroyClone()
fromCloning
), check the existing clone list again and run the pool refresh process. - "force-reset" - reset all clones to new snapshots (use
ResetClone ()
fromCloning
), check the existing clone list again and run the pool refresh process.
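The three branches above could be dispatched in one place before the refresh starts. A minimal sketch, assuming an `unblockPool` helper and plain callbacks in place of the real `Cloning` methods (all names are illustrative; the real logic would live in `preparePoolToRefresh`):

```go
package main

import (
	"errors"
	"fmt"
)

// unblockPool applies the configured unblockingPolicy to the clones
// of the active pool candidate before a refresh. destroy and reset
// stand in for the Cloning service's DestroyClone/ResetClone.
func unblockPool(policy string, cloneIDs []string,
	destroy, reset func(cloneID string) error) error {
	switch policy {
	case "skip":
		// Current behavior: refuse to refresh while clones exist.
		if len(cloneIDs) > 0 {
			return errors.New("pool is busy with clones; skipping refresh")
		}
	case "force-destroy":
		for _, id := range cloneIDs {
			if err := destroy(id); err != nil {
				return err
			}
		}
	case "force-reset":
		for _, id := range cloneIDs {
			if err := reset(id); err != nil {
				return err
			}
		}
	default:
		return fmt.Errorf("unknown unblockingPolicy: %q", policy)
	}
	return nil
}

func main() {
	noop := func(string) error { return nil }
	fmt.Println(unblockPool("skip", []string{"c1"}, noop, noop))
}
```

After `unblockPool` returns nil, the caller would re-check the clone list and proceed with the pool refresh, as described above.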
## Acceptance criteria

- There are options available to unblock the refresh process and prevent outdated snapshots.
- The DLE admin is warned that current clone changes will be lost.