Storage obligations are not removed cleanly from host.db
Created by: nielscastien
New storage obligations (so's) are created in
storageobligations.go whenever a new contract is created or an existing contract is renewed. This is a five-step process:
- Add the SectorRoots to the host (for new contracts there are none yet)
- Add the storage obligation to the database
- Update the FinancialMetrics of the host
- Submit the transaction to the transaction pool
- Queue the ActionItem in bucketActionItems
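The five steps above can be sketched as follows. This is a minimal illustration of the ordering problem, not the actual siad implementation; all type and field names are invented for the example:

```go
package main

import (
	"errors"
	"fmt"
)

// storageObligation and host are simplified stand-ins for the real types.
type storageObligation struct {
	id string
}

type host struct {
	sectorsAdded  map[string]bool              // step 1: sector roots
	db            map[string]*storageObligation // step 2: host.db
	contractCount int                          // step 3: financial metrics
	actionItems   map[string]bool              // step 5: bucketActionItems
}

// addStorageObligation mimics the five-step flow; failAt simulates an
// error occurring after the given step, causing an early return.
func (h *host) addStorageObligation(so *storageObligation, failAt int) error {
	// Step 1: add the sector roots to the host.
	h.sectorsAdded[so.id] = true
	if failAt == 1 {
		return errors.New("failed after adding sector roots")
	}
	// Step 2: persist the storage obligation in the database.
	h.db[so.id] = so
	if failAt == 2 {
		return errors.New("failed after database insert")
	}
	// Step 3: update the host's financial metrics.
	h.contractCount++
	if failAt == 3 {
		return errors.New("failed after updating financial metrics")
	}
	// Step 4: submit the transaction to the transaction pool.
	if failAt == 4 {
		return errors.New("failed after tpool submit")
	}
	// Step 5: queue the action item; only now is the so tracked.
	h.actionItems[so.id] = true
	return nil
}

func main() {
	h := &host{
		sectorsAdded: map[string]bool{},
		db:           map[string]*storageObligation{},
		actionItems:  map[string]bool{},
	}
	so := &storageObligation{id: "so1"}
	err := h.addStorageObligation(so, 4) // error between steps 4 and 5
	// The so is now in the database and counted in the metrics, but it
	// will never be revisited because it never reached bucketActionItems.
	fmt.Println(err != nil, h.db["so1"] != nil, h.actionItems["so1"])
	// → true true false
}
```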
Only if the so makes it into the host's bucketActionItems is it tracked in the future. Without an entry in bucketActionItems, the so will never be updated or removed from the database.
An error between any of these steps causes an early return from the function. This leaves the host.db in an inconsistent state: the database has been updated and sectors have been added, but the so never makes it into bucketActionItems.
The big increase in contract count (and locked collateral) that some hosts experience occurs because
managedAddStorageObligation is called by
managedFinalizeContract in a loop. Whenever there is an error, the financial metrics of the host are updated 6 times.
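One way to read this is that the metrics update (step 3) is applied on every attempt of the retry loop but is never rolled back when a later step fails. A hypothetical sketch, with invented names, of how that inflates the counters:

```go
package main

import (
	"errors"
	"fmt"
)

// metrics stands in for the host's FinancialMetrics; units are illustrative.
type metrics struct {
	contractCount    int
	lockedCollateral int
}

// addAttempt mimics one pass through the add flow: the metrics are bumped
// unconditionally, then a later step may fail without rolling them back.
func addAttempt(m *metrics, attempt int) error {
	m.contractCount++        // step 3: applied on every attempt
	m.lockedCollateral += 10 // illustrative collateral per contract
	if attempt < 6 {
		return errors.New("later step failed; metrics not rolled back")
	}
	return nil
}

func main() {
	var m metrics
	// A caller retrying in a loop, as managedFinalizeContract does.
	for attempt := 1; attempt <= 6; attempt++ {
		if err := addAttempt(&m, attempt); err == nil {
			break
		}
	}
	// One contract finally succeeded, but the metrics count six.
	fmt.Println(m.contractCount, m.lockedCollateral)
	// → 6 60
}
```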
My own investigation of the
host.db (the host has been running for approx. 10 months) reveals that more than 20% of the so's with status
obligationUnresolved in the database never made it into
bucketActionItems; i.e., for these items
so.proofDeadline() is smaller than the current block height. These so's still count as active contracts, locked/risked collateral, and potential revenue, leading to several other issues:
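The stranded condition described above (unresolved, no action item, proof deadline already passed) can be expressed as a simple scan. This is a sketch with invented types, not the actual host.db schema:

```go
package main

import "fmt"

// so is a simplified stand-in for a stored storage obligation.
type so struct {
	id            string
	resolved      bool   // status other than obligationUnresolved
	hasActionItem bool   // whether an entry exists in bucketActionItems
	proofDeadline uint64 // block height by which a storage proof was due
}

// strandedObligations returns the ids of obligations that can never be
// resolved: unresolved, untracked, and past their proof deadline.
func strandedObligations(all []so, blockHeight uint64) []string {
	var stranded []string
	for _, o := range all {
		if !o.resolved && !o.hasActionItem && o.proofDeadline < blockHeight {
			stranded = append(stranded, o.id)
		}
	}
	return stranded
}

func main() {
	obligations := []so{
		{id: "a", proofDeadline: 100, hasActionItem: true},  // tracked, fine
		{id: "b", proofDeadline: 100},                       // stranded
		{id: "c", proofDeadline: 5000, hasActionItem: true}, // still pending
	}
	fmt.Println(strandedObligations(obligations, 1000))
	// → [b]
}
```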
- it is almost impossible to do a clean shutdown of the host, because the contract count and locked collateral will never reach zero.
- locked collateral is used in other functions to decide whether the host can accept new contracts, so the host misses opportunities when that locked collateral is based on 'rejected' so's.
- the values for locked collateral, risked collateral, and potential revenue are far off, making it impossible for the host to tune its settings.
Because this inconsistent state is persisted in host.db, there is currently no way to correct it, e.g. via a restart of the daemon.