
Implement CHIP-2023-04 Adaptive Blocksize Limit Algorithm

Calin Culianu requested to merge cculianu/bitcoin-cash-node:abla into master

Depends on: !1783 (merged) !1786 (merged) !1787 (merged) !1788 (merged) !1789 (merged) !1790 (merged) !1793 (merged) !1798 (merged)


Summary

Implements the Adaptive Blocksize Limit Algorithm for Bitcoin Cash.

Implementation Notes

  • The LevelDB CBlockIndex data format has been slightly modified: ABLA state info (if any) is now appended to the end of each block index record. If the data is missing (say, because users downgraded the node with the upgrade activated, then upgraded back to latest again), the node makes an effort to reconstruct it from the block files (this may fail in corner cases on pruned nodes that are missing old blocks after the ABLA activation block). Not a huge deal in practice.
    • This data format change is backward compatible with previous versions of this software.
  • Block size checks now occur later in the pipeline, when connecting the block, since that is the only point at which we are 100% sure the previous block's state is fully resolved (the block size limit now depends on the previous block).
  • Looser size checks are done in CheckBlock() (2GB limit).
  • Stricter checks are done in the net code: it applies a "lookahead guess" check to drop block messages much larger than any block the limit could reach within the 1024-block download window. This is done to avoid DoS corner cases.
    • The reason CheckBlock() had to be "looser" and only enforce a 2GB limit is that this function is called from many code paths, some of which check blocks out of order -- e.g. when loading external block files or doing a -reindex.
    • Net code can afford to be "tighter" with its checks since the node never asks for blocks >1024 ahead of the current tip.
  • The -excessiveblocksize configuration value has different semantics pre-upgrade vs post-upgrade. After ABLA activates, it acts as a floor: the base "minimum" max block size.
    • The rationale behind this semantic change is that we don't want users who happen to have specified excessiveblocksize=32000000 in their conf files to fall out of consensus.
    • It is proposed that, after this MR is accepted, we create a new arg, e.g. -fixedblocksize=, with the semantics of the pre-activation -excessiveblocksize=: it would peg the block size to a fixed value, ignoring ABLA (this might be useful for some tests, perhaps).
  • The ABLA algorithm for BCH is temporarily set to cap the max block size at 2GB. This is due to limitations in the p2p protocol (as well as the block data file format).
    • For this limitation to be lifted we need to do 3 things: (1) drop support for 32-bit architectures in BCH, (2) fix the p2p protocol to allow messages larger than a uint32_t can describe, and (3) fix the block data file format & leveldb block index format in BCHN to not use uint32_t for their size fields.
    • Even with this 2GB cap, 32-bit nodes might still fail someday if blocks exceed 1GiB: blocks can take 2x-3x as much memory when deserialized as when serialized, and 32-bit machines often cannot address more than 2GiB in a userspace process (and can never address more than 4GiB). This means we really do need to remove 32-bit support someday if we want to scale BCH to >1GiB blocks.

Test Plan

  • ninja all check-all
  • ninja check-functional-longeronly
  • ninja check-bitcoin-upgrade-activated check-functional-upgrade-activated check-functional-upgrade-activated-longeronly
  • After merge, but before final release: Run this on chipnet, use/abuse it.