
Removed transaction INV broadcast delay and rate limiter

Summary

This commit removes the CLI args -txbroadcastinterval and -txbroadcastrate.

It also removes the code corresponding to these options, namely the logic controlling the random delay between receiving a transaction and broadcasting it onward to other peers.

The transaction broadcast rate limiter is also removed (it capped relay at roughly 7 tps * blocksize_MB, i.e. ~224 tps on mainnet with the default 32 MB block size).

This partially reverts !746 (merged), as well as ancient Core commits that added this facility, such as 5400ef6b.

Note that other INV types have never had either a delay or a rate limit: DSProofs and blocks are not rate limited and are sent immediately. As such, transaction INV messages (~32-37 bytes per INV!) can also be sent instantly, free of rate limiting or Poisson delay.

Closes #507.

Benefits

  • Better 0-conf security. If all nodes upgrade to a version containing this commit, double-spenders have even less time to do their evil.
  • Better broadcast reliability, especially under high-load scenarios (such as Cauldron and other txn-heavy businesses).
  • Less likely to hit orphan pool limits when relaying long unconfirmed chains, if txns go out to peers as quickly as they come in.
  • Reduced complexity in the codebase.
  • Restores txns as first-class citizens of the p2p network, up there with blocks, dsproofs, and other assorted messages which suffer from no rate limiting or delays!
  • BCHN was the only node software that had this txn delay facility. So mempool views between various implementations are more likely to agree now.

Drawbacks

  • I can't think of any. Honestly.
  • Some have claimed that this facility helped to "batch INVs" and that it "reduces bandwidth dramatically".
    • I question that notion. INV messages are 37 bytes, of which 32 are payload and 5 are fixed overhead.
    • So yes, by adding extra delays you may force batching in some circumstances.
    • You end up sending 5 bytes less per INV if batched versus 1-off.
    • However, removing this delay does not mean we won't batch anymore!
    • Batching will still happen even with this change; just only under circumstances of heavier txn volume.
    • For low volume, we prefer to send right away anyway rather than wait for a "batch" that may never materialize.
      • Note that if txn volume is low, you just introduced a delay to txn relay with no benefit. You end up sending 1-offs to peers anyway (+5 bytes for the message envelope); your having waited to batch did nothing -- it was all cost, no benefit.

More Background

For BCH, our ultimate goal is very high tps throughput. As such, rate limiting or otherwise delaying txn relays seems in conflict with our philosophy as a chain.

0-conf security also benefits from txns relaying quickly across the network, and the Poisson delays seem only to make double-spending a mempool txn easier to pull off.

This facility was added by Core to reduce potential fingerprinting of nodes, and to presumably protect against some sort of imagined flooding.

However, most people no longer run their node as a personal wallet these days. Many use wallet servers with middleware such as Fulcrum or Rostrum, which provides very good anonymity.

Also note that a determined flooder can simply flood individual nodes with INVs anyway, even with this facility in place. Nodes can simply disconnect peers that send them junk.

And since transactions that validate are the only things that get relayed, they are not junk, and should be relayed post haste.

Thus, "flooding with INVs" is not really any kind of a big deal. INVs are tiny (32 bytes of payload each), and sending an entire mempool's worth of legitimate INVs (120k of them) is under 4 MB -- and worst-case you do it only once per peer, ever. This is no worse than sending smallish blocks to peers (which we do all the time for IBD!).

At any rate, this feature has been nuked by this MR. Many original XT and Classic developers have been recommending we remove it for years now.

Test Plan

  • ninja all check-all
  • Run a mainnet node (pruned or not) with: -debug=mempool -debug=net -logtimemicros
    • Observe the node's log: as it receives txns from peers, it should announce them to its other peers as quickly as possible via INV messages.
    • Compare this to the delay observable before this MR, where a node sees an INV, fetches the txn, then waits a random amount of time (around the 500 msec mark) before relaying.
    • In the new code it should wait at most ~100 msec (due to the way the network loop works), but it is often quicker than that.

Sample output to look for:

2024-04-15T16:49:13.074387Z [msghand] received: tx (225 bytes) peer=88
2024-04-15T16:49:13.074631Z [msghand] AcceptToMemoryPool: peer=88: accepted d8e1456f4e08618e8377bfac8d9f6592ec267e4c05ca9901afe4973e686eb27d (poolsz 120 txn, 141 kB)
2024-04-15T16:49:13.178769Z [msghand] sending inv (37 bytes) peer=85
2024-04-15T16:49:13.179005Z [msghand] sending inv (37 bytes) peer=89
2024-04-15T16:49:13.179116Z [msghand] sending inv (37 bytes) peer=90
2024-04-15T16:49:13.179209Z [msghand] sending inv (37 bytes) peer=92
2024-04-15T16:49:13.179301Z [msghand] sending inv (37 bytes) peer=93
2024-04-15T16:49:13.179397Z [msghand] sending inv (37 bytes) peer=94
2024-04-15T16:49:22.026488Z [msghand] AcceptToMemoryPool: peer=88: accepted 2aa03f55a9ee1395b8b212c221a8d2b224cead60784f4754d0718028b8b8f9b9 (poolsz 121 txn, 142 kB)
2024-04-15T16:49:22.042455Z [msghand] received: inv (37 bytes) peer=84
2024-04-15T16:49:22.042689Z [msghand] sending inv (37 bytes) peer=84
2024-04-15T16:49:22.042898Z [msghand] sending inv (37 bytes) peer=85
2024-04-15T16:49:22.043211Z [msghand] sending inv (37 bytes) peer=89
2024-04-15T16:49:22.043432Z [msghand] sending inv (37 bytes) peer=90
2024-04-15T16:49:22.043558Z [msghand] sending inv (37 bytes) peer=92
2024-04-15T16:49:22.043706Z [msghand] sending inv (37 bytes) peer=94