Revisit the code that controls the number of txns to broadcast at a time, since it depends on -excessiveblocksize, which may or may not be an overcomplication
The number of tx invs to send out at a time is currently determined by the following code in net_processing.cpp, in the PeerLogicValidation::SendMessages function:
const uint64_t invBroadcastInterval = config.GetInvBroadcastInterval();
// effective max number of transactions per inventory is
// rawRatePerSecond * blockSizeInMB * broadcastIntervalInSeconds (rounded up)
const uint64_t nMaxBroadcasts = std::ceil(
    config.GetInvBroadcastRate()
    * (config.GetExcessiveBlockSize() / 1000000.0)
    * (std::max<uint64_t>(invBroadcastInterval, 1) / 1000.0));
This logic may be overly complex, and it depends on the maximum block size. Under a future adaptive blocksize adjustment algorithm, it would instead depend on the current tip's block size limit, so this function would have to query that limit on every call (possibly needing to take a lock to do so). That could make the code slower, worse, or less maintainable.
It might be worthwhile to revisit this code and see whether it can be simplified so that it no longer depends on the blocksize limit.