    net: bql: add __netdev_tx_sent_queue() · 3e59020a
    Eric Dumazet authored
    When qdisc_run() tries to use the BQL budget to bulk-dequeue a batch
    of packets, GSO can later transform this list into another list
    of skbs, and each skb is sent to the device ndo_start_xmit(),
    one at a time, with skb->xmit_more set for all but the last skb.
    
    The problem is that, very often, the BQL limit is hit in the middle of
    the packet train, forcing dev_hard_start_xmit() to stop the
    bulk send and requeue the tail of the list.
    
    BQL's role is to avoid head-of-line blocking, making sure
    a qdisc can deliver high-priority packets before low-priority ones.
    
    But requeued packets can never be bypassed by fresh
    packets in the qdisc.
    
    Aborting the bulk send increases TX softirqs, and hot cache
    lines (after skb_segment()) are wasted.
    
    Note that for TSO packets, we never split a packet in the middle
    because the BQL limit is hit.
    
    Drivers should be able to update the BQL counters without
    checking or flipping the BQL queue status, if the current skb
    has xmit_more set.
    
    Upper layers are ultimately responsible for stopping the next
    packet train when the BQL limit is hit.
    
    A code template in a driver might look like the following:
    
    	send_doorbell = __netdev_tx_sent_queue(tx_queue, nr_bytes, skb->xmit_more);
    
    Note that __netdev_tx_sent_queue() use is not mandatory,
    since a following patch will change dev_hard_start_xmit()
    to not care about BQL status.
    
    But it is highly recommended, so that the full benefits of xmit_more
    can be reached (fewer doorbells sent, and fewer atomic operations as well).
    Signed-off-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
netdevice.h