
Initial vhost-net support

David Woodhouse requested to merge vhost into master

As noted in #263 (comment 612070454), this gives me about a 10% improvement in bandwidth by offloading the tun I/O to vhost.

As discussed at https://lore.kernel.org/netdev/2433592d2b26deec33336dd3e83acfd273b0cf30.camel@infradead.org/T/#e1227497a0fe79bf4f9927d20072c14ff29220201, there is a slight cost to latency (although it ended up being even less than reported in that thread, after I shifted the eventfd_read() on the vhost_call_fd to happen after doing all the real work of moving packets, instead of beforehand).
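
For illustration, a minimal sketch of that ordering (not the actual OpenConnect code; the ring-processing helpers are hypothetical stand-ins): the call eventfd is drained only after the virtqueues have been serviced, so a kick that arrives mid-processing still wakes the next pass of the main loop.

```c
#include <sys/eventfd.h>

/* Hypothetical stand-ins for the real virtqueue handling. */
static void process_tx_ring(void) { /* reap completed TX descriptors */ }
static void process_rx_ring(void) { /* harvest received packets, refill ring */ }

static void handle_vhost_wakeup(int vhost_call_fd)
{
	eventfd_t evt;

	/* Do the real work of moving packets first ... */
	process_tx_ring();
	process_rx_ring();

	/* ... and only then acknowledge the notification. Any kick that
	 * arrives after this point wakes the main loop again. */
	if (eventfd_read(vhost_call_fd, &evt) < 0) {
		/* Nothing pending; harmless if the fd is non-blocking. */
	}
}
```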

There is already a tension between users who want high bandwidth and long queue lengths (like @zez3), and the majority of users who just have normal "home to corporate VPN" style bandwidth, want to use VoIP over the VPN, and care about latency.

So for now we just use the queue length to trigger the vhost-net optimisation: if it is 16 or more (the default is 10, which is perfectly sufficient to saturate a single Gigabit Ethernet link with iperf traffic), we enable vhost-net.
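
As a rough sketch (the names here are made up; only the numbers above are real), the decision boils down to:

```c
/* Only the threshold of 16 and the default queue length of 10 come
 * from this description; the constant and function names are
 * illustrative. */
#define VHOST_QLEN_THRESHOLD 16

static int want_vhost(int max_qlen)
{
	/* Anyone raising the queue length this far above the default of
	 * 10 is optimising for bandwidth, so the latency cost of
	 * vhost-net is acceptable for them. */
	return max_qlen >= VHOST_QLEN_THRESHOLD;
}
```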

To keep latency down, we only send using vhost-net if the TX queue is at least half full; below that threshold we just write directly to the tun device anyway.
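
Something along these lines (again a sketch, not the real code; all the types and helpers here are hypothetical, and only the "at least half full" threshold comes from the description above):

```c
struct pkt;			/* opaque packet type for this sketch */

struct vpn_ctx {
	int tx_queued;		/* packets currently queued for TX */
	int max_qlen;		/* configured queue length */
};

/* Stand-ins for the two transmit paths. */
static void tun_write(struct vpn_ctx *c, struct pkt *p) { (void)c; (void)p; /* plain write() to tun */ }
static void vhost_tx(struct vpn_ctx *c, struct pkt *p)  { (void)c; (void)p; /* place on vhost TX ring and kick */ }

static void transmit_packet(struct vpn_ctx *c, struct pkt *p)
{
	if (c->tx_queued < c->max_qlen / 2) {
		/* Lightly loaded: write straight to the tun device to
		 * keep per-packet latency low. */
		tun_write(c, p);
	} else {
		/* At least half full: throughput matters more, so hand
		 * the packet to the vhost-net TX ring instead. */
		vhost_tx(c, p);
	}
}
```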
