Re: [Qemu-devel] virtio-net: configurable TX queue size
From: Jason Wang
Subject: Re: [Qemu-devel] virtio-net: configurable TX queue size
Date: Fri, 5 May 2017 10:27:13 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0
On 05/04/2017 18:58, Wang, Wei W wrote:
> Hi,
>
> I'd like to re-open a discussion from some time ago:
> https://lists.gnu.org/archive/html/qemu-devel/2015-11/msg06194.html
> and discuss the possibility of changing the hardcoded TX queue size
> (256) to be configurable between 256 and 1024.
Yes, I think we probably need this.
> The reason for this request is that a severe packet-drop issue in the
> TX direction was observed with the existing hardcoded queue size of
> 256, which causes performance problems for drop-sensitive guest
> applications that cannot use indirect descriptor tables. The issue
> goes away with a 1K queue size.
Do we need even more? What if we find 1K is not sufficient in the
future? Modern NICs have ring sizes up to ~8192.
> The concern mentioned in the previous discussion (please check the
> link above) is that the number of chained descriptors could exceed
> UIO_MAXIOV (1024), the limit supported by Linux.
We could try to address this limitation but probably need a new feature
bit to allow more than UIO_MAXIOV sgs.
> From the code, I think the number of chained descriptors is limited
> to MAX_SKB_FRAGS + 2 (~18), which is much less than UIO_MAXIOV.
This is the limit on the number of page frags per skb, not the iov
limitation.
Thanks
> Please point out if I missed anything. Thanks.
>
> Best,
> Wei