
Re: [Qemu-devel] [QA-virtio]:Why vring size is limited to 1024?


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [QA-virtio]:Why vring size is limited to 1024?
Date: Wed, 8 Oct 2014 12:13:45 +0300

On Wed, Oct 08, 2014 at 04:07:47PM +0800, Zhangjie (HZ) wrote:
> MST, Thanks very much, I get it.
> 
> On 2014/10/8 15:37, Michael S. Tsirkin wrote:
> > On Wed, Oct 08, 2014 at 03:17:56PM +0800, Zhangjie (HZ) wrote:
> >> Thanks for your patient answer! :-)
> >>
> >> On 2014/9/30 17:33, Michael S. Tsirkin wrote:
> >>> On Tue, Sep 30, 2014 at 04:36:00PM +0800, Zhangjie (HZ) wrote:
> >>>> Hi,
> >>>> There exists packet loss when we do packet forwarding in a VM,
> >>>> especially when we use dpdk to do the forwarding. Enlarging the
> >>>> vring can alleviate the problem.
> >>>
> >>> I think this has to do with the fact that dpdk disables
> >>> checksum offloading, which has the side effect of disabling
> >>> segmentation offloading.
> >>>
> >>> Please fix dpdk to support checksum offloading, and
> >>> I think the problem will go away.
> >> In some application scenarios, loss of UDP packets is not allowed,
> >> and the UDP packets are always shorter than the MTU.
> >> So we need to support high-pps forwarding (e.g. 0.3M packets/s),
> >> and offloading cannot fix that.
> > 
> > That's the point. With UFO you get larger than MTU UDP packets:
> > http://www.linuxfoundation.org/collaborate/workgroups/networking/ufo
> But here the VM only does forwarding, and does not create new packets
> itself. As we cannot GRO normal UDP packets, UFO cannot work when UDP
> packets arrive from the host's NIC.

This is something I've been thinking about for a while now.
We really should add a GRO-like path for UDP; it wouldn't be
too different from what GRO already does for TCP.

LRO can often work with UDP too, but Linux discards too much
info on LRO; if you are doing drivers in userspace,
you might be able to support this.
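
To make that concrete, here is a minimal sketch of the flow-match test
such a GRO-like UDP path would need. This is not code from the kernel;
the struct and function names are made up for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical parsed-header view of a datagram; illustrative only. */
struct udp_flow_key {
    uint32_t saddr, daddr;   /* IPv4 source/destination address */
    uint16_t sport, dport;   /* UDP source/destination port */
};

/*
 * GRO-like merge test for UDP: unlike TCP there is no sequence number
 * to check, so two datagrams are merge candidates as soon as they
 * belong to the same 4-tuple.  A real path would also have to cap the
 * merged size (64K) and flush on a timer to bound the added latency.
 */
static bool udp_gro_can_merge(const struct udp_flow_key *a,
                              const struct udp_flow_key *b)
{
    return a->saddr == b->saddr && a->daddr == b->daddr &&
           a->sport == b->sport && a->dport == b->dport;
}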

> > 
> > Additionally, checksum offloading reduces CPU utilization
> > and reduces the number of data copies, allowing higher pps
> > with smaller buffers.
> > 
> > It might look like queue depth helps performance for netperf, but in
> > real-life workloads the latency under load will suffer; with more
> > protocols implementing tunnelling on top of UDP, such extreme
> > bufferbloat will not be tolerated.
> > 
> >>>
> >>>
> >>>> But now the vring size is limited to 1024, as follows:
> >>>>
> >>>> VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
> >>>>                             void (*handle_output)(VirtIODevice *, VirtQueue *))
> >>>> {
> >>>>     ...
> >>>>     if (i == VIRTIO_PCI_QUEUE_MAX || queue_size > VIRTQUEUE_MAX_SIZE)
> >>>>         abort();
> >>>> }
> >>>>
> >>>> ps: #define VIRTQUEUE_MAX_SIZE 1024
> >>>>
> >>>> I deleted the check and set the vring size to 2048; the VM started
> >>>> successfully and the network is OK too.
> >>>> So, why is the vring size limited to 1024, and what is the influence?
> >>>>
> >>>> Thanks!
> >>>
> >>> There are several reasons for this limit.
> >>> First, the guest has to allocate the descriptor buffer, which is
> >>> 16 * vring size bytes. With a 1K ring that is already 16K, which
> >>> might be tricky to allocate contiguously if memory is fragmented
> >>> when the device is added by hotplug.
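
For reference, a minimal standalone sketch of that arithmetic, mirroring
vring_size() from Linux's include/uapi/linux/virtio_ring.h (16-byte
descriptors, 2-byte avail-ring entries, 8-byte used-ring entries):

#include <stdio.h>
#include <stddef.h>

/*
 * Legacy vring footprint: the descriptor table alone is 16 * num bytes,
 * i.e. 16K for num = 1024 and 32K for num = 2048 -- a 4- or 8-page
 * physically contiguous allocation the guest must find at hotplug time.
 */
static size_t vring_bytes(unsigned int num, unsigned long align)
{
    size_t desc  = 16 * (size_t)num;         /* struct vring_desc[num] */
    size_t avail = 2 * (3 + (size_t)num);    /* flags, idx, ring[], used_event */
    size_t used  = 2 * 3 + 8 * (size_t)num;  /* flags, idx, ring[], avail_event */

    /* the used ring starts on the next 'align' boundary (4096 for legacy PCI) */
    return ((desc + avail + align - 1) & ~(align - 1)) + used;
}

int main(void)
{
    printf("num=1024: %zu bytes\n", vring_bytes(1024, 4096));
    printf("num=2048: %zu bytes\n", vring_bytes(2048, 4096));
    return 0;
}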
> >> That is very
> >>> The second issue is that we want to be able to implement
> >>> the device on top of the Linux kernel, and a single descriptor
> >>> chain might use all of the virtqueue. In that case we won't be
> >>> able to pass the descriptors directly to Linux as a single iov,
> >>> since that is limited to 1K entries.
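
That 1K limit is UIO_MAXIOV in the kernel (IOV_MAX in POSIX terms):
readv()/writev() refuse iovec arrays with more than 1024 entries. As a
hedged sketch, this is the check a backend would need if a single chain
could exceed it (write_chain is a made-up helper name):

#include <sys/types.h>
#include <sys/uio.h>    /* struct iovec, writev() */
#include <limits.h>     /* IOV_MAX on most systems */
#include <errno.h>

#ifndef IOV_MAX
#define IOV_MAX 1024    /* matches Linux's UIO_MAXIOV */
#endif

/*
 * A descriptor chain mapping to more than IOV_MAX segments cannot be
 * handed to the kernel in one writev(); keeping the ring at <= 1024
 * entries guarantees this case can never arise.
 */
static ssize_t write_chain(int fd, const struct iovec *iov, int iovcnt)
{
    if (iovcnt > IOV_MAX) {
        errno = EINVAL;  /* would have to split or linearize the chain */
        return -1;
    }
    return writev(fd, iov, iovcnt);
}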
> >> For the second issue, I wonder if it is OK to set the vring size
> >> of virtio-net to larger than 1024: for networking, an skb uses at
> >> most 18 pages, so it will not exceed the iov limit.
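
For what it's worth, that 18-page figure matches the MAX_SKB_FRAGS
arithmetic in Linux's skbuff.h (assuming 4K pages):

/* A 64K frame packed into page fragments, plus one extra page because
 * the buffer need not start page-aligned (MAX_SKB_FRAGS = 17), plus
 * the linear header area: at most ~18 segments per skb, far below the
 * 1K iov limit. */
#define SKB_WORST_CASE_SEGS (65536 / 4096 + 1 /* frags */ + 1 /* linear */)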
> 
> -- 
> Best Wishes!
> Zhang Jie


