From: Jason Wang
Subject: Re: [PATCH 1/2] virtio-balloon: process all in sgs for free_page_vq
Date: Fri, 26 Nov 2021 10:31:08 +0800

On Fri, Nov 26, 2021 at 9:21 AM Wang, Wei W <wei.w.wang@intel.com> wrote:
>
> On Friday, November 26, 2021 12:11 AM, David Hildenbrand wrote:
> > On 25.11.21 17:09, Michael S. Tsirkin wrote:
> > > On Thu, Nov 25, 2021 at 09:28:59AM +0100, David Hildenbrand wrote:
> > >> On 25.11.21 03:20, Jason Wang wrote:
> > >>> We only process the first in sg, which can leave the bitmap of the
> > >>> pages belonging to the following in sgs uncleared. This may result
> > >>> in more pages being migrated. Fix this by processing all in sgs for
> > >>> free_page_vq.
> > >>>
> > >>> Signed-off-by: Jason Wang <jasowang@redhat.com>
> > >>> ---
> > >>>  hw/virtio/virtio-balloon.c | 7 +++++--
> > >>>  1 file changed, 5 insertions(+), 2 deletions(-)
> > >>>
> > >>> diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
> > >>> index c6962fcbfe..17de2558cb 100644
> > >>> --- a/hw/virtio/virtio-balloon.c
> > >>> +++ b/hw/virtio/virtio-balloon.c
> > >>> @@ -510,6 +510,7 @@ static bool get_free_page_hints(VirtIOBalloon *dev)
> > >>>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > >>>      VirtQueue *vq = dev->free_page_vq;
> > >>>      bool ret = true;
> > >>> +    int i;
> > >>>
> > >>>      while (dev->block_iothread) {
> > >>>          qemu_cond_wait(&dev->free_page_cond, &dev->free_page_lock);
> > >>> @@ -544,8 +545,10 @@ static bool get_free_page_hints(VirtIOBalloon *dev)
> > >>>      }
> > >>>
> > >>>      if (elem->in_num && dev->free_page_hint_status == FREE_PAGE_HINT_S_START) {
> > >>> -        qemu_guest_free_page_hint(elem->in_sg[0].iov_base,
> > >>> -                                  elem->in_sg[0].iov_len);
> > >>> +        for (i = 0; i < elem->in_num; i++) {
> > >>> +            qemu_guest_free_page_hint(elem->in_sg[i].iov_base,
> > >>> +                                      elem->in_sg[i].iov_len);
> > >>> +        }
> > >>>      }
> > >>>
> > >>>  out:
> > >>>
> > >>
> > >> Yes, but:
> > >>
> > >> 1. Linux never used more than one
> > >> 2. QEMU never consumed more than one
>
> Yes, it works based on the fact that Linux only sends one hint each time.
>
> > >>
> > >> The spec states:
> > >>
> > >> "(b) The driver maps a series of pages and adds them to the
> > >> free_page_vq as individual scatter-gather input buffer entries."
> > >>
> > >> However, the spec was written by one person (Alex) and the code by
> > >> another (Wei), and the code was there first.
> > >>
> > >> I don't particularly care which one we adjust (code or spec). However,
> > >> it feels to me like the spec is slightly wrong, and that the original
> > >> code author intended it to work the way the code does.
> > >>
> > >> But then again, I don't particularly care :)
> > >
> > > The original QEMU-side code had several bugs, so that's another one.
> > > Given that nothing too bad happens if the guest submits too many S/Gs,
> > > and given that the spec also has a general chapter suggesting devices
> > > be flexible in accepting a single buffer split into multiple S/Gs, I'm
> > > inclined to accept the patch.

Yes, and it's probably too late to change the spec even if we wanted to.
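
For illustration, here is a minimal guest-side sketch of what that spec
wording would allow (hypothetical code against the Linux virtio API; the
function name and batch size are made up, and the real driver effectively
sticks to the npages == 1 case, one hint per buffer):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

/*
 * Hypothetical sketch only: submit a batch of free pages as individual
 * scatter-gather entries in a single device-writable buffer, as the
 * spec wording suggests. A device that reads only in_sg[0] would miss
 * every page after the first.
 */
static int submit_free_page_hints(struct virtqueue *free_page_vq,
                                  struct page **pages, unsigned int npages)
{
    struct scatterlist sgs[16];        /* assumed small batch size */
    unsigned int i;
    int err;

    if (npages == 0 || npages > ARRAY_SIZE(sgs))
        return -EINVAL;

    sg_init_table(sgs, npages);
    for (i = 0; i < npages; i++)
        sg_set_page(&sgs[i], pages[i], PAGE_SIZE, 0);

    /* One buffer, npages device-writable sg entries. */
    err = virtqueue_add_inbuf(free_page_vq, sgs, npages, pages, GFP_KERNEL);
    if (err)
        return err;

    virtqueue_kick(free_page_vq);
    return 0;
}

With a batch like this queued, the device-side loop in the patch above is
what guarantees every entry gets hinted, not just in_sg[0].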

> >
> > Yeah, as I said, I don't particularly care. It's certainly an "easy change".
> >
> > Acked-by: David Hildenbrand <david@redhat.com>
> >

Thanks

>
> I don't object to the change.
> Just in case of anything unexpected, it would be better if someone could
> help test it.

I've tested the code with migration before sending the patches, and the
hinting works fine.

Thanks

>
> Thanks,
> Wei
