
Re: [PATCH 2/2] xen/9pfs: yield when there isn't enough room on the ring


From: Christian Schoenebeck
Subject: Re: [PATCH 2/2] xen/9pfs: yield when there isn't enough room on the ring
Date: Wed, 20 May 2020 13:23:18 +0200

On Wednesday, 20 May 2020 03:47:12 CEST Stefano Stabellini wrote:
> From: Stefano Stabellini <address@hidden>
> 
> Instead of truncating replies, which is problematic, wait until the
> client reads more data and frees bytes on the reply ring.
> 
> Do that by calling qemu_coroutine_yield(). The corresponding
> qemu_coroutine_enter_if_inactive() is called from xen_9pfs_bh upon
> receiving the next notification from the client.
> 
> We need to be careful to avoid races in case xen_9pfs_bh and the
> coroutine are both active at the same time. In xen_9pfs_bh, wait until
> either the critical section is over (ring->co == NULL) or until the
> coroutine becomes inactive (qemu_coroutine_yield() was called) before
> continuing. Then, simply wake up the coroutine if it is inactive.
> 
> Signed-off-by: Stefano Stabellini <address@hidden>
> ---

In general this patch makes sense to me, and it is a much better and cleaner 
solution than what we discussed before. Just one detail (see my comment 
inline below) ...
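
Just to make sure I read the handshake right: condensed, the two sides of 
the protocol look like this (the code is lifted from your diff; the loop 
form and the comments are my interpretation, not something the commit 
message states):

    /* coroutine side, in xen_9pfs_init_in_iov_from_pdu(): */
    ring->co = qemu_coroutine_self();
    smp_wmb();                      /* publish ring->co before yielding */
    while (buf_size < size) {       /* not enough room on the in ring */
        qemu_coroutine_yield();     /* xen_9pfs_bh() resumes us on the
                                       next notification from the client */
        /* ... refresh sg/buf_size via xen_9pfs_in_sg() ... */
    }
    ring->co = NULL;
    smp_wmb();                      /* critical section over */

    /* bottom half side, in xen_9pfs_bh(): */
    while (ring->co != NULL && qemu_coroutine_entered(ring->co)) {
        cpu_relax();                /* coroutine still running; wait for
                                       it to yield or clear ring->co */
    }
    if (ring->co != NULL) {
        qemu_coroutine_enter_if_inactive(ring->co);
    }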

>  hw/9pfs/xen-9p-backend.c | 28 ++++++++++++++++++++++------
>  1 file changed, 22 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index fc197f6c8a..3939539028 100644
> --- a/hw/9pfs/xen-9p-backend.c
> +++ b/hw/9pfs/xen-9p-backend.c
> @@ -37,6 +37,7 @@ typedef struct Xen9pfsRing {
> 
>      struct iovec *sg;
>      QEMUBH *bh;
> +    Coroutine *co;
> 
>      /* local copies, so that we can read/write PDU data directly from
>       * the ring */
> @@ -198,16 +199,18 @@ static void xen_9pfs_init_in_iov_from_pdu(V9fsPDU *pdu,
>      g_free(ring->sg);
> 
>      ring->sg = g_new0(struct iovec, 2);
> -    xen_9pfs_in_sg(ring, ring->sg, &num, pdu->idx, size);
> +    ring->co = qemu_coroutine_self();
> +    smp_wmb();
> 
> +again:
> +    xen_9pfs_in_sg(ring, ring->sg, &num, pdu->idx, size);
>      buf_size = iov_size(ring->sg, num);
>      if (buf_size  < size) {
> -        xen_pv_printf(&xen_9pfs->xendev, 0, "Xen 9pfs request type %d"
> -                "needs %zu bytes, buffer has %zu\n", pdu->id, size,
> -                buf_size);
> -        xen_be_set_state(&xen_9pfs->xendev, XenbusStateClosing);
> -        xen_9pfs_disconnect(&xen_9pfs->xendev);
> +        qemu_coroutine_yield();
> +        goto again;
>      }
> +    ring->co = NULL;
> +    smp_wmb();
> 
>      *piov = ring->sg;
>      *pniov = num;
> @@ -292,6 +295,19 @@ static int xen_9pfs_receive(Xen9pfsRing *ring)
> 
>  static void xen_9pfs_bh(void *opaque)
>  {
>      Xen9pfsRing *ring = opaque;
> +    bool wait;
> +
> +again:
> +    wait = ring->co != NULL && qemu_coroutine_entered(ring->co);
> +    smp_rmb();
> +    if (wait) {
> +        cpu_relax();
> +        goto again;
> +    }
> +
> +    if (ring->co != NULL) {
> +        qemu_coroutine_enter_if_inactive(ring->co);

... correct me if I am wrong, but AFAIK qemu_coroutine_enter_if_inactive() 
will simply run the coroutine directly on the caller's thread; it will not 
dispatch the coroutine onto the thread that yielded it before.
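
As a tiny illustration of what I mean (my own sketch, not code from the 
patch): if the request coroutine had been running in an IOThread before it 
yielded, then after

    /* in xen_9pfs_bh(), i.e. on the thread running this bottom half: */
    qemu_coroutine_enter_if_inactive(ring->co);

the coroutine continues right after its qemu_coroutine_yield(), but now on 
the BH's thread; it is not handed back to the IOThread. If resuming on the 
original AioContext mattered here, something like aio_co_wake(ring->co) 
would reschedule it there instead.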

> +    }
>      xen_9pfs_receive(ring);
>  }

AFAICS you have not addressed the problem of msize >> Xen ring buffer size, 
in which case I would expect the Xen driver to loop forever. Am I missing 
something, or have you postponed addressing this?
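
If it is still open, one way out would be to fail such a request up front, 
before the first yield, because it can never fit even into a completely 
empty ring. An untested sketch, reusing the error path your patch removes 
(assuming XEN_FLEX_RING_SIZE(ring->ring_order) is the in-ring capacity, as 
used elsewhere in xen-9p-backend.c):

    /* a request larger than the whole ring can never be satisfied,
     * so disconnect instead of yielding forever */
    if (size > XEN_FLEX_RING_SIZE(ring->ring_order)) {
        xen_pv_printf(&xen_9pfs->xendev, 0, "Xen 9pfs request type %d "
                "needs %zu bytes, ring holds only %zu\n", pdu->id, size,
                (size_t)XEN_FLEX_RING_SIZE(ring->ring_order));
        xen_be_set_state(&xen_9pfs->xendev, XenbusStateClosing);
        xen_9pfs_disconnect(&xen_9pfs->xendev);
        return;
    }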

Best regards,
Christian Schoenebeck
