
Re: [Qemu-devel] [RFC PATCH v2 6/6] linux-aio: Queue requests instead of returning -EAGAIN


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH v2 6/6] linux-aio: Queue requests instead of returning -EAGAIN
Date: Fri, 05 Dec 2014 19:15:38 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.2.0


On 05/12/2014 17:06, Kevin Wolf wrote:
> If the queue array for io_submit() is already full, but a new request
> arrives, we cannot add it to that queue anymore. We can, however, use a
> CoQueue, which is implemented as a list and can therefore queue as many
> requests as we want.
> 
> Signed-off-by: Kevin Wolf <address@hidden>
> ---
>  block/linux-aio.c | 32 +++++++++++++++++++++++++++-----
>  1 file changed, 27 insertions(+), 5 deletions(-)
> 
> diff --git a/block/linux-aio.c b/block/linux-aio.c
> index 373ec4b..8e6328b 100644
> --- a/block/linux-aio.c
> +++ b/block/linux-aio.c
> @@ -44,6 +44,7 @@ typedef struct {
>      int plugged;
>      unsigned int size;
>      unsigned int idx;
> +    CoQueue waiting;
>  } LaioQueue;
>  
>  struct qemu_laio_state {
> @@ -160,6 +161,8 @@ static void ioq_init(LaioQueue *io_q)
>      io_q->size = MAX_QUEUED_IO;
>      io_q->idx = 0;
>      io_q->plugged = 0;
> +
> +    qemu_co_queue_init(&io_q->waiting);
>  }
>  
>  static int ioq_submit(struct qemu_laio_state *s)
> @@ -201,15 +204,30 @@ static int ioq_submit(struct qemu_laio_state *s)
>                  s->io_q.idx * sizeof(s->io_q.iocbs[0]));
>      }
>  
> +    /* Now there should be room for some more requests */
> +    if (!qemu_co_queue_empty(&s->io_q.waiting)) {
> +        if (qemu_in_coroutine()) {
> +            qemu_co_queue_next(&s->io_q.waiting);
> +        } else {
> +            qemu_co_enter_next(&s->io_q.waiting);

We should get better performance by wrapping these with
plug/unplug.  That's trivial for the qemu_co_enter_next case, much less
so for qemu_co_queue_next...
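
Concretely, the qemu_co_enter_next side could look something like the
sketch below (ioq_drain_waiting is a made-up name; it only assumes the
io_q fields from this series, so treat it as an illustration rather
than a drop-in patch):

static void ioq_drain_waiting(struct qemu_laio_state *s)
{
    /* Called outside coroutine context.  Plug first, so each woken
     * request only fills an iocb slot instead of issuing its own
     * io_submit(). */
    s->io_q.plugged++;

    while (!qemu_co_queue_empty(&s->io_q.waiting) &&
           s->io_q.idx < s->io_q.size) {
        /* Resume one waiter; it re-checks for a free slot in
         * ioq_enqueue() and queues its iocb. */
        qemu_co_enter_next(&s->io_q.waiting);
    }

    /* Unplug: a single io_submit() covers everything queued above. */
    if (--s->io_q.plugged == 0) {
        ioq_submit(s);
    }
}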

This exposes what I think is the main wrinkle in these patches: I'm not
sure linux-aio is a great match for the coroutine architecture.  You
introduce some infrastructure duplication with block.c to track
coroutines, and I don't find the coroutine code to be an improvement
over Ming Lei's asynchronous one; in fact, I find it more complicated.

Paolo

> +        }
> +    }
> +
>      return ret;
>  }
>  
>  static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>  {
>      unsigned int idx = s->io_q.idx;
> +    bool was_waiting = false;
> +    int ret = 0;
>  
> -    if (unlikely(idx == s->io_q.size)) {
> -        return -EAGAIN;
> +    while (unlikely(idx == s->io_q.size)) {
> +        /* Wait for iocb slots to become free */
> +        qemu_co_queue_wait(&s->io_q.waiting);
> +        was_waiting = true;
> +        idx = s->io_q.idx;    /* reload: slots were freed while we waited */
>      }
>  
>      s->io_q.iocbs[idx++] = iocb;
> @@ -217,10 +235,14 @@ static int ioq_enqueue(struct qemu_laio_state *s, struct iocb *iocb)
>  
>      /* submit immediately if queue is not plugged or full */
>      if (!s->io_q.plugged || idx == s->io_q.size) {
> -        return ioq_submit(s);
> -    } else {
> -        return 0;
> +        ret = ioq_submit(s);
> +    } else if (was_waiting) {
> +        /* We were just woken up and there's still room in the queue. Wake up
> +         * someone else, too. */
> +        qemu_co_queue_next(&s->io_q.waiting);
>      }
> +
> +    return ret;
>  }
>  
>  void laio_io_plug(BlockDriverState *bs, void *aio_ctx)
> 
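
For reference, the CoQueue pattern the commit message relies on boils
down to the following stripped-down sketch.  It assumes the
single-argument qemu_co_queue_wait() this series uses; MAX_QUEUED_IO
and the free-slot counter stand in for the real io_q bookkeeping:

#include "block/coroutine.h"   /* coroutine API header as of 2014 */

#define MAX_QUEUED_IO 128      /* same limit as block/linux-aio.c */

static CoQueue waiting;        /* set up with qemu_co_queue_init() */
static unsigned int free_slots = MAX_QUEUED_IO;

static void coroutine_fn take_slot(void)
{
    while (free_slots == 0) {
        /* The CoQueue is a linked list, so any number of coroutines
         * can block here; re-check the condition on wakeup because
         * another coroutine may have grabbed the slot first. */
        qemu_co_queue_wait(&waiting);
    }
    free_slots--;
}

static void coroutine_fn release_slot(void)
{
    free_slots++;
    /* Wake one waiter; from outside coroutine context this would be
     * qemu_co_enter_next() instead, as in the patch. */
    qemu_co_queue_next(&waiting);
}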


