From: Kevin Wolf
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v2 00/20] Drain fixes and cleanups, part 3
Date: Fri, 15 Jun 2018 18:08:13 +0200
User-agent: Mutt/1.9.1 (2017-09-22)

Am 11.06.2018 um 14:23 hat Kevin Wolf geschrieben:
> ping?
> 
> Am 29.05.2018 um 19:21 hat Kevin Wolf geschrieben:
> > This is the third and hopefully for now last part of my work to fix
> > drain. The main goal of this series is to make drain robust against
> > graph changes that happen in any callbacks of in-flight requests while
> > we drain a block node.
> > 
> > The individual patches describe the details, but the rough plan is to
> > change all three drain types (single node, subtree and all) to work like
> > this:
> > 
> > 1. First call all the necessary callbacks to quiesce external sources
> >    for new requests. This includes the block driver callbacks, the child
> >    node callbacks and disabling external AioContext events. This is done
> >    recursively.
> > 
> >    Much of the trouble we had with drain resulted from the graph
> >    changing while we were traversing it recursively. None of the
> >    callbacks called in this phase may change the graph.
> > 
> > 2. Then do a single AIO_WAIT_WHILE() to drain the requests of all
> >    affected nodes. The aio_poll() called by it is where graph changes
> >    can happen and we need to be careful.
> > 
> >    However, while evaluating the loop condition, the graph can't change,
> >    so we can safely call all necessary callbacks, if needed recursively,
> >    to determine whether there are still pending requests in any affected
> >    nodes. We just need to make sure that we don't rely on the set of
> >    nodes being the same between any two evaluations of the condition.
> > 
> > There are a few more small, mostly self-contained changes needed
> > before we're actually safe, but this is the main mechanism that will
> > help you understand what we're working towards during the series.
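
For anyone following along, here is a rough, self-contained toy sketch in C of
the two-phase shape described in the quoted plan. It is not code from this
series, and none of the names (Node, quiesce_recursive, drain_poll_recursive,
complete_one_request, drained_begin) exist in QEMU; they just stand in for the
real bdrv_*() callbacks and the AIO_WAIT_WHILE() loop. Phase 1 walks the graph
once to quiesce it (nothing in that walk may change the graph); phase 2
re-evaluates a recursive "still busy?" predicate on every loop iteration, so it
does not depend on the set of nodes staying the same between iterations.

/* toy_drain.c -- illustrative only, not QEMU code */
#include <stdbool.h>
#include <stdio.h>

#define MAX_CHILDREN 4

typedef struct Node Node;
struct Node {
    const char *name;
    int in_flight;                 /* pending requests on this node         */
    bool quiesced;                 /* new external requests are held back
                                    * (the toy never submits new requests,
                                    * so the flag is only illustrative)     */
    Node *children[MAX_CHILDREN];  /* current graph edges                   */
};

/* Phase 1: recursively tell every node to stop accepting new external
 * requests.  Nothing called here is allowed to change the graph, so the
 * recursion sees a stable set of nodes. */
static void quiesce_recursive(Node *n)
{
    n->quiesced = true;
    for (int i = 0; i < MAX_CHILDREN; i++) {
        if (n->children[i]) {
            quiesce_recursive(n->children[i]);
        }
    }
}

/* Loop condition for phase 2: re-walk the graph *as it looks right now*
 * and report whether any affected node still has requests in flight.
 * Because this is re-evaluated from scratch on every iteration, it does
 * not matter if the set of nodes changed since the previous check. */
static bool drain_poll_recursive(Node *n)
{
    if (n->in_flight > 0) {
        return true;
    }
    for (int i = 0; i < MAX_CHILDREN; i++) {
        if (n->children[i] && drain_poll_recursive(n->children[i])) {
            return true;
        }
    }
    return false;
}

/* Stand-in for one polling iteration: completes one request somewhere in
 * the subtree.  In the real mechanism this is where completion callbacks
 * run and where the graph may be rewired; the toy only decrements a
 * counter. */
static bool complete_one_request(Node *n)
{
    if (n->in_flight > 0) {
        n->in_flight--;
        printf("completed a request on %s\n", n->name);
        return true;
    }
    for (int i = 0; i < MAX_CHILDREN; i++) {
        if (n->children[i] && complete_one_request(n->children[i])) {
            return true;
        }
    }
    return false;
}

static void drained_begin(Node *root)
{
    quiesce_recursive(root);                 /* phase 1: single recursive walk */
    while (drain_poll_recursive(root)) {     /* phase 2: single wait loop      */
        complete_one_request(root);          /* stands in for aio_poll()       */
    }
}

int main(void)
{
    Node leaf = { .name = "leaf", .in_flight = 2 };
    Node root = { .name = "root", .in_flight = 1, .children = { &leaf } };

    drained_begin(&root);
    printf("drained: no requests pending anywhere in the subtree\n");
    return 0;
}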

Without objection, applied to the block branch.

Kevin


