From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH 09/16] qcow2: Move COW and L2 update into own coroutine
Date: Tue, 18 Sep 2012 16:59:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120828 Thunderbird/15.0
On 18/09/2012 16:44, Kevin Wolf wrote:
>>> + qemu_co_mutex_unlock(&s->lock);
>>> + qemu_co_rwlock_rdlock(&s->l2meta_flush);
>>
>> Should this lock be taken in process_l2meta? It's a bit easier to follow.
> I'm pretty sure there was a reason, but it isn't obvious any more. I
> guess I should have put a comment there... Maybe it doesn't exist any
> more, or maybe it's not that obvious.
>
> The difference would be that while waiting for the lock, the original
> write request could complete instead of waiting as well, and that the
> lock is potentially taken only in a BH instead of immediately.
>
> What happens if bdrv_aio_flush() and bdrv_aio_writev() are both in
> flight? If the flush runs its stop_l2meta() after the write request has
> signalled completion, but before the COW coroutine has started, it gets
> the lock even though a COW must still be processed. I believe we could
> then return a successful flush when the metadata isn't really on disk yet.
Then you need two comments, because you also need to document that
process_l2meta expects the read lock to be taken.
> So if you agree, I think we need to leave it where it is.
We do, but there may be other alternatives:
1) Perhaps you can take the rdlock again in process_l2meta, and also add
an unlock here just after qemu_coroutine_enter?
2) Perhaps you can replace the rwlock with an explicit CoQueue, for
which an rwlock is just a wrapper. In thread terms, that would be a
condition variable, which I find easier to reason about than an rwlock. In
this case, stop_l2meta would see that there is a pending write, and
yield. The bottom half can then pick up the work.
Do you actually need the writer side of the lock, since you're already
taking s->lock immediately after calling stop_l2meta? I.e., can
qcow2_{zero,discard}_clusters release s->lock, or not? If not,
you're just using the write side to kick all readers, and that would be
a point in favor of (2).
Paolo