Re: [Qemu-devel] [PATCH 2/3] v2 Fix Block Hotplug race with drive_unplug


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 2/3] v2 Fix Block Hotplug race with drive_unplug()
Date: Fri, 29 Oct 2010 09:40:57 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Lightning/1.0b1 Thunderbird/3.0.10

On 10/29/2010 09:29 AM, Kevin Wolf wrote:
On 29.10.2010 16:15, Anthony Liguori wrote:
On 10/29/2010 09:01 AM, Markus Armbruster wrote:
Ryan Harper <address@hidden> writes:
diff --git a/block.c b/block.c
index a19374d..be47655 100644
--- a/block.c
+++ b/block.c
@@ -1328,6 +1328,13 @@ void bdrv_set_removable(BlockDriverState *bs, int removable)
     }
 }

+void bdrv_unplug(BlockDriverState *bs)
+{
+    qemu_aio_flush();
+    bdrv_flush(bs);
+    bdrv_close(bs);
+}

Stupid question: why doesn't bdrv_close() flush automatically?

I don't think it's a bad idea to do that, but to the extent that the
block API is designed after POSIX file I/O, close does not usually imply
flush.
I don't think it really resembles POSIX. More or less the only thing
they have in common is that both provide open, read, write and close,
which is something that probably any file access API provides.

The operation you're talking about here is bdrv_flush/fsync, which is not
implied by a POSIX close?

Yes. But I think for the purposes of this patch, a bdrv_cancel_all() would be just as good. The intention is to eliminate pending I/O requests; the fsync is just a side effect.

And why do we have to flush here, but not before other uses of
bdrv_close(), such as eject_device()?

Good question.  Kevin should also confirm, but looking at the code, I
think flush() is needed before close.  If there's a pending I/O event
and you close before the I/O event is completed, you'll get a callback
for completion against a bogus BlockDriverState.

I can't find anything in either raw-posix or the generic block layer
that would mitigate this.
I'm not aware of anything either. This is what qemu_aio_flush would do.

It seems reasonable to me to call both qemu_aio_flush and bdrv_flush in
bdrv_close. We probably don't really need to call bdrv_flush to operate
correctly, but it can't hurt and bdrv_close shouldn't happen that often
anyway.

I agree. Re: qemu_aio_flush, we have to wait for it to complete, which gets a little complicated in bdrv_close(). I think it would be better to make bdrv_flush() call bdrv_aio_flush() if an explicit bdrv_flush method isn't provided. Something like the attached (still need to test).

Does that seem reasonable?
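
A minimal sketch of that fallback idea, assuming the 2010-era prototypes of
bdrv_aio_flush(), qemu_aio_wait() and the BlockDriver hooks; this is not the
attached patch, just an illustration of emulating a synchronous flush on top
of the asynchronous one:

/*
 * Sketch only: submit the AIO flush and pump the AIO completion loop
 * until the callback fires.  Driver hook names and return types are
 * assumptions about the 2010-era block layer.
 */
#include "qemu-common.h"
#include "block_int.h"

typedef struct {
    int done;
    int ret;
} SyncFlushCB;

static void sync_flush_cb(void *opaque, int ret)
{
    SyncFlushCB *s = opaque;

    s->ret = ret;
    s->done = 1;
}

int bdrv_flush(BlockDriverState *bs)
{
    SyncFlushCB s = { .done = 0, .ret = 0 };
    BlockDriverAIOCB *acb;

    if (bs->drv && bs->drv->bdrv_flush) {
        /* Driver provides a synchronous flush hook: use it directly
         * (assumed here to return an error code). */
        return bs->drv->bdrv_flush(bs);
    }

    if (!bs->drv || !bs->drv->bdrv_aio_flush) {
        /* No flush support at all; nothing to do. */
        return 0;
    }

    /* Fall back to the asynchronous flush and wait for completion. */
    acb = bdrv_aio_flush(bs, sync_flush_cb, &s);
    if (!acb) {
        return -EIO;
    }
    while (!s.done) {
        qemu_aio_wait();
    }
    return s.ret;
}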

Regards,

Anthony Liguori

Kevin

Attachment: 0001-block-make-bdrv_flush-fall-back-to-bdrv_aio_flush.patch
Description: Text Data

Attachment: 0002-block-add-bdrv_flush-to-bdrv_close.patch
Description: Text Data
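
The attachment bodies are not inlined in the archive. As a rough illustration
of the second patch's idea (calling bdrv_flush() from bdrv_close(), as
discussed above), a minimal sketch assuming the 2010-era shape of bdrv_close();
everything beyond the added flush call is an assumption:

/*
 * Sketch only, not the attachment itself: flush pending writes before
 * the device state is torn down.  The rest of bdrv_close()'s body is
 * elided/assumed.
 */
void bdrv_close(BlockDriverState *bs)
{
    if (bs->drv) {
        bdrv_flush(bs);     /* flush pending writes before teardown */

        /* ... existing teardown: bs->drv->bdrv_close(bs), free driver
         * state, clear bs->drv, run the media-change callback, ... */
    }
}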

