Re: [Qemu-devel] [PATCH 2/2] v2 Fix Block Hotplug race with drive_unplug


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 2/2] v2 Fix Block Hotplug race with drive_unplug()
Date: Fri, 22 Oct 2010 09:10:16 +0100
User-agent: Mutt/1.4.1i

On Thu, Oct 21, 2010 at 04:37:46PM -0500, Ryan Harper wrote:
> * Daniel P. Berrange <address@hidden> [2010-10-21 08:29]:
> > On Tue, Oct 19, 2010 at 09:32:29AM -0500, Ryan Harper wrote:
> > > Block hot unplug is racy since the guest is required to
> > > acknowledge the ACPI unplug event; this may not happen
> > > synchronously with the device removal command.
> > > 
> > > This series aims to close a gap whereby management applications
> > > that assume the block resource has been removed, without
> > > confirming that the guest has acknowledged the removal, may
> > > re-assign the underlying device to a second guest, leading to
> > > data leakage.
> > > 
> > > This series introduces a new monitor command to decouple
> > > asynchronous device removal from restricting guest access to a
> > > block device.  We do this by creating a new monitor command
> > > drive_unplug which maps to a bdrv_unplug() command which does a
> > > qemu_aio_flush(); bdrv_flush() and bdrv_close().  Once complete,
> > > subsequent IO to the device is rejected and the guest will get
> > > IO errors but continue to function.
> > > 
> > > A subsequent device removal command can be issued to remove the
> > > device, to which the guest may or may not respond, but as long
> > > as the unplugged bit is set, no IO will be submitted.
> > 
> > The name 'drive_unplug' suggests to me that the drive object is
> > not being deleted/free()d? Is that a correct understanding, and if
> > so, what is responsible for finally free()ing the drive backend?
> 
> It's technically the BlockDriverState's driver that we're closing.  To
> fully release the remaining resources, a device_del is required (which
> of course requires guest participation with the current interface).
> 
> Once QEMU issues the removal request, the guest responds and the piix4
> acpi handler, pciej_write(), invokes qdev_free() on the target device.
> qdev_free() on the PCI device will make its way to the qdev exit
> handler registered for virtio-blk devices, virtio_blk_exit_pci().
> virtio_blk_exit_pci() marks the drive structure for deletion.  When
> qdev calls the properties handler, it invokes free_drive() on the disk,
> and that calls blockdev_auto_del(), which does a bdrv_delete(), which
> nukes the remaining objects (the actual BlockDriverState).
> 
> I think I got the whole path in there.
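
To make the quoted flush/close sequence concrete, here is a minimal
sketch of what a drive_unplug-style monitor handler could look like.
Only the qemu_aio_flush(); bdrv_flush(); bdrv_close() order comes from
the thread; the handler name, the bdrv_find() lookup, the headers and
the error handling are illustrative assumptions, not the actual patch.

#include "monitor.h"   /* QEMU-internal headers; layout assumed from the 2010 tree */
#include "block.h"
#include "qdict.h"

/* Illustrative sketch only: drain in-flight AIO, flush, then close the
 * BlockDriverState so any further guest I/O is rejected while the
 * device itself stays present until device_del. */
static int do_drive_unplug(Monitor *mon, const QDict *qdict)
{
    const char *id = qdict_get_str(qdict, "id");
    BlockDriverState *bs = bdrv_find(id);

    if (!bs) {
        monitor_printf(mon, "Device '%s' not found\n", id);
        return -1;
    }

    qemu_aio_flush();   /* wait for pending AIO to complete */
    bdrv_flush(bs);     /* write out any cached data */
    bdrv_close(bs);     /* subsequent I/O now fails; guest keeps running */

    return 0;
}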

Ok, thanks, that makes sense to me.
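
Laid out as a rough call chain, the teardown path described above looks
like the following (a sketch assembled from the explanation, not the
literal QEMU source):

/* Guest acknowledges the ACPI eject request:
 *
 *   pciej_write()                      piix4 ACPI eject handler
 *     -> qdev_free(dev)                tear down the qdev device
 *        -> virtio_blk_exit_pci()      exit handler; marks the drive
 *                                      structure for deletion
 *        -> free_drive()               qdev drive-property teardown
 *           -> blockdev_auto_del(bs)   deletes the auto-added drive
 *              -> bdrv_delete(bs)      frees the BlockDriverState
 */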

Sounds like we do still need a separate drive_del in the future to
handle the different case of a drive_add followed by a device_add
attempt which fails.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London    -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org        -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|


