From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH 0/3] v4 Decouple block device removal from device removal
Date: Sat, 06 Nov 2010 09:18:43 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Ryan Harper <address@hidden> writes:

> * Markus Armbruster <address@hidden> [2010-11-05 11:11]:
>> Ryan Harper <address@hidden> writes:
>> 
>> > * Markus Armbruster <address@hidden> [2010-11-05 08:28]:
>> >> I'd be fine with any of these:
>> >> 
>> >> 1. A new command "device_disconnect ID" (or similar name) to disconnect
>> >>    device ID from any host parts.  Nice touch: you don't have to know
>> >>    about the device's host part(s) to disconnect it.  But it might be
>> >>    more work than the other two.
>> >
>> > This is sort of what netdev_del() and drive_unplug() are today; we're
>> > just saying: sever the connection for this device ID.
>> 
>> No, I have netdev_del as (3).
>> 
>> All three options are "sort of" the same, just different commands with
>> a common purpose.
>> 
>> > I'd like to rename drive_unplug() to blockdev_del() and call it done.  I
>> > was looking at libvirt and the right call to netdev_del is already
>> > in place; I'd just need to re-spin my block patch to call blockdev_del()
>> > after invoking device_del() to match what is done for net.
>> 
>> Unless I'm missing something, you can't just rename: your unplug does
>> not delete the host part.
>> 
>> >> 2. New commands netdev_disconnect, drive_disconnect (or similar names)
>> >>    to disconnect a host part from a guest device.  Like (1), except you
>> >>    have to point to the other end of the connection to cut it.
>> >
>> > What's the advantage here? We need an additional piece of info (host
>> > part) in addition to the device id?
>> 
>> That's a disadvantage.
>> 
>> Possible advantage: implementation could be slightly easier than (1),
>> because you don't have to find the host parts.
>> 
>> >> 3. A new command "drive_del ID" similar to existing netdev_del.  This is
>> >>    (2) fused with delete.  Conceptual wart: you can't disconnect and
>> >>    keep the host part around.  Moreover, delete is slightly dangerous,
>> >>    because it renders any guest device still using the host part
>> >>    useless.
>> >
>> > Hrm, I thought that's what (1) is.
>> 
>> No.
>> 
>> With (1), the argument is a *device* ID, and we disconnect *all* host
>> parts connected to this device (typically just one).
>> 
>> With (3), the argument is a netdev/drive ID, and we disconnect *this*
>> host part from its peer device.
>> 
>> >                                     Well, either (1) or (3); I'd like to
>> > rename drive_unplug() to blockdev_del() since they serve a similar
>> > function w.r.t. removing access to the host resource.  And we can
>> > invoke them in the same way from libvirt (after guest notification,
>> > remove access).
>> 
>> I'd call it drive_del for now, to match drive_add.
>
> OK, drive_del() it is.  As you mentioned, drive_unplug() takes out the
> block driver but doesn't remove the dinfo object; that only goes away
> when we call the device destructor.  I think for symmetry we'll want
> drive_del to remove the dinfo object as well.

Exactly.

a. bdrv_detach() to zap the pointer from bdrv to qdev
b. zap the pointer from qdev to bdrv
c. drive_uninit() to dispose of the host part
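
In code, the three steps would look roughly like this (just a sketch, not
tested; it assumes the 2010-era interfaces mentioned in this thread, i.e.
bdrv_detach(), bs->peer and drive_uninit(), and the helper names
drive_del_host_part() and qdev_zap_drive_refs() are made up):

    static void drive_del_host_part(DriveInfo *dinfo)
    {
        BlockDriverState *bs = dinfo->bdrv;
        DeviceState *qdev = bs->peer;

        if (qdev) {
            /* a. zap the pointer from bdrv to qdev */
            bdrv_detach(bs, qdev);

            /* b. zap the pointer from qdev to bdrv; with (3) this needs
             * the property walk sketched further below, e.g.
             * qdev_zap_drive_refs(qdev, bs); */
        }

        /* c. dispose of the host part (frees dinfo and deletes bs) */
        drive_uninit(dinfo);
    }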

Step b could be awkward with (3), because you don't know device details.
I guess you have to search device properties for a drive property
pointing to bdrv.  I like (1) because it puts that loop in the one place
where it belongs: qdev core.  (3) duplicates it in every HOSTDEV_del.
Except for netdev_del, which is special because of VLANs.
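
The property walk for step b could look something like this (again just a
sketch; it assumes the current qdev property tables, i.e. dev->info->props,
PROP_TYPE_DRIVE and qdev_get_prop_ptr(), and that drive properties store a
BlockDriverState pointer; qdev_zap_drive_refs() is a made-up name):

    static void qdev_zap_drive_refs(DeviceState *dev, BlockDriverState *bs)
    {
        BlockDriverState **ptr;
        Property *prop;

        for (prop = dev->info->props; prop && prop->name; prop++) {
            if (prop->info->type != PROP_TYPE_DRIVE) {
                continue;
            }
            ptr = qdev_get_prop_ptr(dev, prop);
            if (*ptr == bs) {
                *ptr = NULL;    /* zap the qdev -> bdrv pointer */
            }
        }
    }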

To avoid step b, you could try to keep the bdrv around in a special
zombie state.  Still have to free the dinfo, but can't use
drive_uninit() for that then.
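
That alternative would be shaped roughly like this (a sketch with a made-up
helper name; the point is only that drive_uninit() can't be reused, because
it would delete the zombie bs along with the dinfo):

    static void drive_zombify(DriveInfo *dinfo)
    {
        BlockDriverState *bs = dinfo->bdrv;

        /* Quiesce and close the host part, but keep bs allocated so a
         * qdev drive property still pointing at it doesn't dangle
         * (i.e. step b is skipped). */
        bdrv_flush(bs);
        bdrv_close(bs);

        /* Still need to free the DriveInfo bookkeeping here, but not
         * via drive_uninit(), since that would also delete bs. */
    }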

If you think I'm overcomplicating this, feel free to prove me wrong with
working code :)


