Re: [Qemu-devel] [PATCH 1/5] block: added lock image option and callback


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH 1/5] block: added lock image option and callback
Date: Wed, 13 Jan 2016 09:50:43 +0000
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, Jan 13, 2016 at 12:12:10PM +0300, Denis V. Lunev wrote:
> On 01/13/2016 11:52 AM, Markus Armbruster wrote:
> >Kevin Wolf <address@hidden> writes:
> >
> >>Am 11.01.2016 um 18:58 hat Daniel P. Berrange geschrieben:
> >>>On Mon, Jan 11, 2016 at 06:31:06PM +0100, Kevin Wolf wrote:
> >>>>Am 23.12.2015 um 08:46 hat Denis V. Lunev geschrieben:
> >>>>>From: Olga Krishtal <address@hidden>
> >>>>>
> >>>>>While opening the image we want to be sure that we are the
> >>>>>only one working with the image, and if that is not true,
> >>>>>opening the image for writing should fail.
> >>>>>
> >>>>>There are 2 ways at the moment: no lock at all and lock the file
> >>>>>image.
> >>>>>
> >>>>>Signed-off-by: Olga Krishtal <address@hidden>
> >>>>>Signed-off-by: Denis V. Lunev <address@hidden>
> >>>>>CC: Kevin Wolf <address@hidden>
> >>>>>CC: Max Reitz <address@hidden>
> >>>>>CC: Eric Blake <address@hidden>
> >>>>>CC: Fam Zheng <address@hidden>
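
[The open-time locking the commit message describes can be sketched with an advisory file lock: take an exclusive lock when opening for writing, and fail if another process already holds it. This is a minimal illustration on a POSIX host, not QEMU's actual code; the helper name is hypothetical.]

```python
# Sketch of "lock the file image" on open: an exclusive, non-blocking
# advisory lock, so a second writer fails instead of corrupting the image.
# open_image_locked is a hypothetical helper, not a QEMU API.
import fcntl
import os

def open_image_locked(path, writable=True):
    """Open an image file; refuse to open for writing if it is locked."""
    flags = os.O_RDWR if writable else os.O_RDONLY
    fd = os.open(path, flags)
    if writable:
        try:
            # Non-blocking exclusive advisory lock on the whole file.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            os.close(fd)
            raise RuntimeError(f"image {path} is locked by another process")
    return fd
```

[A read-only open takes no lock, matching the "no lock at all" mode; only a second writer is refused.]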
> >>>>As long as locking is disabled by default, it's useless and won't
> >>>>prevent people from corrupting their images. These corruptions happen
> >>>>exactly because people don't know how to use qemu properly. You can't
> >>>>expect them to enable locking manually.
> >>>>
> >>>>Also, you probably need to consider bdrv_reopen() and live migration.
> >>>>I think live migration would be blocked if source and destination both
> >>>>see the lock; which is admittedly less likely than with the qcow2 patch
> >>>>(and generally a problem of this series), but with localhost migration
> >>>>and potentially with some NFS setups it can be the case.
> >>>Note that when libvirt does locking it will release locks when a VM
> >>>is paused, and acquire locks prior to resuming CPUs. This allows live
> >>>migration to work because you never have CPUs running on both src + dst
> >>>at the same time. This does mean that libvirt does not allow QEMU to
> >>>automatically re-start CPUs when migration completes, as it needs to
> >>>take some action to acquire locks before allowing the dst to start
> >>>running again.
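
[The handoff Daniel describes can be sketched as an ordering constraint: the source drops its disk lease only after its CPUs pause, and the destination acquires the lease before resuming CPUs, so the lock is never held by two running instances. ImageLock and the callbacks below are hypothetical stand-ins for virtlockd and libvirt's VM control, not real APIs.]

```python
# Hedged sketch of the migration lock handoff. A toy single-process lease
# stands in for a virtlockd-managed, cross-host lock.
import threading

class ImageLock:
    """Toy disk lease; virtlockd would enforce this across hosts."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("disk is leased to another VM instance")

    def release(self):
        self._lock.release()

def migrate(lock, pause_src_cpus, resume_dst_cpus):
    pause_src_cpus()    # source stops writing first...
    lock.release()      # ...then gives up the lease
    lock.acquire()      # destination takes the lease...
    resume_dst_cpus()   # ...and only then starts its CPUs
```

[This is also why libvirt cannot let QEMU auto-restart CPUs on the destination: the acquire step has to happen in between.]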
> >>This assumes that block devices can only be written to if CPUs are
> >>running. In the days of qemu 0.9, this was probably right, but with
> >>things like block jobs and built-in NBD servers, I wouldn't be as sure
> >>these days.
> >Sounds like QEMU and libvirt should cooperate more closely to get the
> >locking less wrong.
> >
> >QEMU should have more accurate knowledge on how it is using the image.
> >Libvirt may be able to provide better locks, with the help of its
> >virtlockd daemon.
> a daemon owning locks is a problem:
> - there are distributed cases
> - daemons restart from time to time

The virtlockd daemon copes with both of these cases just fine. There is
one daemon per virtualization host, and they can be configured to acquire
locks in a way that is enforced across all hosts. The reason we do it in
a separate virtlockd daemon instead of libvirtd is that we designed it to
be able to re-exec() itself while maintaining all locks, to allow for
seamless upgrade.
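
[The re-exec() trick rests on POSIX semantics: fcntl record locks live on open file descriptors, and descriptors with close-on-exec cleared survive execve(), so the freshly exec'd daemon inherits its leases intact. A hedged sketch; the lease path and the LEASE_FD variable are made up for illustration and are not virtlockd's real mechanism.]

```python
# Sketch of keeping a lock across self re-exec: the fd holding the POSIX
# record lock stays open through execve(), so the lock is never dropped.
# LEASE_PATH and LEASE_FD are hypothetical names.
import fcntl
import os
import sys

LEASE_PATH = "/tmp/demo-virtlockd.lease"  # hypothetical lease file

def take_lease(path=LEASE_PATH):
    """Acquire an exclusive POSIX record lock on the lease file."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    os.set_inheritable(fd, True)  # keep the fd (and its lock) across exec
    return fd

def reexec_keeping_leases(fd):
    """Replace this process with a fresh copy of itself, lock intact."""
    # Tell the new instance which fd carries the lease; the lock survives
    # because exec neither closes the fd nor ends the process.
    env = dict(os.environ, LEASE_FD=str(fd))
    os.execve(sys.executable, [sys.executable] + sys.argv, env)
```

[Contrast with a lock held inside libvirtd: restarting that daemon would close its descriptors and drop every lock, which is exactly the restart problem raised above.]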

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


