From: Kashyap Chamarthy
Subject: Re: [Qemu-devel] Using the one disk image file on 2 virtual machines at the same time
Date: Wed, 29 Jul 2015 13:34:05 +0200
User-agent: Mutt/1.5.23.1-rc1 (2014-03-12)

On Wed, Jul 29, 2015 at 09:46:53AM +0100, Stefan Hajnoczi wrote:
> On Wed, Jul 29, 2015 at 12:57:30AM +0900, Manjong Han wrote:
> > I ran into some weird behavior when I used the same disk image file
> > on 2 virtual machines at the same time.
> > 
> > I created a virtual machine instance using the command below.
> > $ qemu-system-x86_64 -smp 2 -m 1024 -hda 10G.qcow2 -enable-kvm
> > 
> > When the OS (Ubuntu 14.04 64-bit) had booted, I created another one
> > using the same command.
> > $ qemu-system-x86_64 -smp 2 -m 1024 -hda 10G.qcow2 -enable-kvm
> > 
> > Then I had 2 virtual machines using the same disk image file.
> 
> This configuration is invalid.  It's similar to using the same physical
> disk or iSCSI LUN from two machines at the same time.
> 
> Standard file systems (ext4, xfs) and volume managers (LVM) are not
> cluster-aware by default.  They must only be accessed from one machine
> at a time.  Otherwise you risk data corruption.
> 
> You should probably use qcow2 backing files instead:
> 
>   10G.qcow2 <-- vm001.qcow2
>             ^-- vm002.qcow2
> 
> The command to create these files is:
> 
>   qemu-img create -f qcow2 -o backing_file=10G.qcow2 vm001.qcow2
> 
> Both VMs share the data in 10G.qcow2.  All writes go to vm001.qcow2 or
> vm002.qcow2, respectively, so they don't corrupt each other.
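
To make that concrete, the full workflow might look like this. This is
only a sketch: the -hda form simply mirrors the original command line,
and newer qemu-img releases may additionally require the backing format
(e.g. -o backing_fmt=qcow2):

  $ qemu-img create -f qcow2 -o backing_file=10G.qcow2 vm001.qcow2
  $ qemu-img create -f qcow2 -o backing_file=10G.qcow2 vm002.qcow2

  # boot each guest from its own overlay; 10G.qcow2 is only ever read
  $ qemu-system-x86_64 -smp 2 -m 1024 -hda vm001.qcow2 -enable-kvm
  $ qemu-system-x86_64 -smp 2 -m 1024 -hda vm002.qcow2 -enable-kvm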

As an addendum: when you use a management layer like libvirt, it
provides a convenient daemon called 'virtlockd'[1] (which uses the
POSIX fcntl(2) locking mechanism) that guards against invalid
configurations like the one above.

From my notes (from a FOSDEM 2014 talk by Dan Berrangé), virtlockd
operates thus:

    - The QEMU driver inside the libvirt daemon talks to the virtlockd
      daemon using an RPC mechanism. So, whenever you start a guest,
      the first thing the QEMU driver does is ask virtlockd to acquire
      locks for all of the guest's disk images -- only if this succeeds
      will the QEMU process be started.

    - These locks are also released and reacquired whenever you pause
      the virtual machine -- which is the key to making migration work.
      (A minimal host-side setup sketch follows below.)
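
For reference, wiring this up on a libvirt host looks roughly like the
following sketch. The config path and service names assume a typical
systemd-based distro, and 'vm002' is just a hypothetical domain name;
see [1] for the authoritative steps.

  # /etc/libvirt/qemu.conf -- have the QEMU driver register disk images
  # with virtlockd before starting a guest
  lock_manager = "lockd"

  # pick up the new setting (service names assume a systemd host)
  $ systemctl restart virtlockd libvirtd

  # starting a second guest whose disk image is already locked by a
  # running guest now fails up front instead of corrupting the image
  $ virsh start vm002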


[1] https://libvirt.org/locking-lockd.html

-- 
/kashyap


