Re: [Qemu-devel] qcow2 - safe on kill? safe on power fail?


From: Anthony Liguori
Subject: Re: [Qemu-devel] qcow2 - safe on kill? safe on power fail?
Date: Mon, 21 Jul 2008 17:14:50 -0500
User-agent: Thunderbird 2.0.0.14 (X11/20080501)

Jamie Lokier wrote:
If the sector hasn't been previously allocated, then a new sector in the file needs to be allocated. This is going to change metadata within the QCOW2 file and this is where it is possible to corrupt a disk image. The operation of allocating a new disk sector is completely synchronous so no other code runs until this completes. Once the disk sector is allocated, you're safe again[1].
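
To make that window concrete, here is a rough sketch in C of what an allocation boils down to (purely illustrative, not the real QCOW2 code; the offsets, sizes and values are made up for the example):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define REFCOUNT_TABLE_OFF 4096   /* hypothetical location of the refcount table */
#define L2_TABLE_OFF       8192   /* hypothetical location of the L2 mapping table */

int main(void)
{
    int fd = open("image.qcow2", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    uint16_t refcount = 1;        /* mark the new cluster as in use */
    uint64_t l2_entry = 0x40000;  /* host file offset of the new cluster */

    /* Allocating a cluster means at least two separate metadata writes. */
    pwrite(fd, &refcount, sizeof refcount, REFCOUNT_TABLE_OFF);

    /* A power failure between the two writes, or the disk reordering them
     * in its write cache, leaves the allocation map inconsistent on disk. */
    pwrite(fd, &l2_entry, sizeof l2_entry, L2_TABLE_OFF);

    close(fd);
    return 0;
}

The point above is that this sequence runs synchronously inside QEMU, so the remaining danger is a power failure (or the disk's own cache) hitting between the writes.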

My main concern is corruption of the QCOW2 sector allocation map, and
subsequently QEMU/KVM breaking or going wildly haywire with that file.

With a normal filesystem, sure, there are lots of ways to get
corruption when certain events happen.  But you don't lose the _whole_
filesystem.

Sure you can. If you don't have a battery-backed disk cache and are using write-back (which is usually the default), you can definitely get corruption of the journal. Likewise, under the right scenarios, you will get journal corruption with the default mount options of ext3 because it doesn't use barriers.

This is very hard to see happen in practice though because these windows are very small--just like with QEMU.

My concern is that if the QCOW2 sector allocation map is corrupted by
these events, you may lose the _whole_ virtual machine, which can be a
pretty big loss.

Is the format robust enough to prevent that from being a problem?

It could be extended to contain a journal. But that doesn't guarantee that you won't lose data because of your file system failing; that's the point I'm making.

(Backups help (but not good enough for things like a mail or database
server).  But how do you safely backup the image of a VM that is
running 24x7?  LVM snapshots are the only way I've thought of, and
they have a barrier problem, see below.)

you have a file system that supports barriers and barriers are enabled by default (they aren't enabled by default with ext2/3)

There was recent talk of enabling them by default for ext3.

It's not going to happen.

you are running QEMU with cache=off to disable host write caching.

Doesn't that use O_DIRECT?  O_DIRECT writes don't use barriers, and
fsync() does not deterministically issue a disk barrier if there's no
metadata change, so O_DIRECT writes are _less_ safe with disks which
have write-cache enabled than using normal writes.

It depends on the filesystem. ext3 never issues any barriers by default :-)

I would think a good filesystem would issue a barrier after an O_DIRECT write.
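
For reference, the cache=off path boils down to something like this minimal C sketch (the filename and sizes are just examples; whether the final fdatasync() actually reaches the drive's write cache is exactly the open question above):

#define _GNU_SOURCE           /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("disk.img", O_RDWR | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires sector-aligned buffers, lengths and offsets. */
    void *buf;
    if (posix_memalign(&buf, 512, 512)) return 1;
    memset(buf, 0, 512);

    if (pwrite(fd, buf, 512, 0) != 512) { perror("pwrite"); return 1; }

    /* O_DIRECT bypasses the page cache but not the drive's write cache;
     * fdatasync() asks for a flush, but it is not a barrier as such. */
    if (fdatasync(fd) < 0) { perror("fdatasync"); return 1; }

    free(buf);
    close(fd);
    return 0;
}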

What about using a partition, such as an LVM volume (so it can be
snapshotted without having to take down the VM)?  I'm under the
impression there is no way to issue disk barrier flushes to a
partition, so that's screwed too.  (Besides, LVM doesn't propagate
barrier requests from filesystems either...)

Unfortunately there is no userspace API to inject barriers into a disk. fdatasync() maybe, but that's not the same behavior as a barrier. I don't think IDE supports barriers at all, FWIW. It only has write-back and write-through modes, so if you care about data, you would have to enable write-through in your guest.
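
As a concrete example, the best you can do from userspace against a raw partition or LVM volume is an explicit flush request (a sketch only; the device name is made up, and whether the flush reaches the drive's write cache depends on the kernel and the device):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical LVM volume used as the guest disk. */
    int fd = open("/dev/vg0/guest-disk", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* ... guest writes would be issued here via pwrite() ... */

    /* fdatasync() on a block device asks the kernel to flush, but as
     * noted above it is not a barrier. */
    if (fdatasync(fd) < 0)
        perror("fdatasync");

    close(fd);
    return 0;
}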

The last two paragraphs apply when using _any_ file format and break
the integrity of guest journalling filesystems, not just qcow2.

Since no other code runs during this period, bugs in the device emulation, a user closing the SDL window, or issuing quit in the monitor will not corrupt the disk image. Your guest may require an fsck but the QCOW2 image will be fine.

Does this apply to KVM as well?  I thought KVM had separate threads
for I/O, so problems in another subsystem might crash an I/O thread in
mid action.  Is that work in progress?

Not really. There is a big lock that prevents two threads from ever running at the same time within QEMU.
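
Conceptually it looks like the sketch below (an illustrative pthreads example, not QEMU's actual code; the function names are invented):

#include <pthread.h>
#include <stdio.h>

/* One global mutex serializes everything that touches device state. */
static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static void *vcpu_thread(void *arg)
{
    pthread_mutex_lock(&big_lock);
    printf("vcpu: running device emulation\n");   /* never concurrent */
    pthread_mutex_unlock(&big_lock);
    return NULL;
}

static void *io_thread(void *arg)
{
    pthread_mutex_lock(&big_lock);
    printf("io: handling monitor/SDL events\n");  /* never concurrent */
    pthread_mutex_unlock(&big_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, vcpu_thread, NULL);
    pthread_create(&b, NULL, io_thread, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}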

Regards,

Anthony Liguori

Thanks again,
-- Jamie