
Re: [Qemu-devel] [RFC] Disk integrity in QEMU


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] Disk integrity in QEMU
Date: Sun, 12 Oct 2008 22:16:57 -0500
User-agent: Thunderbird 2.0.0.17 (X11/20080925)

Mark Wagner wrote:
> So, if your proposed default value for the cache is in effect, then O_DSYNC
> should provide the write-through required by the guest's use of O_DIRECT on
> the writes. However, if the default cache value is not used and it's set to
> cache=on, and if the guest is using O_DIRECT or O_DSYNC, I feel there are
> issues that need to be addressed.

The option would be cache=writeback, and the man page has a pretty clear warning that it could lead to data loss.

It's used for -snapshot and it's totally safe for that (it also improves write performance in that case). It's also there because a number of people expressed a concern that they did not care about data integrity and wished to be able to get the performance boost. I don't see any harm in that, since I think we'll now have adequate documentation.

> If QEMU had a similar design to Enterprise Storage with redundancy,
> battery backup, etc., I'd be fine with it, but it doesn't. QEMU is a
> layer that I had also thought was supposed to be small, lightweight, and
> unobtrusive, yet it is silently putting everyone's data at risk.
>
> The low-end iSCSI server from EqualLogic claims:
>     "it combines intelligence and automation with fault tolerance"
>     "Dual, redundant controllers with a total of 4 GB battery-backed memory"
>
> AFAIK QEMU provides neither of these characteristics.

So if this is your only concern, we're in violent agreement. You were previously arguing that we should use O_DIRECT in the host if we're not "lying" about write completions anymore. That's what I'm opposing because the details of whether we use O_DIRECT or not have absolutely nothing to do with data integrity as long as we're using O_DSYNC.



> The fact that the virtualization layer has a cache is really not that
> unusual. Do other virtualization layers lie to the guest and indicate
> that the data has successfully been ACK'd by the storage subsystem when
> the data is actually still in the host cache?
>
> -mark

Regards,

Anthony Liguori

