On Mon, Aug 31, 2009 at 05:53:23PM -0500, Anthony Liguori wrote:
> I think we should pity our poor users and avoid adding yet another
> obscure option that is likely to be misunderstood.
>
> Can someone do some benchmarking with cache=writeback and fdatasync
> first and quantify what the real performance impact is?
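For concreteness, this is roughly what that combination boils down to at the
syscall level - just an illustrative userspace sketch, not actual QEMU code,
and the file name and sizes are made up:

/* Sketch: with cache=writeback the image is opened without O_DIRECT or
 * O_SYNC, so guest writes complete once they hit the host page cache;
 * a guest flush/barrier is then honoured with fdatasync() on the image. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("disk.img", O_RDWR);      /* no O_DIRECT, no O_SYNC */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0, sizeof(buf));

    /* guest write: lands in the host page cache and returns */
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
        return 1;
    }

    /* guest flush/barrier: only now is the data forced to stable storage */
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        return 1;
    }

    close(fd);
    return 0;
}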
Some preliminary numbers, because they are very interesting. Note that
this is on a RAID controller, not cheap IDE disks. To make up for that
I used an image file on ext3, which due to its horrible fsync
performance should be something of a worst case. All these runs are
with Linux 2.6.31-rc8 + my various barrier fixes on guest and host,
using ext3 with barrier=1 on both.
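If you want a rough feel for that fsync overhead on a given host
filesystem, a little probe like the following works - again only an
illustrative sketch, the file name is made up; run it on the filesystem
holding the image:

/* Time a single fdatasync() on a freshly written file. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char buf[65536];
    struct timespec t0, t1;
    int fd = open("fsync-probe.tmp", O_RDWR | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("fdatasync: %.3f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6);
    close(fd);
    unlink("fsync-probe.tmp");
    return 0;
}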
The results were all between 9m38s and 9m39s (given that I've only done
three runs each, the differences may well be within measurement
tolerance).
For comparison, the raw block device nodes with cache=none (just one run)
come in at 9m36.759s, which is not far off. A completely native run is
7m39.326s, btw - and I fear much of the slowdown in KVM isn't I/O
related.