From: Jamie Lokier
Subject: Re: [kvm-devel] [Qemu-devel] Re: [PATCH 1/3] Refactor AIO interface to allow other AIO implementations
Date: Mon, 21 Apr 2008 00:39:13 +0100
User-agent: Mutt/1.5.13 (2006-08-11)

Avi Kivity wrote:
> >Does that mean "for the majority of deployments, the slow version is
> >sufficient.  The few that care about performance can use Linux AIO?"
>
> In essence, yes. s/slow/slower/ and s/performance/ultimate block device 
> performance/.
> 
> Many deployments don't care at all about block device performance; they 
> care mostly about networking performance.

That's interesting.  I'd have expected block device performance to be
important for most things, for the same reason that disk performance
is (well, reasonably) important for non-virtual machines.

But as you say next:

> >I'm under the impression that the entire and only point of Linux AIO
> >is that it's faster than POSIX AIO on Linux.
> 
> It is.  I estimate posix aio adds a few microseconds above linux aio per 
> I/O request, when using O_DIRECT.  Assuming 10 microseconds, you will 
> need 10,000 I/O requests per second per vcpu to have a 10% performance 
> difference.  That's definitely rare.

Oh, I didn't realise the difference was so small.
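The arithmetic behind that estimate can be checked in a couple of lines
(the 10 microsecond overhead is Avi's rough assumption above, not a
measured value):

```python
# Back-of-envelope check of the posix-aio-vs-linux-aio overhead estimate.
# Assumed figure (from the mail): posix aio costs ~10 us extra per request.
overhead_per_request = 10e-6   # seconds of extra CPU per I/O request
iops_per_vcpu = 10_000         # requests per second per vcpu

# Fraction of one vcpu-second consumed by the extra overhead:
cpu_fraction = overhead_per_request * iops_per_vcpu
print(f"{cpu_fraction:.0%}")   # 10%
```

So the 10% performance difference only appears once a single vcpu is
pushing ten thousand requests a second, which is the "definitely rare"
case Avi describes.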

At such a tiny difference, I'm wondering why Linux-AIO exists at all,
as it complicates the kernel rather a lot.  I can see the theoretical
appeal, but if performance is so marginal, I'm surprised it's in
there.

I'm also surprised the Glibc implementation of AIO using ordinary
threads is so close to it.  And then, I'm wondering why use AIO at
all: it suggests QEMU would run about as fast doing synchronous I/O in
a few dedicated I/O threads.

> >Does that mean "a managed environment can have some code which check
> >the host kernel version + filesystem type holding the VM image, to
> >conditionally enable Linux AIO?"  (Since if you care about
> >performance, which is the sole reason for using Linux AIO, you
> >wouldn't want to enable Linux AIO on any host in your cluster where it
> >will trash performance.)
> 
> Either that, or mandate that all hosts use a filesystem and kernel which 
> provide the necessary performance.  Take ovirt for example, which 
> provides the entire hypervisor environment, and so can guarantee this.
> 
> Also, I'd presume that those that need 10K IOPS and above will not place 
> their high throughput images on a filesystem; rather on a separate SAN LUN.

Does the separate LUN make any difference?  I thought O_DIRECT on a
filesystem was meant to be pretty close to block device performance.
I base this on messages here and there which say swapping to a file is
about as fast as swapping to a block device, nowadays.
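For what it's worth, the O_DIRECT path being discussed looks roughly like
the sketch below: the flag bypasses the page cache, but requires the
buffer, offset, and length to be block-aligned. The 4096-byte alignment is
a typical value, not something stated in this thread, and the fallback is
there because some filesystems (tmpfs, for one) reject O_DIRECT:

```python
import mmap
import os
import tempfile

ALIGN = 4096  # O_DIRECT needs block-aligned buffer, offset, and length

# Create a small stand-in for a disk image.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\0" * ALIGN)
tmp.close()

# Open with O_DIRECT where the filesystem supports it; otherwise fall
# back to a cached read so the sketch still runs.
try:
    fd = os.open(tmp.name, os.O_RDONLY | os.O_DIRECT)
except OSError:
    fd = os.open(tmp.name, os.O_RDONLY)

buf = mmap.mmap(-1, ALIGN)      # mmap returns page-aligned memory
nread = os.preadv(fd, [buf], 0) # aligned read straight into the buffer

os.close(fd)
os.remove(tmp.name)
```

Whether the fd underneath is a file on ext3 or a raw SAN LUN, the
read path after open() is the same shape, which is why file-backed
O_DIRECT can get close to block-device performance.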

Thanks for your useful remarks, btw.  There doesn't seem to be a lot
of good info about Linux-AIO around.

-- Jamie



