qemu-devel

Re: [Qemu-devel] [PATCH 2/2 v2] Direct IDE I/O


From: Gerd Hoffmann
Subject: Re: [Qemu-devel] [PATCH 2/2 v2] Direct IDE I/O
Date: Tue, 04 Dec 2007 14:21:34 +0100
User-agent: Thunderbird 2.0.0.9 (X11/20071115)

Anthony Liguori wrote:
>> IMHO it would be a much better idea to kill the aio interface altogether
>> and instead make the block drivers reentrant.  Then you can use
>> (multiple) posix threads to run the I/O async if you want.
> 
> Threads are a poor substitute for a proper AIO interface.  linux-aio
> gives you everything you could possibly want in an interface since it
> allows you to submit multiple vectored operations in a single syscall,
> use an fd to signal request completion, complete multiple requests in a
> single syscall, and inject barriers via fdsync.

I still think implementing async i/o at block driver level is the wrong
thing to do.  You'll end up reinventing the wheel over and over again
and add complexity to the block drivers which simply doesn't belong
there (or not supporting async I/O for most file formats).  Just look at
the insane file size of the block driver for the simplest possible disk
format: block-raw.c.  It will become even worse when adding a
linux-specific aio variant.

In contrast:  Making the block drivers reentrant should be easy for most
of them.  For the raw driver it should just be a matter of using the
pread/pwrite syscalls instead of lseek + read/write (which also saves a
syscall along the way, yea!).  Others probably need an additional lock
for metadata updates.  With that in place you can easily implement async
I/O via threads one layer above, and only once, in block.c.

IMHO the only alternative to that scheme would be to turn the block
drivers into some kind of remapping drivers for the various file formats,
which don't actually perform the I/O themselves.  Then you can handle the
actual I/O in a generic way using whatever API is available, be it
posix-aio, linux-aio or slow-sync-io.

cheers,
  Gerd



