
Re: [Qemu-devel] [PATCH] block-raw: Make cache=off default again


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] block-raw: Make cache=off default again
Date: Thu, 25 Jun 2009 09:31:44 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Jamie Lokier schrieb:
> Kevin Wolf wrote:
>> Jamie Lokier schrieb:
>>> Kevin Wolf wrote:
>>>> What happens with virtio I still need to understand. Obviously, as soon
>>>> as virtio decides to fall back to 4k requests, performance becomes
>>>> terrible.
>>> Does emulating a disk with 4k sector size instead of 512 bytes help this?
>> I just changed the virtio_blk code to always do the
>> blk_queue_hardsect_size with 4096, didn't change the behaviour.
> 
> You need quite a bit more than that to emulate a disk with a 4k
> sector size. There are the ATA/SCSI ID pages to update, and the
> special 512-byte offset trick.

Okay, then I'll just admit that I know too little about the Linux block
layer. I'll gladly try any patch (and hopefully understand it then), but
doing it myself would take me a lot of time experimenting, and I still
wouldn't know if I was doing it right.

>> I'm not sure if I have mentioned it in this thread: we have found that
>> it helps to use the deadline elevator instead of cfq in either the host
>> or the guest. I could accept this if it only helped when changed in the
>> guest (after all, I don't know the Linux block layer very well), but I
>> certainly don't understand how the host elevator could change the guest
>> request sizes - and no one on the internal mailing lists had an
>> explanation either.
> 
> The host elevator will certainly affect the timing of I/O requests,
> which it receives from the guest, and it will also affect how requests
> are merged to make larger requests.
> 
> So it's not surprising that the host elevator changes the sizes of
> requests by the time they reach the host disk.
> 
> It shouldn't change the size of requests inside the guest, _before_
> they reach the host.

Yeah, this is exactly what I was thinking, too. However, in reality it
_does_ influence the guest request sizes, for whatever reason (maybe
again something timing related?). I put debug code into both the qemu
virtio-blk implementation and the guest kernel module, and they both see
lots of 4k requests when the host uses cfq and much larger requests when
it uses deadline.

Kevin



