From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCH 2/4] block: immediately cancel oversized read/write requests
Date: Mon, 08 Sep 2014 16:54:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.0

On 08.09.2014 16:42, Paolo Bonzini wrote:
> On 08/09/2014 16:35, Peter Lieven wrote:
>>> messages. :)
>> So you would not throw an error msg here?
> No, though a trace could be useful.
>> Is there a howto somewhere on how to implement that?
> Try commit 4ac4458076e1aaf3b01a6361154117df20e22215.

Thanks for the pointer.
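
For anyone following along, the usual pattern is roughly this (made-up event
name, not taken from that commit): declare the event in the trace-events file
and call the generated trace_ helper from the C code.

# trace-events (hypothetical entry)
bdrv_oversized_request(int64_t sector_num, int nb_sectors) "sector_num %"PRId64" nb_sectors %d"

/* in the corresponding .c file */
#include "trace.h"

static void note_oversized_request(int64_t sector_num, int nb_sectors)
{
    /* no guest-visible error, just a tracepoint for debugging */
    trace_bdrv_oversized_request(sector_num, nb_sectors);
}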


>> What's your opinion on changing max_xfer_len to 0xffff regardless
>> of use_16_for_rw in iSCSI?
> If you implemented request splitting in the block layer, it would be
> okay to force max_xfer_len to 0xffff.

Unfortunately, I currently have no time for that. It will involve some
qiov shuffling that has to be properly tested.
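
A rough sketch of the splitting idea (hypothetical code, not QEMU API:
submit_chunk() stands in for the real read/write path, and max_xfer is the
per-request limit in sectors):

static int split_request(BlockDriverState *bs, int64_t sector_num,
                         int nb_sectors, QEMUIOVector *qiov,
                         int max_xfer, bool is_write)
{
    int done = 0;

    while (done < nb_sectors) {
        int chunk = MIN(nb_sectors - done, max_xfer);
        QEMUIOVector sub_qiov;
        int ret;

        /* carve the byte range of this chunk out of the caller's qiov */
        qemu_iovec_init(&sub_qiov, qiov->niov);
        qemu_iovec_concat(&sub_qiov, qiov,
                          (uint64_t)done * BDRV_SECTOR_SIZE,
                          (uint64_t)chunk * BDRV_SECTOR_SIZE);

        ret = submit_chunk(bs, sector_num + done, chunk, &sub_qiov, is_write);
        qemu_iovec_destroy(&sub_qiov);
        if (ret < 0) {
            return ret;
        }
        done += chunk;
    }
    return 0;
}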

Regarding iSCSI: in fact, the limit is currently 0xffff for all iSCSI
targets < 2 TB, because those are addressed with READ(10)/WRITE(10) CDBs,
whose transfer length field is only 16 bits wide. So it is not obvious at
all to me why a > 2 TB target should be able to handle bigger requests.
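
For reference, the arithmetic behind the 2 TB boundary (plain SCSI CDB
facts in a standalone example, not code from block/iscsi.c):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t block = 512;

    /* READ(10)/WRITE(10): 32-bit LBA, 16-bit transfer length */
    uint64_t cap10  = ((uint64_t)UINT32_MAX + 1) * block; /* 2 TiB addressable */
    uint64_t xfer10 = 0xffffULL * block;                  /* ~32 MiB per request */

    /* READ(16)/WRITE(16): 64-bit LBA, 32-bit transfer length */
    uint64_t xfer16 = 0xffffffffULL * block;              /* ~2 TiB per request */

    printf("READ(10): capacity %" PRIu64 " bytes, max request %" PRIu64 " bytes\n",
           cap10, xfer10);
    printf("READ(16): max request %" PRIu64 " bytes\n", xfer16);
    return 0;
}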

To come back to the root cause of this patch, multiwrite_merge, I still have some thoughts:
 - why are we merging requests for raw at all (especially for host devices and/or iSCSI)?
   The original patch from Kevin was meant to mitigate a QCOW2 performance regression.
   For iSCSI, the qiov concats destroy all the zero-copy efforts we made.
 - should we only merge requests that fall within the same cluster? (a sketch follows below)
 - why is there no multiread_merge?
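
A possible gate for the cluster question could look like this (hypothetical
helper, not the existing multiwrite_merge code; cluster_bytes is assumed to
be a power of two):

#include <stdbool.h>
#include <stdint.h>

static bool within_same_cluster(int64_t sector_a, int64_t sector_b,
                                int nb_sectors_b, uint64_t cluster_bytes)
{
    uint64_t start = (uint64_t)sector_a * 512;
    uint64_t end   = (uint64_t)(sector_b + nb_sectors_b) * 512 - 1;

    /* treat two requests as merge candidates only if the combined
     * range stays inside a single cluster */
    return (start / cluster_bytes) == (end / cluster_bytes);
}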

Peter


