
From: Paolo Bonzini
Subject: Re: [Qemu-block] [PATCH 4/4] block: Cater to iscsi with non-power-of-2 discard
Date: Tue, 25 Oct 2016 14:19:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 25/10/2016 14:12, Peter Lieven wrote:
> Am 25.10.2016 um 14:09 schrieb Paolo Bonzini:
>> On 25/10/2016 14:03, Peter Lieven wrote:
>>> Am 01.08.2016 um 11:22 schrieb Paolo Bonzini:
>>>> On 28/07/2016 04:39, Eric Blake wrote:
>>>>> On 07/27/2016 01:25 AM, Fam Zheng wrote:
>>>>>> On Thu, 07/21 13:34, Eric Blake wrote:
>>>>>>> +    max_write_zeroes = max_write_zeroes / alignment * alignment;
>>>>>> Not using QEMU_ALIGN_DOWN despite patch 3?
>>>>> Looks like I missed that on the rebase. Can fix if there is a
>>>>> reason for a respin.
>>>> Since Stefan acked this, I'm applying the patch and fixing it to use
>>>> QEMU_ALIGN_DOWN.
>>>>
>>>> Paolo
>>> Hi,
>>> I came across a sort of regression that we introduced by dropping the
>>> unaligned head and tail of a discard request.
>>> The discard alignment we use to trim the discard request is only a
>>> hint.
>>> I learned on the EqualLogics that a page (which is unusually large at
>>> 15 MB) is unallocated even if the discard happens in pieces, e.g. in
>>> slices of 1 MB requests.
>>> From my point of view I would like to restore the old behaviour.
>>> What do you think?
>> The right logic should be the one in Linux: if splitting a request, and
>> the next starting sector would be misaligned, stop the discard at the
>> previous aligned sector.  Otherwise leave everything alone.
> Just to clarify: if the guest sends incremental 1 MB discards, we
> would now drop all of them if the alignment is 15 MB. Previously,
> we sent all of the 1 MB requests.

Yes.  In this case there would be no need at all to split the request,
so each request should be passed through.

But hey, that firmware is seriously weird. :)

