Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset


From: John Snow
Subject: Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset
Date: Mon, 5 Aug 2019 16:03:05 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0


On 8/5/19 5:48 AM, Vladimir Sementsov-Ogievskiy wrote:
> 05.08.2019 12:26, Vladimir Sementsov-Ogievskiy wrote:
>> 02.08.2019 22:21, John Snow wrote:
>>>
>>>
>>> On 8/2/19 2:58 PM, Vladimir Sementsov-Ogievskiy wrote:
>>>> hbitmap_reset is broken: it rounds the requested region up. This leads
>>>> to the following bug, which is shown by the fixed test:
>>>>
>>>> assume granularity = 2
>>>> set(0, 3) # count becomes 4
>>>> reset(0, 1) # count becomes 2
>>>>
>>>> But the user of the interface assumes that virtual bit 1 should still
>>>> be dirty, so hbitmap should report the count as 4!
>>>>
>>>> In other words, because of granularity, setting one "virtual" bit does
>>>> make all "virtual" bits in the same chunk dirty. But this should not be
>>>> so for reset.
>>>>
>>>> Fix this by aligning the bounds correctly.
>>>>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>> ---
>>>>
>>>> Hi all!
>>>>
>>>> Hmm, is it a bug or feature? :)
>>>
>>> Very, very good question.
>>>
>>>> I don't have a test for mirror yet, but I think that sync mirror may be 
>>>> broken
>>>> because of this, as do_sync_target_write() seems to be using unaligned 
>>>> reset.
>>>>
>>>
>>> Honestly I was worried about this -- if you take a look at my patches
>>> where I add new bitmap sync modes, I bent over backwards to align
>>> requests for the sync=top bitmap initialization methods because I was
>>> worried about this possibly being the case.
>>>
>>>
>>> I'm not sure what the "right" behavior ought to be.
>>>
>>> Let's say you have a granularity of 8 bytes:
>>>
>>> if you reset 0-3 in one call, and then 4-7 in the next, what happens? If
>>> the caller naively assumes a 1:1 relationship, it might expect those two
>>> calls to leave the bit cleared. With alignment protection, we'll just
>>> fail to clear it both times and it remains set.
>>>
>>> On the other hand, if you do allow partial clears, the first reset for
>>> 0-3 will toggle off 4-7 too, even though we might be relying on that
>>> range still being dirty.
>>>
>>> Whether or not that's dangerous depends on the context, and only the
>>> caller knows the context. I think we need to make the semantic effect of
>>> the reset "obvious" to the caller.
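
The clamp-inward arithmetic under discussion fits in a few lines. A minimal,
self-contained sketch (plain C with illustrative names, not the actual QEMU
code):

    #include <inttypes.h>
    #include <stdio.h>

    /* One stored bit covers 2^g "virtual" bits.  Clamp a reset request
     * inward so that only chunks FULLY covered by [start, start + count)
     * are cleared. */
    static void clamp_reset_bounds(uint64_t start, uint64_t count, unsigned g,
                                   uint64_t *out_start, uint64_t *out_count)
    {
        uint64_t gran = 1ULL << g;
        uint64_t end = start + count;                       /* one past last bit */
        uint64_t astart = (start + gran - 1) & ~(gran - 1); /* round start up */
        uint64_t aend = end & ~(gran - 1);                  /* round end down */

        if (aend <= astart) {   /* no chunk fully covered: clear nothing */
            *out_start = *out_count = 0;
            return;
        }
        *out_start = astart;
        *out_count = aend - astart;
    }

    int main(void)
    {
        uint64_t s, c;

        /* The commit-message case: granularity 2 means g = 1. */
        clamp_reset_bounds(0, 1, 1, &s, &c);
        printf("reset(0, 1) clears %" PRIu64 " bits\n", c); /* 0: bit 1 stays dirty */

        /* The 8-byte case above: reset 0-3, then 4-7; neither call
         * fully covers the chunk, so the stored bit stays set. */
        clamp_reset_bounds(0, 4, 3, &s, &c); /* clears 0 bits */
        clamp_reset_bounds(4, 4, 3, &s, &c); /* clears 0 bits */
        return 0;
    }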
>>>
>>>
>>> I envision this:
>>>
>>> - hbitmap_reset(bitmap, start, length)
>>>    returns -EINVAL if the range is not properly aligned
>>
>> hbitmap_reset doesn't return any value; I think it should be an assertion

Works for me.

>>
>>>
>>> - hbitmap_reset_flags(bitmap, flags, start, length)
>>>    if (flags & HBITMAP_ALIGN_DOWN) align request to only full bits
>>>    if (flags & HBITMAP_ALIGN_UP) align request to cover any bit even
>>> partially touched by the specified range
>>>    otherwise, pass range through as-is to hbitmap_reset (and possibly get
>>> -EINVAL if caller did not align the request.)
>>>
>>>
>>> That way the semantics are always clear to the caller.
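
Roughly, a sketch of what that variant could look like (hbitmap_reset_flags
and the HBITMAP_ALIGN_* flags are hypothetical names from this proposal;
ROUND_UP, QEMU_ALIGN_DOWN, QEMU_IS_ALIGNED and hbitmap_granularity() are the
existing helpers):

    /* Hypothetical variant from this thread; not merged QEMU code. */
    #include "qemu/osdep.h"   /* ROUND_UP, QEMU_ALIGN_DOWN, QEMU_IS_ALIGNED */
    #include "qemu/hbitmap.h" /* HBitmap, hbitmap_reset, hbitmap_granularity */

    #define HBITMAP_ALIGN_DOWN (1u << 0) /* clamp to fully covered chunks */
    #define HBITMAP_ALIGN_UP   (1u << 1) /* widen to any touched chunk */

    static int hbitmap_reset_flags(HBitmap *hb, unsigned flags,
                                   uint64_t start, uint64_t count)
    {
        uint64_t gran = 1ULL << hbitmap_granularity(hb);
        uint64_t end = start + count;

        if (flags & HBITMAP_ALIGN_DOWN) {
            start = ROUND_UP(start, gran);    /* inward: start rounds up... */
            end = QEMU_ALIGN_DOWN(end, gran); /* ...and end rounds down */
            if (end <= start) {
                return 0;                     /* no chunk fully covered */
            }
        } else if (flags & HBITMAP_ALIGN_UP) {
            start = QEMU_ALIGN_DOWN(start, gran);
            end = ROUND_UP(end, gran);
        } else if (!QEMU_IS_ALIGNED(start, gran) ||
                   !QEMU_IS_ALIGNED(end, gran)) {
            return -EINVAL;                   /* strict: caller must align */
        }

        hbitmap_reset(hb, start, end - start);
        return 0;
    }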
>>
>> Hmm, I doubt it: is there any use for ALIGN_UP? In most cases it's safe
>> to think that something clean is dirty (and this is how hbitmap actually
>> works on set/get), but it seems always unsafe to ALIGN_UP a reset..
>>
>> So, I think it should default to ALIGN_DOWN, or just assert that the
>> request is aligned (which anyway leads to implementing a helper
>> hbitmap_reset_align_down)..

There might not be one at the moment -- it's just the existing behavior
so I catered to it. I'd definitely just omit it if no callers need that
semantic.

So we'd have a "strict aligned" mode and a "clamped down" mode, which
probably gives us what we need in all current cases.
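
For the commit-message example above (two virtual bits per stored bit, bits
0..3 dirty, count 4), the two modes would behave like this, reusing the
hypothetical names from the sketch above:

    hbitmap_reset_flags(hb, HBITMAP_ALIGN_DOWN, 0, 1); /* clamps to nothing; count stays 4 */
    hbitmap_reset_flags(hb, 0, 0, 1); /* strict: -EINVAL, request unaligned */
    hbitmap_reset_flags(hb, 0, 0, 2); /* aligned: clears bits 0..1, count drops to 2 */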

(Still catching up on all of today's emails, though.)

--js


