From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH v4 03/23] block: Make bdrv_round_to_clusters() signature more useful
Date: Thu, 28 Sep 2017 17:29:22 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.3.0

On 09/26/2017 02:29 PM, John Snow wrote:
> 
> 
> On 09/26/2017 03:18 PM, Eric Blake wrote:
>> On 09/26/2017 01:51 PM, John Snow wrote:
>>>
>>>
>>> On 09/13/2017 12:03 PM, Eric Blake wrote:
>>>> In the process of converting sector-based interfaces to bytes,
>>>> I'm finding it easier to represent a byte count as a 64-bit
>>>> integer at the block layer (even if we are internally capped
>>>> by SIZE_MAX or even INT_MAX for individual transactions, it's
>>>> still nicer to not have to worry about truncation/overflow
>>>> issues on as many variables).  Update the signature of
>>>> bdrv_round_to_clusters() to uniformly use int64_t, matching
>>>> the signature already chosen for bdrv_is_allocated and the
>>>> fact that off_t is also a signed type, then adjust clients
>>>> according to the required fallout.
>>>>
>>>> Signed-off-by: Eric Blake <address@hidden>
>>>> Reviewed-by: Fam Zheng <address@hidden>
>>>>
>>
>>>> @@ -946,7 +946,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
>>>>      struct iovec iov;
>>>>      QEMUIOVector bounce_qiov;
>>>>      int64_t cluster_offset;
>>>> -    unsigned int cluster_bytes;
>>>> +    int64_t cluster_bytes;
>>>>      size_t skip_bytes;
>>>>      int ret;
>>>>
>>>> @@ -967,6 +967,7 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
>>>>      trace_bdrv_co_do_copy_on_readv(bs, offset, bytes,
>>>>                                     cluster_offset, cluster_bytes);
>>>>
>>>> +    assert(cluster_bytes < SIZE_MAX);
>>>
>>> later in this function, is there any real or imagined risk of
>>> cluster_bytes exceeding INT_MAX when it's passed to
>>> bdrv_co_do_pwrite_zeroes?
>>>
>>>>      iov.iov_len = cluster_bytes;
>>
>> cluster_bytes is the input 'unsigned int bytes' rounded out to cluster
> 
> Ah, yes, we're probably not going to exceed that, you're right.
> 
>> boundaries, but where we know 'bytes <= BDRV_REQUEST_MAX_BYTES' (which
>> is 2^31 - 512).  Still, I guess you are right that rounding to a cluster
>> size could produce a larger value of exactly 2^31 (bigger than INT_MAX,
>> but still fits in 32-bit unsigned int, so my assert was to make sure
>> that truncating 64 bits to size_t iov.iov_len still works on 32-bit
>> platforms).
>>
>> In theory, I don't think we ever attempt an unaligned operation near
>> 2^31 that would round up to INT_MAX overflow (if we can, that's a
>> pre-existing bug that should be fixed separately).
>>
>> Should I tighten the assertion to assert(cluster_bytes <=
>> BDRV_REQUEST_MAX_BYTES), then see if I can come up with a case where we
>> can violate that?
>>
> 
> *Only* if you think it's worth your time. You'd know better than me at
> this point if this is remotely possible or not. Just a simple width
> check that caught my eye.

I reproduced a test case - we have a pre-existing bug.  An update to
qemu-io is coming up (I need to make it easy to turn on
BDRV_O_COPY_ON_READ); then a new iotest with my test case: create a
backing file with more than 2G of explicit zeroes, then open a brand-new
qcow2 wrapper file and read 2G-512 bytes at offset 1024.  Given the
default qcow2 cluster size of 64k, this will proceed to copy-on-read
2G+64k of data, which fits fine in the pre-patch unsigned int or
post-patch int64_t, but becomes an unintended no-op in
bdrv_co_do_pwrite_zeroes.

Took me the better part of a day to figure out how to provoke it in a
way appropriate for iotests, but I'm grateful you gave me the challenge.
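
And the no-op itself is just that narrowing at a call site.  Here's a
sketch with a hypothetical stub standing in for bdrv_co_do_pwrite_zeroes()
(whose byte count parameter was, as I understand the code at the time, a
plain int): a negative count means the main loop never runs, and the
function still reports success.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a callee that takes its byte count as a
 * plain int, the way bdrv_co_do_pwrite_zeroes() did at the time. */
static int pwrite_zeroes_stub(int64_t offset, int bytes)
{
    while (bytes > 0) {
        int chunk = bytes < 65536 ? bytes : 65536;
        /* ... actually zero 'chunk' bytes at 'offset' here ... */
        offset += chunk;
        bytes -= chunk;
    }
    return 0;            /* reports success even if the loop never ran */
}

int main(void)
{
    int64_t cluster_bytes = (1LL << 31) + 64 * 1024;  /* 2G+64k from the repro */

    /* The implicit int64_t -> int conversion narrows to a negative value
     * (on typical two's-complement targets), so nothing is written and
     * the call still "succeeds": the unintended no-op. */
    int ret = pwrite_zeroes_stub(0, cluster_bytes);
    printf("ret = %d\n", ret);               /* 0, but no data was written */
    return 0;
}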

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
