qemu-devel
From: Max Reitz
Subject: Re: [PATCH v3 05/10] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t
Date: Mon, 20 Jan 2020 12:59:47 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.3.1

On 19.12.19 11:03, Vladimir Sementsov-Ogievskiy wrote:
> We are going to introduce bdrv_dirty_bitmap_next_dirty so that the same
> variable may be used to store its return value and to be its parameter,
> so it should be int64_t.
> 
> Similarly, we are going to refactor hbitmap_next_dirty_area to use
> hbitmap_next_dirty together with hbitmap_next_zero, therefore we want
> hbitmap_next_zero parameter type to be int64_t too.
> 
> So, for convenience update all parameters of *_next_zero and
> *_next_dirty_area to be int64_t.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> ---
>  include/block/dirty-bitmap.h |  6 +++---
>  include/qemu/hbitmap.h       |  7 +++----
>  block/dirty-bitmap.c         |  6 +++---
>  nbd/server.c                 |  2 +-
>  tests/test-hbitmap.c         | 32 ++++++++++++++++----------------
>  util/hbitmap.c               | 13 ++++++++-----
>  6 files changed, 34 insertions(+), 32 deletions(-)

[...]

> diff --git a/util/hbitmap.c b/util/hbitmap.c
> index b6d4b99a06..df22f06be6 100644
> --- a/util/hbitmap.c
> +++ b/util/hbitmap.c
> @@ -193,7 +193,7 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first)
>      }
>  }
>  
> -int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
> +int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count)
>  {
>      size_t pos = (start >> hb->granularity) >> BITS_PER_LEVEL;
>      unsigned long *last_lev = hb->levels[HBITMAP_LEVELS - 1];
> @@ -202,6 +202,8 @@ int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
>      uint64_t end_bit, sz;
>      int64_t res;
>  
> +    assert(start >= 0 && count >= 0);
> +
>      if (start >= hb->orig_size || count == 0) {
>          return -1;
>      }
As far as I can see, NBD just passes NBDRequest.from (which is a
uint64_t) to this function (on NBD_CMD_BLOCK_STATUS).  Would this allow
a malicious client to send a value > INT64_MAX, thus provoking an
overflow and killing the server with this new assertion?

On second thought, we have this problem already everywhere in
nbd_handle_request().  I don’t see it or its caller ever checking
whether the received values are in bounds; it just passes them to all
kinds of block layer functions that sometimes even just accept plain
ints.  Well, I suppose all the other functions just error out, so it
probably isn’t an actual problem in practice so far...

Max

