From: Alberto Garcia
Subject: Re: [Qemu-block] [PATCH 2/2] qcow2: Avoid memory over-allocation on compressed images
Date: Wed, 21 Feb 2018 11:04:03 +0100
User-agent: Notmuch/0.18.2 (http://notmuchmail.org) Emacs/24.4.1 (i586-pc-linux-gnu)

On Tue 20 Feb 2018 11:24:59 PM CET, Eric Blake wrote:

I was also preparing a patch to change this, but you arrived first :-)

> So, it's time to cut back on the waste.  A compressed cluster
> will NEVER occupy more than an uncompressed cluster (okay, gzip
> DOES document that because the compression stream adds metadata,
> and because of the pigeonhole principle, there are worst case
> scenarios where attempts to compress will actually inflate an
> image - but in those cases, we would just write the cluster
> uncompressed instead of inflating it).  And as that is a smaller
> amount of memory, we can get by with the simpler g_malloc.

> -        if (!s->cluster_cache) {
> -            s->cluster_cache = g_malloc(s->cluster_size);
> +            assert(!s->cluster_cache);
> +            s->cluster_data = g_try_malloc(s->cluster_size);
> +            s->cluster_cache = g_try_malloc(s->cluster_size);

There are a few things here:

- QEMU won't write compressed data if the size is >= s->cluster_size
  (there's an explicit check for that in qcow2_co_pwritev_compressed())

- The size field of the compressed cluster descriptor *does* allow
  larger sizes, so you can't simply read csize bytes into
  s->cluster_data because you could cause a buffer overflow.

- Solution a: check that csize < s->cluster_size and return an error if
  it's not. However, although QEMU won't produce an image with a
  compressed cluster that is larger than the uncompressed one, the qcow2
  on-disk format in principle allows for that, so arguably we should
  accept it.

- Solution b: the width of the 'compressed cluster size' field is
  (cluster_bits - 8) bits, so it can describe at most
  (cluster_size / 256) sectors. Since the size of each sector is 512
  bytes, the maximum possible size that the field can store is
  (cluster_size * 2) bytes. So allocate that amount of space for
  s->cluster_data, read the data as it is on disk and let the
  decompression code return an error if the data is indeed corrupted
  or doesn't fit in the output buffer (see the small example program
  below).
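
For what it's worth, here's a small standalone program (not QEMU code,
just an illustration of the arithmetic behind solution b) that prints
the worst case for each valid cluster_bits value:

    #include <stdio.h>

    int main(void)
    {
        /* qcow2 allows cluster sizes from 512 bytes (9) to 2 MB (21) */
        for (unsigned cluster_bits = 9; cluster_bits <= 21; cluster_bits++) {
            unsigned long long cluster_size = 1ULL << cluster_bits;
            /* the size field is (cluster_bits - 8) bits wide, so it can
             * describe at most cluster_size / 256 sectors of 512 bytes */
            unsigned long long max_sectors = 1ULL << (cluster_bits - 8);
            unsigned long long max_csize = max_sectors * 512;
            printf("cluster_bits=%2u cluster_size=%8llu "
                   "max compressed bytes=%8llu (2x)\n",
                   cluster_bits, cluster_size, max_csize);
        }
        return 0;
    }

In the driver that would translate into allocating
g_try_malloc(s->cluster_size * 2) for s->cluster_data, with the
existing decompression error path rejecting anything that is corrupted
or doesn't fit into s->cluster_cache.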

Berto


