From: Eric Blake
Subject: Re: [Qemu-block] [PATCH v3 30/39] qcow2: Update expand_zero_clusters_in_l1() to support L2 slices
Date: Fri, 26 Jan 2018 13:46:19 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2

On 01/26/2018 08:59 AM, Alberto Garcia wrote:
> expand_zero_clusters_in_l1() expands zero clusters as a necessary step
> to downgrade qcow2 images to a version that doesn't support metadata
> zero clusters. This function takes an L1 table (which may or may not
> be active) and iterates over all its L2 tables looking for zero
> clusters.
> 
> Since we'll be loading L2 slices instead of full tables, we need to add
> an extra loop that iterates over all slices of each L2 table, and we
> should also use the slice size when allocating the buffer used when
> the L1 table is not active.
> 
> This function doesn't need any further changes, so apart from that
> this patch simply renames the variable from l2_table to l2_slice.
> 
> Finally, and since we have to touch the bdrv_read() / bdrv_write()
> calls anyway, this patch takes the opportunity to replace them with
> the byte-based bdrv_pread() / bdrv_pwrite().

This last paragraph could perhaps be split off into a separate patch, but
that's more churn, so I'm also fine leaving it here.
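
(As an aside for anyone following along, here is a minimal, self-contained
sketch of the loop shape and offset arithmetic the commit message describes.
The names mirror the diff below, but the numbers are made up and this is
plain C rather than QEMU code:)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BDRV_SECTOR_SIZE 512          /* QEMU's fixed sector size */

int main(void)
{
    uint64_t l2_offset   = 0x50000;   /* made-up on-disk offset of one L2 table */
    int      slice_size2 = 4096;      /* made-up size of one L2 slice, in bytes */
    int      n_slices    = 4;         /* made-up: L2 table size / slice size    */

    for (int slice = 0; slice < n_slices; slice++) {
        uint64_t slice_offset = l2_offset + slice * (uint64_t)slice_size2;
        /* the old code addressed this region by sector, the new code by byte offset */
        printf("slice %d: sector %" PRIu64 ", byte offset %" PRIu64 "\n",
               slice, slice_offset / BDRV_SECTOR_SIZE, slice_offset);
    }
    return 0;
}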

> 
> Signed-off-by: Alberto Garcia <address@hidden>
> ---
>  block/qcow2-cluster.c | 52 ++++++++++++++++++++++++++++-----------------------
>  1 file changed, 29 insertions(+), 23 deletions(-)

Reviewed-by: Eric Blake <address@hidden>

> @@ -1905,22 +1908,24 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
>              goto fail;
>          }
>  
> -        {
> +        for (slice = 0; slice < n_slices; slice++) {
> +            uint64_t slice_offset = l2_offset + slice * slice_size2;
> +            bool l2_dirty = false;
>              if (is_active_l1) {
>                  /* get active L2 tables from cache */
> -                ret = qcow2_cache_get(bs, s->l2_table_cache, l2_offset,
> -                                      (void **)&l2_table);
> +                ret = qcow2_cache_get(bs, s->l2_table_cache, slice_offset,
> +                                      (void **)&l2_slice);

The (void **) cast is probably still necessary (anything can go to
void*, but C gets pickier when going to void**), but...
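
A standalone illustration of that rule (not QEMU code; takes_voidp() and
takes_voidpp() are made-up stand-ins for a void * parameter like
bdrv_pread()'s and a void ** parameter like qcow2_cache_get()'s):

#include <stdint.h>

static void takes_voidp(void *p)    { (void)p; }
static void takes_voidpp(void **pp) { (void)pp; }

int main(void)
{
    uint64_t *l2_slice = 0;

    takes_voidp(l2_slice);            /* OK: any object pointer converts to void * implicitly */
    takes_voidpp((void **)&l2_slice); /* cast required: uint64_t ** does not convert to void ** */
    return 0;
}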

>              } else {
>                  /* load inactive L2 tables from disk */
> -                ret = bdrv_read(bs->file, l2_offset / BDRV_SECTOR_SIZE,
> -                                (void *)l2_table, s->cluster_sectors);
> +                ret = bdrv_pread(bs->file, slice_offset,
> +                                 (void *)l2_slice, slice_size2);

...do we still need this cast to void*?
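
If bdrv_pread() takes a void * buffer (which is what the question assumes),
the cast should indeed be droppable; a tiny sketch with a hypothetical
stand-in:

#include <stdint.h>

/* fake_pread() is a made-up stand-in for a byte-based read with a void * buffer */
static int fake_pread(int64_t offset, void *buf, int bytes)
{
    (void)offset; (void)buf;
    return bytes;
}

int main(void)
{
    uint64_t l2_slice[256];

    fake_pread(0, l2_slice, (int)sizeof(l2_slice));  /* no (void *) cast needed */
    return 0;
}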


>  
> -                    ret = bdrv_write(bs->file, l2_offset / BDRV_SECTOR_SIZE,
> -                                     (void *)l2_table, s->cluster_sectors);
> +                    ret = bdrv_pwrite(bs->file, slice_offset,
> +                                      (void *)l2_slice, slice_size2);

and again here

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


