From: Kevin Wolf
Subject: Re: [Qemu-devel] Re: [PATCH] qcow2/virtio corruption: Don't allocate the same cluster twice
Date: Thu, 07 May 2009 10:01:42 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Gleb Natapov wrote:
> The reason we need to copy unmodified sectors in alloc_cluster_link_l2()
> is exactly to handle concurrent writes into the same cluster. This is
> essentially RMW. I don't see why concurrent writes should not work with
> the logic in place. There is a bug there currently, of course :) Can
> somebody check this patch:
> 
> 
> diff --git a/block-qcow2.c b/block-qcow2.c
> index 7840634..801d26d 100644
> --- a/block-qcow2.c
> +++ b/block-qcow2.c
> @@ -995,8 +995,8 @@ static int alloc_cluster_link_l2(BlockDriverState *bs, uint64_t cluster_offset,
>          if(l2_table[l2_index + i] != 0)
>              old_cluster[j++] = l2_table[l2_index + i];
>  
> -        l2_table[l2_index + i] = cpu_to_be64((cluster_offset +
> -                    (i << s->cluster_bits)) | QCOW_OFLAG_COPIED);
> +        l2_table[l2_index + i] = cpu_to_be64(((cluster_offset +
> +                    (i << s->cluster_bits)) | QCOW_OFLAG_COPIED));
>       }
>  
>      if (bdrv_pwrite(s->hd, l2_offset + l2_index * sizeof(uint64_t),
> @@ -1005,7 +1005,8 @@ static int alloc_cluster_link_l2(BlockDriverState *bs, uint64_t cluster_offset,
>          goto err;
>  
>      for (i = 0; i < j; i++)
> -        free_any_clusters(bs, be64_to_cpu(old_cluster[i]), 1);
> +        free_any_clusters(bs, be64_to_cpu(old_cluster[i]) & ~QCOW_OFLAG_COPIED,
> +                          1);
>  
>      ret = 0;
>  err:

After Gleb explained to me how the whole thing is meant to work, I agree
that this is the right fix. The first hunk is meaningless, though. I
suggest replacing it with a hunk that adds a big comment explaining how
things work. ;-)

Kevin
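
For reference, a minimal standalone sketch (not part of the original mail) of
the point behind the second hunk, assuming QCOW_OFLAG_COPIED is the top bit of
an L2 entry as defined in block-qcow2.c and using a made-up offset value: an L2
entry stores the cluster offset ORed with flag bits, so passing the raw entry
to free_any_clusters() hands it a bogus offset, and masking the flag off
recovers the real one.

#include <stdint.h>
#include <stdio.h>

#define QCOW_OFLAG_COPIED (1ULL << 63)   /* assumption: top bit, as in block-qcow2.c */

int main(void)
{
    uint64_t cluster_offset = 0x50000;                        /* made-up example offset */
    uint64_t l2_entry = cluster_offset | QCOW_OFLAG_COPIED;   /* what the L2 table stores */

    /* Freeing with the raw entry would use a bogus offset; the fixed code
     * masks the flag off first, recovering the real cluster offset. */
    printf("raw L2 entry:  0x%llx\n", (unsigned long long)l2_entry);
    printf("masked offset: 0x%llx\n",
           (unsigned long long)(l2_entry & ~QCOW_OFLAG_COPIED));
    return 0;
}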



