
Re: [ovirt-devel] Disk sizes not updated on unmap/discard


From: Eric Blake
Subject: Re: [ovirt-devel] Disk sizes not updated on unmap/discard
Date: Fri, 2 Oct 2020 10:03:22 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.11.0

On 10/2/20 3:41 AM, Kevin Wolf wrote:

>> Kevin, is this the expected behavior or a bug in qemu?
>>
>> The disk I tested is a single qcow2 image without a backing file, so
>> theoretically qemu can deallocate all the discarded clusters.
> 
> This is expected. Discard just frees the cluster wherever it is stored,
> but it doesn't compact the image, i.e. move data at higher offsets to
> lower offsets (which would be a rather expensive operation).
> 
> If your storage supports thin provisioning/hole punching (the most
> common case of this is sparse files on a filesystem), then you can use
> the freed space for something else. If it doesn't, it's just marked free
> on the qcow2 level and future writes to the image will allocate the
> freed space first instead of growing the image, but you won't be able to
> use it for things outside of the image.
> 
> In contrast, 'qemu-img convert' starts with an empty file and only
> writes what needs to be written, so it will result in a compacted image
> file that doesn't have holes and is as short as it can be.

Of course, writing a tool to defragment qcow2 files in-place is not a
bad idea, if someone wants a potentially fun project.  But it's not the
highest priority task (since copying to fresh storage gets the same
effect, albeit with a temporarily larger storage requirement), so I
won't hold my breath on someone jumping into such a task in the near future.
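
For anyone who wants to try the copy-based approach Kevin describes, a
rough sketch (image names are placeholders; the guest should be shut
down before swapping images):

  # "disk size" in qemu-img info reflects what the image actually
  # occupies on the host; du shows the same thing for sparse files.
  $ qemu-img info disk.qcow2
  $ du -h disk.qcow2

  # Compact by copying: the destination image only receives clusters
  # that are still allocated, so it comes out as short as possible.
  $ qemu-img convert -p -O qcow2 disk.qcow2 disk-compact.qcow2
  $ mv disk-compact.qcow2 disk.qcow2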

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


