Re: [ovirt-users] Q: Sparsify/fstrim don't reduce actual disk image size


From: Nir Soffer
Subject: Re: [ovirt-users] Q: Sparsify/fstrim don't reduce actual disk image size
Date: Thu, 8 Apr 2021 18:06:42 +0300

On Thu, Apr 8, 2021 at 10:04 AM Andrei Verovski <andreil1@starlett.lv> wrote:
>
> Hi,
>
> Many thanks, it worked! Actual size shrank from 584 to 9 GB, now I have
> space to back up.
>
> Are there any guidelines on how to format QCOW2 images (for Linux) so they
> can be shrunk efficiently?
> With this NextCloud/Collabora LVM setup I created the volumes in the following order:
> swap
> ext2 boot
> ext4 root
> ext4 var (large, for data, all cloud data stored here)
>
> Ext4 partitions on LVM.
>
> Or is it not predictable how the data will span the QCOW2 space?

I think there is no way to avoid this issue with oVirt block storage.

Regardless of how the data is laid out in the qcow2 image, when there is not
enough free space oVirt extends the disk. This happens many times, until the
disk reaches its virtual size (actually a bit more, because of qcow2 metadata).

For example, we start with a 1G image:

[xx------]

Now you write more data and oVirt extends the image:

[xxxxxxxx------]

This repeats many times...

[xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-------]
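
You can watch this growth from the host side by comparing the image's virtual
size with the current size of the logical volume backing it; a minimal sketch
with placeholder VG/LV names (in oVirt the real names are UUIDs):

    # Virtual size reported by the qcow2 header (placeholder device path)
    qemu-img info /dev/<vg_name>/<lv_name>

    # Current size of the logical volume that backs the image
    lvs --units g <vg_name>/<lv_name>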

When you sparsify, some clusters are marked as zero (or discarded), but if
they are not at the end of the image we cannot shrink the image.

[xxx----xx-----------xxxxx-xx--------xxx-x------------------xx--xx---------]

When you copy the image to a new image, the discarded or zero parts are
skipped and the new image contains only the actual data.
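
A minimal sketch of such a copy outside of oVirt, assuming the VM is down and
using placeholder paths; qemu-img convert skips unallocated and zero clusters,
so the destination holds only the real data:

    # Copy the image; discarded/zero clusters are not allocated in the destination
    qemu-img convert -p -O qcow2 /path/to/src.qcow2 /path/to/compacted.qcow2

    # For file-backed images, "disk size" in the output shows the actual allocation
    qemu-img info /path/to/src.qcow2
    qemu-img info /path/to/compacted.qcow2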

If qemu supported a way to defragment an image (ideally online):

[xxxxxxxxxxxxxxxxxxxx-------------------------------------------------------]

oVirt could shrink the logical volume after that:

[xxxxxxxxxxxxxxxxxxxx----]
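
Until something like that exists, you can at least see how far the data
extends into the device with qemu-img check, which reports the image end
offset; a sketch with a placeholder device path:

    # "Image end offset" is the highest offset used by qcow2 data and metadata;
    # the logical volume could not be reduced below this point
    qemu-img check /dev/<vg_name>/<lv_name>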

Adding qemu-block, since this is an interesting use case that may be relevant
to other qemu users.

Nir

> > On 7 Apr 2021, at 18:43, Nir Soffer <nsoffer@redhat.com> wrote:
> >
> > On Wed, Apr 7, 2021 at 5:36 PM Andrei Verovski <andreil1@starlett.lv> wrote:
> >>
> >> Hi !
> >>
> >> I have a VM (under oVirt) with a single thin-provisioned disk (~600 GB)
> >
> > I guess you are using block storage?
> >
> >> running NextCloud on Debian 9.
> >> Right now the VM HD is almost empty. Unfortunately, it occupies 584 GB
> >> (virtual size = 600 GB).
> >> All partitions (except swap and boot) are EXT4 with the discard option.
> >
> > You don't need to use the discard mount option. fstrim works without it.
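
If you drop the discard mount option, the usual alternative is to run fstrim
periodically; a minimal sketch (the systemd timer ships with util-linux on most
current distributions, though it may not be enabled by default):

    # One-off trim of all mounted filesystems that support discard
    fstrim -av

    # Or trim on a schedule instead of mounting with "discard"
    systemctl enable --now fstrim.timer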
> >
> >> In oVirt, “enable discard = on”.
> >>
> >> # fstrim -av runs successfully:
> >> /var: 477.6 GiB (512851144704 bytes) trimmed on /dev/mapper/vg--system-lv4--data
> >> /boot: 853.8 MiB (895229952 bytes) trimmed on /dev/mapper/vg--system-lv2--boot
> >> /: 88.4 GiB (94888611840 bytes) trimmed on /dev/mapper/vg--system-lv3--sys
> >>
> >> When fstrim runs again, it trims zero bytes. I even ran “Sparsify” in
> >> oVirt. Unfortunately, the actual size is still 584 GB.
> >>
> >> Here is /etc/fstab:
> >> /dev/mapper/vg--system-lv3--sys  /      ext4  discard,noatime,nodiratime,errors=remount-ro  0  1
> >> /dev/mapper/vg--system-lv2--boot /boot  ext2  defaults                                      0  2
> >> /dev/mapper/vg--system-lv4--data /var   ext4  discard,noatime,nodiratime                    0  2
> >> /dev/mapper/vg--system-lv1--swap none   swap  sw                                            0  0
> >>
> >> When the disk was partitioned/formatted, swap and boot were created first
> >> and positioned at the beginning.
> >>
> >> What is wrong here? Is it possible to fix all this?
> >
> > Discarding data marks the areas in the qcow2 image as zero, but it does not
> > move the actual data around (which would be very slow). So if the clusters
> > were at the end of the image, they remain there and the actual file size
> > stays the same.
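
You can see this with qemu-img map, which lists which guest ranges are still
allocated and at which host offsets their data lives; discarded ranges show up
as unallocated or zero, while the remaining clusters keep their old positions.
A sketch with a placeholder path, to be run while the VM is down:

    # Human-readable allocation map: guest offset, length, host offset, file
    qemu-img map /path/to/disk.qcow2

    # JSON variant, convenient for scripting (e.g. finding the highest host offset)
    qemu-img map --output=json /path/to/disk.qcow2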
> >
> > The only way to reclaim the space is:
> > 1. sparsify the disk - must be done *before* copying the disk
> > 2. move the disk to another storage domain
> > 3. move the disk back to the original storage domain
> >
> > We may have an easier and more efficient way in the future that works with
> > a single storage domain, but it will still have to copy the entire disk. If
> > the disk is really mostly empty, it should be fast.
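
Outside of oVirt, a roughly equivalent manual sequence (with placeholder paths,
and assuming the VM is shut down) would be to sparsify the image and then copy
it. I believe oVirt's Sparsify is based on virt-sparsify, so this is only an
illustration, not the exact commands oVirt runs:

    # Step 1: free unused guest blocks inside the image (requires libguestfs tools)
    virt-sparsify --in-place /path/to/disk.qcow2

    # Steps 2-3: copying drops the freed clusters, similar to what moving the
    # disk to another storage domain and back does in oVirt
    qemu-img convert -p -O qcow2 /path/to/disk.qcow2 /path/to/disk-compacted.qcow2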
> >
> > Nir
> >
>



