qemu-devel
From: Daniel P. Berrangé
Subject: Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
Date: Thu, 16 Aug 2018 09:22:00 +0100
User-agent: Mutt/1.10.1 (2018-07-13)

On Thu, Aug 16, 2018 at 09:35:52AM +0800, lampahome wrote:
> We all know there's a 16 TB file size limit on ext4, and other filesystems
> have their own limits too.
> 
> If I create a 20 TB qcow2 on ext4 and write more than 16 TB to it, the data
> beyond 16 TB can't be written.
> 
> So, is there any better way to handle this situation?

I'd really just recommend using a different filesystem. In particular, XFS
has a massively higher file size limit - tested to 500 TB in RHEL-7, with a
theoretical max size of 8 EB. It is a very mature filesystem & the default
in RHEL-7.
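For reference, the 16 TB figure follows from ext4's on-disk format: logical block numbers within a file are 32-bit, so with the common 4 KiB block size a single file tops out at 2^32 blocks. A minimal back-of-the-envelope sketch (assuming the default 4 KiB block size; smaller block sizes lower the limit proportionally):

```python
# ext4 addresses a file's logical blocks with 32-bit numbers, so the
# maximum file size is (block size) * 2^32. With the usual 4 KiB blocks
# that works out to exactly 16 TiB.
BLOCK_SIZE = 4096          # bytes, the default ext4 block size
MAX_BLOCKS = 2 ** 32       # 32-bit logical block addressing

max_file_bytes = BLOCK_SIZE * MAX_BLOCKS
print(max_file_bytes // 2 ** 40, "TiB")   # -> 16 TiB
```

This is why a 20 TB qcow2 image backed by a single ext4 file cannot be fully populated, regardless of how qcow2 itself lays out the data.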

> What I thought of is to create a new qcow2, called qcow2-new, and set its
> backing file to be the previous qcow2.

A bit of a hack, but it could work, albeit with the extra pain of managing
your VMs. If you create the new qcow2 layer and the guest rewrites blocks
that were already written, you're going to end up storing that data twice
(the original data in the backing file, and the new active data in the top
layer). So your 20 TB disk may end up consuming way more than 20 TB on the
host.
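A minimal sketch of that worst case, using the sizes from the thread (16 TB already written to the base image, 20 TB virtual disk; the arithmetic, not qcow2 internals, is all this shows):

```python
# Worst-case host space for an overlay on top of a full backing file,
# illustrating the double-storage effect described above. Sizes in TB;
# the 16 and 20 come from the thread, the rest is arithmetic.
backing_used = 16          # TB of data already in the original qcow2
virtual_size = 20          # TB virtual disk size seen by the guest

# The backing file is read-only once the overlay exists, so its 16 TB
# stays allocated forever. If the guest eventually rewrites every block,
# the overlay alone can grow to the full virtual size.
worst_case_host_usage = backing_used + virtual_size
print(worst_case_host_usage, "TB")   # -> 36 TB for a "20 TB" disk
```

The overlay itself would typically be created with something like `qemu-img create -f qcow2 -b base.qcow2 top.qcow2` (paths here are hypothetical), after which the guest writes go only to the top layer.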

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


