qemu-discuss

Re: how to improve qcow performance?


From: Geraldo Netto
Subject: Re: how to improve qcow performance?
Date: Wed, 21 Jul 2021 14:20:10 +0200

Dear Nir/Friends,

On Tue, 20 Jul 2021 at 11:34, Nir Soffer <nsoffer@redhat.com> wrote:
>
> On Thu, Jul 15, 2021 at 2:33 PM Geraldo Netto <geraldonetto@gmail.com> wrote:
> >
> > Dear Friends,
> >
> > I beg your pardon for such a newbie question
> > But I would like to better understand how to improve the qcow performance
>
> I guess you mean how to improve "qcow2" performance. If you use "qcow"
> format the best way is to switch to "qcow2".

I read here [1] that there was going to be a qcow3, but it seems that page is
outdated (last updated in September 2016)
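
(Side note, in case it helps anyone else: if I understand correctly, the
"qcow3" features from that page landed as version 3 of the qcow2 format,
which qemu-img selects with the compat=1.1 creation option, and which I
believe has been the default for a long time now:

    $ qemu-img create -f qcow2 -o compat=1.1 disk.qcow2 10G
    $ qemu-img info disk.qcow2   # "Format specific information" shows compat: 1.1
)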

> > I was checking the qemu-img and it seems that the following parameters
> > are the most relevant to optimise the performance, no?
> >
> >   'cache' is the cache mode used to write the output disk image, the valid
> >     options are: 'none', 'writeback' (default, except for convert),
> >     'writethrough', 'directsync' and 'unsafe' (default for convert)
> >
> > Should I infer that directsync means bypassing the whole stack and writing
> > directly to the disk?
>
> 'directsync' uses direct I/O, and also calls fsync() for every write. This is
> the slowest mode and does not make sense for converting images.
>
> 'none' uses direct I/O (O_DIRECT). This enables native async I/O (libaio)
> which can give better performance in some cases.
>
> 'writeback' uses the page cache, considering the write complete when the
> data is in the page cache, and reading data from the page cache. This is
> likely to give the best performance, but is also likely to give inconsistent
> performance and cause trouble for other applications.
>
> The kernel will write a huge amount of data to the page cache, and from time
> to time try to flush a huge amount of data, which can cause long delays in
> other processes accessing the same storage. It also pollutes the page cache
> with data that may not be needed after the image is converted, for example
> when you convert an image on one host, writing to shared storage, and the
> image is used later on another host.
>
> 'writethrough' seems to use the page cache, but it reports writes only after
> data is flushed, so it will be as slow as 'directsync' for writing, and can
> cause the same issues with the page cache as 'writeback'.
>
> 'unsafe' (default for convert) means writes are never flushed to disk, which
> is unsafe when used in a VM's -drive option, but completely safe when used
> with qemu-img convert, since qemu-img completes the operation with fsync().
>
> The most important option for performance is -W (unordered writes).
> For writing to block devices, it is up to 6 times faster. But it can cause
> fragmentation, so you may get faster copies, but accessing the image
> later may be slower.

I see! Now I get it
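
Just to check my understanding, a convert command combining these options
would look something like this (file names are placeholders, not tested on
our storage yet):

    # -t none: direct I/O on the destination, -W: allow out-of-order writes
    $ qemu-img convert -p -t none -W -O qcow2 src.qcow2 dst.qcow2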

> Check this for example of -W usage:
> https://bugzilla.redhat.com/1511891#c57
>
> Finally there is the -m option - the default value (8) gives good performance,
> but using -m 16 can be a little faster.
>
> >   'src_cache' is the cache mode used to read input disk images, the valid
> >     options are the same as for the 'cache' option
> >
> > I didn't follow where I should look to check the 'cache' options :'(
>
>        -t CACHE
>               Specifies the cache mode that should be used with the
>               (destination) file. See the documentation of the emulator's
>               -drive cache=... option for allowed values.
>
> "See the documentation of the emulator's -drive cache=" means see qemu(1).
>
> > I guess that using smaller files gives better performance due to the
> > reduced amount of metadata to handle?
>
> What do you mean by smaller files?

I mean, by reducing the size of each qcow image and distributing the images
among different NAS boxes, we would reduce the pressure from metadata updates
on each qcow image, and that should translate into better performance, no?
(It's just an intuition.)
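
If my intuition is wrong there, I also wonder whether the qcow2 cluster size
matters more than the image size, since (as far as I understand) each
allocated cluster needs an L2 table entry, so fewer, larger clusters mean
less metadata to update, e.g.:

    # larger clusters (default is 64k, maximum is 2M) mean fewer L2 entries
    $ qemu-img create -f qcow2 -o cluster_size=2M disk.qcow2 100G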

Just to describe the scenario: we have an all-cloud environment using
Kubernetes with Longhorn, and behind the scenes there are qcow images mapped
to each block device exposed on Kubernetes. We are studying ways to optimise
it, and especially to replace the NFS architecture that we have now (too slow
for our needs).

> > In any case, I saw the qemu-io command and I plan to stress test it
>
> The best test is to measure the actual operation with qemu-img convert
> with different options and the relevant storage.

Interesting catch, will certainly check it out!!!
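
I will probably just time a few combinations against our storage, something
like this rough sketch (paths are placeholders):

    #!/bin/sh
    # compare destination cache modes; 'unsafe' is the default for convert
    for cache in none writeback unsafe; do
        echo "cache=$cache"
        rm -f /mnt/test/dst.qcow2
        time qemu-img convert -t "$cache" -W -m 16 -O qcow2 \
            src.qcow2 /mnt/test/dst.qcow2
    done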

> Nir
>


[1] https://wiki.qemu.org/Features/Qcow3


Geraldo Netto
site: http://exdev.sf.net
github: https://github.com/geraldo-netto
linkedin: https://www.linkedin.com/in/geraldonetto
facebook: https://web.facebook.com/geraldo.netto.161


