From: Alberto Garcia
Subject: Re: [Qemu-block] [PATCH 0/6] qcow2: Make the L2 cache cover the whole image by default
Date: Mon, 06 Aug 2018 13:07:09 +0200
User-agent: Notmuch/0.18.2 (http://notmuchmail.org) Emacs/24.4.1 (i586-pc-linux-gnu)

On Mon 06 Aug 2018 12:45:20 PM CEST, Kevin Wolf wrote:
> On 06.08.2018 at 09:47, Alberto Garcia wrote:
>> On Fri 03 Aug 2018 04:55:42 PM CEST, Kevin Wolf wrote:
>> > By the way, weren't you working on subclusters a while ago? How did
>> > that go? Because I think those would enable us to use larger
>> > cluster sizes and therefore reduce the metadata sizes as well.
>> 
>> I had a working prototype, but the changes to both the code and the
>> on-disk format were not trivial. I would need to re-evaluate its
>> performance impact after all the changes that we have had since then,
>> and then see if it's worth trying again.
>> 
>> I suppose that the main benefit of having subclusters is that
>> allocations are much faster. Everything else remains more or less the
>> same, and in particular you can already use larger clusters if you
>> want to reduce the metadata sizes. Plus, with the l2-cache-entry-size
>> option you can already solve some of the problems of having large
>> clusters.
>
> Yes, indeed, subclusters are about making COW less painful (or getting
> rid of it altogether). Doing COW for a full 2 MB cluster when the
> guest updates 4k is just a bit over the top, and I think it seriously
> slows down initial writes. I haven't benchmarked things in a while,
> though.
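
(For scale: with 2 MB clusters, a 4k guest write into an unallocated
cluster that is backed by another file has to copy the remaining
~2 MB of cluster data, i.e. roughly 500 times the amount of data the
guest actually wrote; l2-cache-entry-size helps with the metadata side
of large clusters, but not with this COW cost.)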

Me neither; I think the most recent results I have are from last year:

https://lists.gnu.org/archive/html/qemu-devel/2017-04/msg01033.html

Since then we changed the COW algorithm to do only 2 I/O operations
instead of 5, so that may affect the best-case scenario.
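
For reference, a rough sketch of what that change means for a single
small allocating write into a cluster that has a backing file (this is
the idea, not the exact code path):

    before: read COW head, write COW head, read COW tail, write COW
            tail, write guest data                        -> 5 requests
    now:    read head..tail in one request, then write head + guest
            data + tail in one request                    -> 2 requests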

> While reasonable cache settings and potentially also avoiding
> fragmentation are probably more important overall, I don't think we
> can completely ignore initial writes. They are part of the cost of
> snapshots; they are what people see first and also what benchmarks
> generally show.

If I have some time, I could try to test the patches again on top of
QEMU 3.0 and see what happens.
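
Something along these lines would do as a first test of allocating
writes (the image names, sizes and counts are just placeholders):

    # empty image with 2 MB clusters on top of a backing file;
    # base.qcow2 stands for any existing image used as the backing file
    qemu-img create -f qcow2 -o cluster_size=2M,backing_file=base.qcow2 \
        test.qcow2 40G

    # 4k writes, each one landing in a different cluster, so every
    # request has to allocate and do COW
    qemu-img bench -w -f qcow2 -s 4k -S 2M -c 10000 -t none test.qcow2

and then the same thing with the default cluster size and, eventually,
with the subcluster patches applied, to compare.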

Berto


