Re: backup_calculate_cluster_size does not consider source


From: Max Reitz
Subject: Re: backup_calculate_cluster_size does not consider source
Date: Wed, 6 Nov 2019 14:17:20 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.1.1

On 06.11.19 14:09, Dietmar Maurer wrote:
>> Let me elaborate: Yes, a cluster size generally means that it is most
>> “efficient” to access the storage at that size.  But there’s a tradeoff.
>>  At some point, reading the data takes sufficiently long that reading a
>> bit of metadata doesn’t matter anymore (usually, that is).
> 
> Any network storage suffers from long network latencies, so it always
> matters if you do more IOs than necessary.

Yes, exactly; that’s why I’m saying it makes sense to me to increase the
buffer size from the measly 64 KiB that we currently have.  I just don’t
see the point of increasing it to exactly the source cluster size.
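
For reference, a simplified sketch of how the cluster size is currently
calculated (condensed from backup_calculate_cluster_size() in
block/backup.c; error paths are trimmed, so this is a sketch rather
than a verbatim copy):

#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)   /* 64 KiB */

static int64_t backup_calculate_cluster_size(BlockDriverState *target,
                                             Error **errp)
{
    BlockDriverInfo bdi;
    int ret = bdrv_get_info(target, &bdi);

    if (ret < 0) {
        /* Target reports no cluster size: fall back to 64 KiB. */
        return BACKUP_CLUSTER_SIZE_DEFAULT;
    }

    /* Only the target's cluster size is considered here; the source
     * (e.g. an rbd image with 4 MiB objects) is never consulted. */
    return MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
}

That is the point of contention: the MAX() covers the target only.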

>> There is a bit of a problem with making the backup copy size rather
>> large, though: backup’s copy-before-write causes guest writes to
>> stall.  So if the guest just writes a bit of data, a 4 MB buffer size
>> may mean that in the background it will have to wait for 4 MB of data
>> to be copied.[1]
> 
> We have been using this in production for several years now, and it
> has not been a problem.  (The Ceph storage is mostly on 10G or faster
> network equipment.)

So you mean for cases where backup already chooses a 4 MB buffer size
because the target has that cluster size?
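
To put rough, purely illustrative numbers on the stall concern
(assuming a 10 Gbit/s link, i.e. about 1.25 GB/s, and ~0.5 ms of
round-trip latency; neither figure is a measurement from this thread):

    t_stall ≈ t_latency + cluster_size / bandwidth

    4 MiB:  t ≈ 0.5 ms + 3.4 ms  ≈ 3.9 ms  per copy-before-write hit
    64 KiB: t ≈ 0.5 ms + 0.05 ms ≈ 0.55 ms per copy-before-write hit

So on such a link a 4 MB copy size makes each stalled guest write about
seven times longer; whether that is acceptable depends on the guest’s
write pattern.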

>> Hm.  OTOH, we have the same problem already with the target’s cluster
>> size, which can of course be 4 MB as well.  But I can imagine it to
>> actually be important for the target, because otherwise there might be
>> read-modify-write cycles.
>>
>> But for the source, I still don’t quite understand why rbd has such a
>> problem with small read requests.  I don’t doubt that it has (as you
>> explained), but again, how is it then even possible to use rbd as the
>> backend for a guest that has no idea of this requirement?  Does Linux
>> really prefill the page cache with 4 MB of data for each read?
> 
> No idea. I just observed that upstream qemu backups with ceph are 
> quite unusable this way.

Hm, OK.
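
For context, the source’s cluster size is in principle discoverable:
QEMU’s rbd driver reports the RADOS object size (4 MiB by default) as
its cluster size through bdrv_get_info().  Roughly, abbreviated from
block/rbd.c:

static int qemu_rbd_getinfo(BlockDriverState *bs, BlockDriverInfo *bdi)
{
    BDRVRBDState *s = bs->opaque;
    rbd_image_info_t info;
    int r;

    r = rbd_stat(s->image, &info, sizeof(info));
    if (r < 0) {
        return r;
    }

    /* obj_size is the RADOS object size, 4 MiB unless changed at
     * image creation time. */
    bdi->cluster_size = info.obj_size;
    return 0;
}

So the information is available; the question in this thread is whether
backup_calculate_cluster_size() should take the maximum over the
source’s cluster size as well.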

Max
