From: Liang Li
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 0/2] buffer and delay backup COW write operation
Date: Mon, 6 May 2019 12:24:52 +0800
User-agent: Mutt/1.7.2 (2016-11-26)

On Tue, Apr 30, 2019 at 10:35:32AM +0000, Vladimir Sementsov-Ogievskiy wrote:
> 28.04.2019 13:01, Liang Li wrote:
> > If the backup target is a slow device like ceph rbd, the backup
> > process seriously affects guest BLK write IO performance. This is
> > caused by the drawback of the COW mechanism: if the guest overwrites
> > a BLK area that is being backed up, the IO can only be processed
> > after the data has been written to the backup target.
> > The impact can be relieved by buffering the data read from the backup
> > source and writing it to the backup target later, so the guest BLK
> > write IO can be processed in time.
> > Data areas that are not overwritten are processed as before, without
> > buffering; in most cases we don't need a very large buffer.
> > 
> > An fio test was done while the backup was running, and the result
> > shows an obvious performance improvement from buffering.
> 
> Hi Liang!
> 
> Good thing. I briefly mentioned something like this in my KVM Forum 2018
> report as "RAM Cache", and I'd really prefer this functionality to be a
> separate filter, instead of complicating the backup code. Furthermore,
> write notifiers will go away from the backup code after my backup-top
> series is merged.
> 
> v5: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg06211.html
> and the separated preparatory refactoring, v7:
> https://lists.gnu.org/archive/html/qemu-devel/2019-04/msg04813.html
> 
> RAM Cache should be a filter driver, with in-memory buffer(s) for data
> written to it and with the ability to flush that data to the underlying
> backing file.
> 
> Also, here is another approach to the problem, which helps if guest write
> activity is really high and sustained, so that the buffer fills up and
> performance decreases anyway:
> 
> 1. Create a local temporary image, and COWs will go to it. (It was
> previously discussed on the list that we should call these backup
> operations issued by guest writes CBW = copy-before-write, as
> copy-on-write is generally another thing, and using that term in backup
> is confusing.)
> 
> 2. We also set the original disk as a backing file for the temporary
> image, and start another backup from the temporary image to the real
> target.
> 
> This scheme is almost possible now: to do [1], you start backup(sync=none)
> from source to temp. Some patches are still needed to allow such a scheme.
> I didn't send them, as I want my other backup patches to go in first
> anyway. But I can. On the other hand, if the approach with an in-memory
> buffer works for you, it may be better.
> 
> Also, I'm not sure yet whether we should really do this through two backup
> jobs, or whether we just need one separate backup-top filter plus one
> backup job without a filter, or whether we need an additional parameter
> for the backup job to set a cache-block-node.
> 

Hi Vladimir,

   Thanks for your valuable information. I didn't notice that you were
already working on this, so my patch will conflict with your work. We had
thought about way [2] and gave it up because it would affect local storage
performance.
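   Just to check that I understand scheme [2] correctly: with your pending
patches, would it boil down to roughly the following QMP sequence? (Node
names, job ids and the temporary file path below are only examples.)

  # 1) attach a local temporary qcow2 whose backing is the original disk
  { "execute": "blockdev-add",
    "arguments": { "driver": "qcow2", "node-name": "tmp0",
                   "file": { "driver": "file",
                             "filename": "/local/tmp-backup.qcow2" },
                   "backing": "src0" } }

  # 2) CBW part: copy old data from the source to the temporary image
  #    before every guest write
  { "execute": "blockdev-backup",
    "arguments": { "job-id": "cbw0", "device": "src0", "target": "tmp0",
                   "sync": "none" } }

  # 3) push the data to the real (slow) target; reads of areas the guest
  #    never touched fall through the backing link to the original disk
  { "execute": "blockdev-backup",
    "arguments": { "job-id": "push0", "device": "tmp0", "target": "rbd0",
                   "sync": "full" } }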
   I have read your slides from KVM Forum 2018 and the related patches;
your solution can help to solve the issues in backup. I am not sure whether
the "RAM cache" is a qcow2 file in RAM? If so, will your implementation
free the RAM space occupied by BLK data in time, once it has been written
to the far target? Otherwise we may need a large cache to make things work.
   Two backup jobs seem complex and not user-friendly; is it possible to
make my patch work together with CBW?
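   For instance, if the buffer became a filter driver as you suggest, I
imagine the setup could look roughly like this (the "ram-cache" driver name
is purely hypothetical, nothing like it exists today):

  # hypothetical RAM cache filter inserted on top of the slow real target
  { "execute": "blockdev-add",
    "arguments": { "driver": "ram-cache", "node-name": "cache0",
                   "file": "rbd0" } }

  # the backup job writes to the cache node, which buffers the data in RAM
  # and flushes it to the slow target underneath in the background
  { "execute": "blockdev-backup",
    "arguments": { "job-id": "backup0", "device": "src0",
                   "target": "cache0", "sync": "full" } }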

Liang


