

From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH v2 6/7] block/backup: teach backup_cow_with_bounce_buffer to copy more at once
Date: Sat, 10 Aug 2019 12:54:49 +0000

10.08.2019 15:47, Eric Blake wrote:
> On 8/10/19 7:12 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 09.08.2019 18:59, Eric Blake wrote:
>>> On 8/9/19 10:32 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>> backup_cow_with_offload can transfer more than on cluster. Let
>>> s/on/one/
>>>> backup_cow_with_bounce_buffer behave similarly. It reduces number
>>>> of IO and there are no needs to copy cluster by cluster.
>>> It reduces the number of IO requests, since there is no need to copy
>>> cluster by cluster.
>>>> -                                                      bool *error_is_read,
>>>> -                                                      void 
>>>> **bounce_buffer)
>>>> +                                                      bool *error_is_read)
>>> Why is this signature changing?
>> 2. Actually it is a question about the memory allocator: is it fast and
>> optimized enough that we don't need to care, or is it bad, so that we should
>> work around it as mirror does? In my opinion (supported by a bit of
>> benchmarking), memalign is fast enough not to care. I answered a similar
>> question in more detail, with some numbers, here:
>> https://lists.gnu.org/archive/html/qemu-devel/2018-11/msg00087.html
>> So I'd prefer not to care and keep the code simpler. But if you don't agree,
>> I can keep the shared buffer here, at least until parallel requests are
>> introduced.
>> Then it would be something like qemu_try_blockalign(MIN(bytes, 64M))..
> It may still be worth capping at 64M.  I'm not opposed to a change to
> per-request allocation rather than trying to reuse a buffer, if it is
> going to make parallelization efforts easier; but I am worried about the
> maximum memory usage.  I'm more worried that you are trying to cram two
> distinct changes into one patch, and didn't even mention the change
> about a change from buffer reuse to per-request allocations, in the
> commit message.  If you make that sort of change, it should be called
> out in the commit message as intentional, or maybe even split to a
> separate patch.

OK, I failed to hide it :) Will split it out into a separate patch and add the 64M limit.
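For reference, the capping logic could look roughly like the sketch below. This is only an illustration of the clamp Eric asked for, not the actual patch: the name `bounce_limit` and the `BACKUP_MAX_BOUNCE` constant are hypothetical, and in the real code the buffer would then be allocated per request with qemu_try_blockalign() using the clamped size. It assumes, as in QEMU's backup job, that the cluster size is a power of two.

```c
#include <stdint.h>

/* Hypothetical cap on the bounce buffer, per the discussion above. */
#define BACKUP_MAX_BOUNCE (64LL * 1024 * 1024)

/*
 * Hypothetical helper: clamp the requested copy size to the cap.
 * When the clamped size still covers more than one cluster, round it
 * down to a whole number of clusters so each request stays
 * cluster-aligned; a request smaller than one cluster is passed
 * through unchanged.
 */
static int64_t bounce_limit(int64_t bytes, int64_t cluster_size)
{
    int64_t limit = bytes < BACKUP_MAX_BOUNCE ? bytes : BACKUP_MAX_BOUNCE;

    if (limit > cluster_size) {
        limit &= ~(cluster_size - 1); /* cluster_size is a power of two */
    }
    return limit;
}
```

The per-request allocation would then be `qemu_try_blockalign(bs, bounce_limit(bytes, cluster_size))`, so memory use is bounded at 64M per in-flight request regardless of how large the dirty region is.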

Best regards,
