Re: [Qemu-block] [PATCH 4/8] block/backup: improve unallocated clusters skipping

From: Max Reitz
Subject: Re: [Qemu-block] [PATCH 4/8] block/backup: improve unallocated clusters skipping
Date: Fri, 9 Aug 2019 14:53:31 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

On 09.08.19 14:47, Vladimir Sementsov-Ogievskiy wrote:
> 09.08.2019 15:25, Max Reitz wrote:
>> On 09.08.19 09:50, Vladimir Sementsov-Ogievskiy wrote:
>>> 07.08.2019 21:01, Max Reitz wrote:
>>>> On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Limit block_status querying to request bounds on write notifier to
>>>>> avoid extra seeking.
>>>> I don’t understand this reasoning.  Checking whether something is
>>>> allocated for qcow2 should just mean an L2 cache lookup.  Which we have
>>>> to do anyway when we try to copy data off the source.
>>> But for raw it's seeking.
>> (1) That’s a bug in block_status then, isn’t it?
>> file-posix cannot determine the allocation status, or rather, everything
>> is allocated.  bdrv_co_block_status() should probably pass @want_zero on
>> to the driver’s implementation, and file-posix should just
>> unconditionally return DATA if it’s false.
>> (2) Why would you even use sync=top for raw nodes?
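
To make (1) concrete, here is a small standalone illustration (not QEMU
code; only lseek(SEEK_DATA) and the want_zero parameter name are taken
from the real interfaces, everything else is made up) of why block_status
on a raw file means seeking, and how a want_zero=false fast path would
skip the probe:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Report whether the byte at @offset is backed by data rather than a
 * hole.  want_zero mirrors the block layer parameter of the same name:
 * when the caller does not care about holes/zeroes, the probe can be
 * skipped entirely. */
static bool is_data(int fd, off_t offset, bool want_zero)
{
    if (!want_zero) {
        return true;    /* report DATA without touching the file */
    }

    /* This is the extra seeking: one lseek(SEEK_DATA) per query. */
    return lseek(fd, offset, SEEK_DATA) == offset;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <offset>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t offset = strtoll(argv[2], NULL, 0);

    printf("want_zero=true : %s\n",
           is_data(fd, offset, true) ? "DATA" : "HOLE");
    printf("want_zero=false: %s\n",
           is_data(fd, offset, false) ? "DATA" : "HOLE");
    close(fd);
    return 0;
}

In QEMU itself that would mean bdrv_co_block_status() forwarding
want_zero to the driver callback and file-posix returning DATA without
probing when it is false, as suggested above.
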
> As I described in the parallel emails, raw was a bad example. NBD is a
> good one.

Does NBD support backing files?

> Anyway, I'm now refactoring cluster skipping more deeply for v2.
> About top mode: eventually block_status should be used to improve the
> other modes too. In Virtuozzo we skip unallocated clusters in full mode
> too, for example.

But this patch here is about sync=top.

Skipping is an optimization, but the block_status querying here happens
because copying anything that isn’t allocated in the top layer would be
wrong.
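
As a toy model of that sync=top selection (everything below is invented
for illustration, it is not block/backup.c):

#include <stdbool.h>
#include <stdio.h>

#define CLUSTERS 8

/* Made-up allocation map of the top layer (true = allocated there). */
static const bool top_allocated[CLUSTERS] = {
    true, false, true, true, false, false, true, false,
};

int main(void)
{
    for (int i = 0; i < CLUSTERS; i++) {
        if (!top_allocated[i]) {
            /* Under sync=top this cluster's contents belong to the
             * backing file and are deliberately left out of the backup. */
            printf("cluster %d: skip (provided by backing file)\n", i);
            continue;
        }
        printf("cluster %d: copy (allocated in top layer)\n", i);
    }
    return 0;
}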


> Unfortunately, backup is the most long-term thing for me to upstream...
