
Re: [Qemu-block] [PATCH] block: don't probe zeroes in bs->file by default on block_status


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH] block: don't probe zeroes in bs->file by default on block_status
Date: Thu, 24 Jan 2019 15:47:57 +0000

24.01.2019 18:31, Kevin Wolf wrote:
> On 24.01.2019 at 15:36, Vladimir Sementsov-Ogievskiy wrote:
>> 23.01.2019 19:33, Kevin Wolf wrote:
>>> On 23.01.2019 at 12:53, Vladimir Sementsov-Ogievskiy wrote:
>>>> 22.01.2019 21:57, Kevin Wolf wrote:
>>>>> On 11.01.2019 at 12:40, Vladimir Sementsov-Ogievskiy wrote:
>>>>>> 11.01.2019 13:41, Kevin Wolf wrote:
>>>>>>> On 10.01.2019 at 14:20, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>>> Since 5daa74a6ebc, drv_co_block_status digs into bs->file for an
>>>>>>>> additional, more accurate search for holes inside regions that the
>>>>>>>> format layer reports as DATA.
>>>>>>>>
>>>>>>>> This accuracy is not free. Consider a qcow2 disk: qcow2 already
>>>>>>>> knows where the holes and the data are, yet every block_status
>>>>>>>> request additionally calls lseek. On a big disk full of data, any
>>>>>>>> iterative copying block job (or qemu-img convert) calls
>>>>>>>> lseek(SEEK_HOLE) on every iteration, and each of these lseeks will
>>>>>>>> have to iterate through all filesystem metadata up to the end of
>>>>>>>> the file. That is obviously inefficient, and for many scenarios we
>>>>>>>> don't need this lseek at all.
>>>>>>>>
>>>>>>>> So, let's disable the behavior introduced by 5daa74a6ebc by
>>>>>>>> default, leaving an option to restore the previous behavior, which
>>>>>>>> is needed for scenarios with preallocated images.
>>>>>>>>
>>>>>>>> Add iotest illustrating new option semantics.
>>>>>>>>
>>>>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>>>>>
>>>>>>> I still think that an option isn't a good solution and we should try
>>>>>>> to use some heuristics instead.
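
For reference, the probe under discussion boils down to an
lseek(SEEK_DATA)/lseek(SEEK_HOLE) pair against the underlying file. A
minimal standalone sketch of that idea (simplified, not the actual QEMU
code; the function name is mine):

    #define _GNU_SOURCE             /* for SEEK_DATA / SEEK_HOLE */
    #include <errno.h>
    #include <unistd.h>

    /* Classify the byte at 'offset': returns 1 for data, 0 for a hole,
     * and stores the end of the current region in *end.  On some
     * filesystems each lseek() here walks allocation metadata from
     * 'offset' towards the end of the file, which is what makes
     * per-cluster probing of a large image expensive. */
    static int probe(int fd, off_t offset, off_t *end)
    {
        off_t next_data = lseek(fd, offset, SEEK_DATA);

        if (next_data == offset) {
            /* Data starts right here and extends to the next hole. */
            *end = lseek(fd, offset, SEEK_HOLE);
            return 1;
        }

        /* A hole: it extends to the next data, or to EOF if there is
         * no data after 'offset' (lseek() fails with ENXIO then). */
        *end = (next_data < 0 && errno == ENXIO)
             ? lseek(fd, 0, SEEK_END)
             : next_data;
        return 0;
    }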
>>>>>>
>>>>>> Do you think that heuristics would be better than a plain cache of
>>>>>> lseek results?
>>>>>
>>>>> I just played a bit with this (qemu-img convert only), and how much
>>>>> caching lseek() results helps depends completely on the image. As it
>>>>> happened, my test image was the worst case where caching didn't buy us
>>>>> much. Obviously, I can just as easily construct an image where it makes
>>>>> a huge difference. I think that most real-world images should be able to
>>>>> take good advantage of it, though, and it doesn't hurt, so maybe that's
>>>>> a first thing that we can do in any case. It might not be the complete
>>>>> solution, though.
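
To illustrate what caching lseek() results means here, a toy
single-region cache layered over the probe() sketch above; the structure
and names are mine, not the code that was benchmarked:

    /* One remembered data region from a previous probe. */
    typedef struct {
        off_t start;
        off_t end;              /* exclusive; start == end means empty */
    } SeekCache;

    static int cached_probe(int fd, off_t offset, off_t *end, SeekCache *c)
    {
        /* Hit: the offset falls inside the remembered data region,
         * so we can answer without any syscall. */
        if (offset >= c->start && offset < c->end) {
            *end = c->end;
            return 1;
        }

        int is_data = probe(fd, offset, end);   /* probe() from above */
        if (is_data) {
            c->start = offset;
            c->end = *end;
        }
        return is_data;
    }

Note how a backwards-written image defeats this: every new query lands
before the cached region, so each probe still ends up in lseek(), which
matches the test.qcow2 case below.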
>>>>>
>>>>> Let me explain my test images: The case where all of this actually
>>>>> matters for qemu-img convert is fragmented qcow2 images. If your image
>>>>> isn't fragmented, we don't do lseek() a lot anyway because a single
>>>>> bdrv_block_status() call already gives you the information for the whole
>>>>> image. So I constructed a fragmented image, by writing to it backwards:
>>>>>
>>>>> ./qemu-img create -f qcow2 /tmp/test.qcow2 1G
>>>>> for i in $(seq 16383 -1 0); do
>>>>>        echo "write $((i * 65536)) 64k"
>>>>> done | ./qemu-io /tmp/test.qcow2
>>>>>
>>>>> It's not really surprising that caching the lseek() results doesn't help
>>>>> much there as we're moving backwards and lseek() only returns results
>>>>> about the things after the current position, not before the current
>>>>> position. So this is probably the worst case.
>>>>>
>>>>> So I constructed a second image, which is fragmented, too, but starts at
>>>>> the beginning of the image file:
>>>>>
>>>>> ./qemu-img create -f qcow2 /tmp/test_forward.qcow2 1G
>>>>> for i in $(seq 0 2 16382); do
>>>>>        echo "write $((i * 65536)) 64k"
>>>>> done | ./qemu-io /tmp/test_forward.qcow2
>>>>> for i in $(seq 1 2 16384); do
>>>>>        echo "write $((i * 65536)) 64k"
>>>>> done | ./qemu-io /tmp/test_forward.qcow2
>>>>>
>>>>> Here caching makes a huge difference:
>>>>>
>>>>>        time ./qemu-img convert -p -n $IMG null-co://
>>>>>
>>>>>                            uncached        cached
>>>>>        test.qcow2             ~145s         ~70s
>>>>>        test_forward.qcow2     ~110s        ~0.2s
>>>>
>>>> I'm unsure about your results: 0.2s at least means that we benefit
>>>> from cached reads, not just from avoided lseeks.
>>>
>>> Yes, all reads are from the kernel page cache, this is on tmpfs.
>>>
>>> I chose tmpfs for two reasons: I wanted to get expensive I/O out of the
>>> way so that the lseek() performance is even visible; and tmpfs was
>>> reported to perform especially badly for SEEK_DATA/HOLE (which my results
>>> confirm). So yes, this setup really makes the lseek() calls stand out
>>> much more than in the common case (which makes sense when you want to
>>> fix the overhead introduced by them).
>>
>> OK, I missed this. On the other hand, tmpfs is not a real production case.
> 
> Yes, I fully agree. But it was a simple case where I knew there is a
> problem.
> 
> I also have a bug report on XFS with an image that is very fragmented at
> the file system level. But I don't know how to produce such a file to
> run benchmarks on it.
> 

I've experimented with very fragmented images, but didn't find lseek problems,
maybe because I don't have a big enough HDD to test with. Here is the program
I used to produce a fragmented file. The idea is to fallocate all the space
first and then reallocate it piece by piece. Usage is as simple as

./frag /data/test 500G

The attached code may be ugly; I didn't prepare it for publishing(
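
The attachment itself isn't inlined in the archive, so here is a minimal
sketch of the approach described above: fallocate the whole file, then
reallocate it chunk by chunk in a scattered order. This is a
reconstruction under assumptions (1 MiB chunks, odd-then-even order),
not the actual frag.c:

    /* frag.c (sketch): fragment a file by allocating it all at once and
     * then reallocating it chunk by chunk in a scattered order. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>   /* FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (1024 * 1024)     /* 1 MiB reallocation unit (assumed) */

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <file> <size, e.g. 500G>\n", argv[0]);
            return 1;
        }

        char *suffix;
        unsigned long long size = strtoull(argv[2], &suffix, 10);
        if (*suffix == 'G') {
            size <<= 30;
        } else if (*suffix == 'M') {
            size <<= 20;
        }

        int fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Step 1: allocate the whole file in one go. */
        if (fallocate(fd, 0, 0, size) < 0) {
            perror("fallocate");
            return 1;
        }

        static char buf[CHUNK];
        memset(buf, 0xaa, sizeof(buf));

        /* Step 2: reallocate odd chunks first, then even ones.  Punching
         * a hole frees the old extent, and the following write forces the
         * filesystem to allocate a new one, most likely somewhere else. */
        unsigned long long nchunks = size / CHUNK;
        for (int pass = 1; pass >= 0; pass--) {
            for (unsigned long long i = pass; i < nchunks; i += 2) {
                off_t off = (off_t)(i * CHUNK);
                if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              off, CHUNK) < 0) {
                    perror("fallocate(PUNCH_HOLE)");
                    return 1;
                }
                if (pwrite(fd, buf, CHUNK, off) != CHUNK) {
                    perror("pwrite");
                    return 1;
                }
            }
        }

        close(fd);
        return 0;
    }

Punching the hole before rewriting is what forces the filesystem to pick
a new extent for each chunk; a plain overwrite of already-allocated
space would just reuse the old extent on a non-copy-on-write filesystem.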

-- 
Best regards,
Vladimir

Attachment: frag.c
Description: frag.c

