Subject: Re: [Qemu-devel] [PATCH v3 6/7] block/backup: teach backup_cow_with_bounce_buffer to copy more at once
From: Max Reitz
Date: Tue, 13 Aug 2019 18:51:29 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0
On 13.08.19 18:45, Vladimir Sementsov-Ogievskiy wrote:
> 13.08.2019 19:30, Max Reitz wrote:
>> On 13.08.19 17:32, Vladimir Sementsov-Ogievskiy wrote:
>>> 13.08.2019 18:02, Max Reitz wrote:
>>>> On 13.08.19 17:00, Vladimir Sementsov-Ogievskiy wrote:
>>>>> 13.08.2019 17:57, Max Reitz wrote:
>>>>>> On 13.08.19 16:39, Vladimir Sementsov-Ogievskiy wrote:
>>>>>>> 13.08.2019 17:23, Max Reitz wrote:
>>>>>>>> On 13.08.19 16:14, Vladimir Sementsov-Ogievskiy wrote:
>>
>> [...]
>>
>>>>>>>>> But still..
>>>>>>>>>
>>>>>>>>> Synchronous mirror allocates full-request buffers on guest write. Is
>>>>>>>>> that correct?
>>>>>>>>>
>>>>>>>>> If we assume that it is correct to double the memory usage of guest
>>>>>>>>> operations, then for backup the problem is only in write_zero and
>>>>>>>>> discard, where guest-assumed memory usage should be zero.
>>>>>>>>
>>>>>>>> Well, but that is the problem. I didn’t say anything in v2, because I
>>>>>>>> only thought of normal writes and I found it fine to double the memory
>>>>>>>> usage there (a guest won’t issue huge write requests in parallel). But
>>>>>>>> discard/write-zeroes are a different matter.
>>>>>>>>
>>>>>>>>> And if we have to distinguish writes from write_zeroes and discard,
>>>>>>>>> it's better to postpone this improvement until after the backup-top
>>>>>>>>> filter is merged.
>>>>>>>>
>>>>>>>> But do you need to distinguish them? Why not just keep track of memory
>>>>>>>> usage and put the current I/O coroutine to sleep in a CoQueue or
>>>>>>>> something, and wake that up at the end of backup_do_cow()?
>>>>>>>>
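The sleep-in-a-CoQueue idea above could be sketched as follows. This is a hypothetical standalone illustration, not QEMU code: plain pthreads stand in for coroutines, a condition variable stands in for the CoQueue, and all names (MemLimiter, mem_acquire, mem_release) are invented for the sketch. The pattern is the same: track in-flight copy-before-write memory, make a request wait when the total would exceed the limit, and wake waiters when a copy finishes (i.e., at the end of backup_do_cow()).

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Hypothetical sketch: a shared memory budget for CBW allocations.
 * In QEMU this would be a CoQueue in the backup job; here a pthread
 * condition variable plays that role. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t freed;   /* signalled when a copy releases memory */
    uint64_t in_flight;     /* bytes currently allocated for CBW */
    uint64_t limit;         /* total budget, e.g. 64 MiB */
} MemLimiter;

void mem_acquire(MemLimiter *ml, uint64_t bytes)
{
    pthread_mutex_lock(&ml->lock);
    while (ml->in_flight + bytes > ml->limit) {
        /* analogous to qemu_co_queue_wait(): sleep until memory frees up */
        pthread_cond_wait(&ml->freed, &ml->lock);
    }
    ml->in_flight += bytes;
    pthread_mutex_unlock(&ml->lock);
}

void mem_release(MemLimiter *ml, uint64_t bytes)
{
    pthread_mutex_lock(&ml->lock);
    ml->in_flight -= bytes;
    /* analogous to waking the CoQueue at the end of backup_do_cow() */
    pthread_cond_broadcast(&ml->freed);
    pthread_mutex_unlock(&ml->lock);
}
```

With this scheme a flood of guest zero-writes/discards cannot exhaust host memory: once the budget is spent, further CBW operations simply queue up behind the ones already in flight.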
>>>>>>>
>>>>>>> 1. Because if we _can_ allow doubling of memory, it's more effective
>>>>>>> not to restrict allocations on guest writes. It just seems to be the
>>>>>>> more effective technique.
>>>>>>
>>>>>> But the problem with backup and zero writes/discards is that the memory
>>>>>> is not doubled. The request doesn’t need any memory, but the CBW
>>>>>> operation does, and maybe lots of it.
>>>>>>
>>>>>> So the guest may issue many zero writes/discards in parallel and thus
>>>>>> exhaust memory on the host.
>>>>>
>>>>> So this is the reason to separate writes from write-zeroes/discards: at
>>>>> least writes will be happy. And I think writes are more frequent
>>>>> requests than write-zeroes/discards.
>>>>
>>>> But that makes it complicated for no practical gain whatsoever.
>>>>
>>>>>>
>>>>>>> 2. Anyway, I'd allow some always-available size to allocate - let it be
>>>>>>> one cluster, which corresponds to the current behavior and prevents
>>>>>>> guest I/O hangs in the worst case.
>>>>>>
>>>>>> The guest would only hang if we have to copy more than e.g. 64 MB at a
>>>>>> time. At that point I think it’s not unreasonable to sequentialize
>>>>>> requests.
>>>>
>>>> Because of this. How is it bad to start sequentializing writes when the
>>>> data exceeds 64 MB?
>>>>
>>>
>>> So you want a total memory limit of 64 MB? (with a possible parameter,
>>> like in mirror)
>>>
>>> And allocation algorithm to copy count bytes:
>>>
>>> if free_mem >= count: allocate count bytes
>>> else if free_mem >= cluster: allocate cluster and copy in a loop
>>> else wait in co-queue until some memory available and retry
>>>
>>> Is it OK for you?
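The three-branch allocation algorithm proposed above can be written as a small pure decision function. This is only a sketch of the policy as stated in the thread; the function name `cbw_alloc_size` and its exact signature are hypothetical, not anything in the QEMU tree. A return value of 0 means the caller should wait in the co-queue and retry once memory is released.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the proposed policy: given the bytes to copy,
 * the remaining free budget, and the cluster size, decide how much to
 * allocate right now. 0 means "wait in the queue and retry". */
static uint64_t cbw_alloc_size(uint64_t count, uint64_t free_mem,
                               uint64_t cluster)
{
    if (free_mem >= count) {
        return count;      /* enough budget: copy the whole request at once */
    } else if (free_mem >= cluster) {
        return cluster;    /* low budget: fall back to a cluster-sized loop */
    }
    return 0;              /* no budget: sleep until memory is available */
}
```

The middle branch is the "always-available one cluster" idea from point [2]: it keeps large requests making progress cluster by cluster instead of starving behind a stream of small requests.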
>>
>> Sounds good to me, although I don’t know whether the second branch is
>> necessary. As I’ve said, the total limit is just an insurance against a
>> guest that does some crazy stuff.
>>
>
> I'm afraid that one big request may wait forever while smaller requests eat
> most of the available memory. It would be an unfair queue: smaller requests
> would have higher priority in the low-memory case. With [2] it becomes
> fairer.
OK. Sounds reasonable.
Max