Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 3/4] block/mirror: support unaligned write in active mirror
Date: Fri, 11 Oct 2019 09:09:07 +0000
11.10.2019 11:58, Max Reitz wrote:
> On 11.10.19 10:33, Vladimir Sementsov-Ogievskiy wrote:
>> 04.10.2019 19:31, Max Reitz wrote:
>>> On 12.09.19 17:13, Vladimir Sementsov-Ogievskiy wrote:
>>>> Prior to 9adc1cb49af8d, do_sync_target_write had a bug: it reset the
>>>> aligned-up region in the dirty bitmap, which means that we may fail to
>>>> copy some bytes and yet assume them copied, which actually leads to
>>>> producing a corrupted target.
>>>>
>>>> So 9adc1cb49af8d forced the dirty bitmap granularity to be
>>>> request_alignment for the mirror-top filter, so that we never work
>>>> with unaligned requests. However, forcing a large alignment obviously
>>>> decreases the performance of unaligned requests.
>>>>
>>>> This commit provides another solution to the problem: if the unaligned
>>>> padding is already dirty, we can safely ignore it, as
>>>> 1. It's dirty, so it will be copied by mirror_iteration anyway.
>>>> 2. It's dirty, so by skipping it now we don't increase the dirtiness
>>>>    of the bitmap and therefore don't damage the "synchronicity" of
>>>>    the write-blocking mirror.
>>>>
>>>> If the unaligned padding is not dirty, we just write it; there is no
>>>> reason to touch the dirty bitmap if we succeed (on failure we'll set
>>>> the whole region, of course, but we lose "synchronicity" on failure
>>>> anyway).
>>>>
>>>> Note: we need to disable the dirty_bitmap, otherwise we will not be
>>>> able to see in do_sync_target_write the bitmap state before the
>>>> current operation. We could of course check the dirty bitmap before
>>>> the operation in bdrv_mirror_top_do_write and remember it, but we
>>>> don't need an active dirty bitmap for write-blocking mirror anyway.
>>>>
>>>> The new code path is unused until the following commit reverts
>>>> 9adc1cb49af8d.
>>>>
>>>> Suggested-by: Denis V. Lunev <address@hidden>
>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>>> ---
>>>> block/mirror.c | 39 ++++++++++++++++++++++++++++++++++++++-
>>>> 1 file changed, 38 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/block/mirror.c b/block/mirror.c
>>>> index d176bf5920..d192f6a96b 100644
>>>> --- a/block/mirror.c
>>>> +++ b/block/mirror.c
>>>> @@ -1204,6 +1204,39 @@ do_sync_target_write(MirrorBlockJob *job,
>>>>                          MirrorMethod method,
>>>>                          QEMUIOVector *qiov, int flags)
>>>>  {
>>>>      int ret;
>>>> +    size_t qiov_offset = 0;
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset)) {
>>>> +        /*
>>>> +         * Dirty unaligned padding
>>>> +         * 1. It's already dirty, no damage to "actively_synced" if
>>>> +         *    we just skip unaligned part.
>>>> +         * 2. If we copy it, we can't reset corresponding bit in
>>>> +         *    dirty_bitmap as there may be some "dirty" bytes still
>>>> +         *    not copied.
>>>> +         * So, just ignore it.
>>>> +         */
>>>> +        qiov_offset = QEMU_ALIGN_UP(offset, job->granularity) - offset;
>>>> +        if (bytes <= qiov_offset) {
>>>> +            /* nothing to do after shrink */
>>>> +            return;
>>>> +        }
>>>> +        offset += qiov_offset;
>>>> +        bytes -= qiov_offset;
>>>> +    }
>>>> +
>>>> +    if (!QEMU_IS_ALIGNED(offset + bytes, job->granularity) &&
>>>> +        bdrv_dirty_bitmap_get(job->dirty_bitmap, offset + bytes - 1))
>>>> +    {
>>>> +        uint64_t tail = (offset + bytes) % job->granularity;
>>>> +
>>>> +        if (bytes <= tail) {
>>>> +            /* nothing to do after shrink */
>>>> +            return;
>>>> +        }
>>>> +        bytes -= tail;
>>>> +    }
>>>>
>>>> bdrv_reset_dirty_bitmap(job->dirty_bitmap, offset, bytes);
>>>>
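The head/tail shrinking in the hunk above can be sketched outside QEMU as a tiny standalone C function. This is a simplified illustration, not QEMU code: `shrink` and `ALIGN_UP` are hypothetical stand-ins for the in-place logic and for `QEMU_ALIGN_UP`, and the dirty-bitmap checks that guard each shrink in the real patch are omitted for brevity.

```c
#include <stdint.h>

/* Stand-in for QEMU's QEMU_ALIGN_UP; works for any granularity > 0. */
#define ALIGN_UP(x, g) (((x) + (g) - 1) / (g) * (g))

/*
 * Drop the unaligned head and tail of a request, keeping only the
 * granularity-aligned middle whose bits may safely be reset in the
 * dirty bitmap.  Sets *bytes to 0 when nothing remains after shrinking.
 */
static void shrink(uint64_t *offset, uint64_t *bytes, uint64_t granularity)
{
    uint64_t head = ALIGN_UP(*offset, granularity) - *offset;

    if (head) {
        if (*bytes <= head) {
            *bytes = 0;             /* nothing to do after shrink */
            return;
        }
        *offset += head;
        *bytes -= head;
    }

    uint64_t tail = (*offset + *bytes) % granularity;

    if (tail) {
        if (*bytes <= tail) {
            *bytes = 0;             /* nothing to do after shrink */
            return;
        }
        *bytes -= tail;
    }
}
```

For example, with a granularity of 512, a request covering [100, 1100) shrinks to the aligned middle [512, 1024), while a request that fits entirely inside one granule, such as [100, 400), shrinks to nothing.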
>>>
>>> The bdrv_set_dirty_bitmap() in the error case below needs to use the
>>> original offset/bytes, I suppose.
>>
>> No, because we shrink the tail only if it is already dirty. And we've
>> locked the region for the in-flight operation, so nobody can clear the
>> bitmap in the meantime.
>
> True. But wouldn’t it be easier to understand if we just used the
> original offsets?
>
>> But still, there is something to do here:
>>
>> for the not-shrunk tails, if any, we should:
>> 1. align down for reset
>> 2. align up for set on failure
>
> Well, the align up is done automatically, and I think that’s pretty
> self-explanatory.
A patch making hbitmap_reset very strict (asserting on unaligned resets,
except at the very end of the bitmap if orig_size is unaligned) has been
queued by John. So I've made the alignment explicit in v2.
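The explicit alignment mentioned here boils down to the usual integer arithmetic; a minimal sketch, with `ALIGN_DOWN`/`ALIGN_UP` as local stand-ins for QEMU's `QEMU_ALIGN_DOWN`/`QEMU_ALIGN_UP` macros and the helper names chosen purely for illustration:

```c
#include <stdint.h>

/* Stand-ins for QEMU's alignment macros; g must be > 0 (power of two
 * in QEMU, but the plain arithmetic below works for any g). */
#define ALIGN_DOWN(x, g) ((x) / (g) * (g))
#define ALIGN_UP(x, g)   ALIGN_DOWN((x) + (g) - 1, (g))

/*
 * On success, reset only up to the aligned-down end: shrinking inward
 * never clears a bit covering bytes we did not write.
 */
static uint64_t reset_end(uint64_t offset, uint64_t bytes, uint64_t g)
{
    return ALIGN_DOWN(offset + bytes, g);
}

/*
 * On failure, set up to the aligned-up end: growing outward ensures no
 * byte that may differ from the target is left marked clean.
 */
static uint64_t set_end(uint64_t offset, uint64_t bytes, uint64_t g)
{
    return ALIGN_UP(offset + bytes, g);
}
```

With a 512-byte granularity, a region ending at byte 1100 would be reset only up to 1024 on success, but set up to 1536 on failure.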
>
>>>
>>> Apart from that, looks good to me.
>>>
>>> Max
>>>
--
Best regards,
Vladimir