Re: [Qemu-devel] [PATCH v9 03/13] block/backup: introduce BlockCopyState


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH v9 03/13] block/backup: introduce BlockCopyState
Date: Thu, 29 Aug 2019 10:52:56 +0000

Thanks for reviewing!

28.08.2019 18:59, Max Reitz wrote:
> On 26.08.19 18:13, Vladimir Sementsov-Ogievskiy wrote:
>> Split copying code part from backup to "block-copy", including separate
>> state structure and function renaming. This is needed to share it with
>> backup-top filter driver in further commits.
>>
>> Notes:
>>
>> 1. As BlockCopyState keeps own BlockBackend objects, remaining
> 
> I suppose these should be BdrvChild objects at some point, but doing it
> now would just mean effectively duplicating code from block-backend.c.
> (“now” = before we have a backup-top filter to attach the children to.)

How bad would it be not to do that, and instead leave them as block-backends
in the block-copy state? They'll be connected anyway through the job, as they
are all in job.nodes.

We have block-backends in jobs currently; is that a problem?

> 
>> job->common.blk users only use it to get bs by blk_bs() call, so clear
>> job->commen.blk permissions set in block_job_create.
>>
>> 2. Rename s/initializing_bitmap/skip_unallocated/ to sound a bit better
>> as interface to BlockCopyState
>>
>> 3. Split is not very clean: there left some duplicated fields, backup
> 
> Are there any but cluster_size and len (and source, in a sense)?

It seems there are no others.

> 
>> code uses some BlockCopyState fields directly, let's postpone it for
>> further improvements and keep this comment simpler for review.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>   block/backup.c     | 324 +++++++++++++++++++++++++++------------------
>>   block/trace-events |  12 +-
>>   2 files changed, 200 insertions(+), 136 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index 13a1d80157..f52ac622e0 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -35,12 +35,35 @@ typedef struct CowRequest {
>>       CoQueue wait_queue; /* coroutines blocked on this request */
>>   } CowRequest;
>>   
>> +/*
>> + * ProgressCallbackFunc
>> + *
>> + * Called when some progress is done in context of BlockCopyState:
>> + *  1. When some bytes copied, called with @bytes > 0.
>> + *  2. When some bytes resetted from copy_bitmap, called with @bytes = 0 (user
> 
> *reset
> 
>> + *     may recalculate remaining bytes from copy_bitmap dirty count.
>> + */
>> +typedef void (*ProgressCallbackFunc)(int64_t bytes, void *opaque);
> 
> Maybe there should be two callbacks instead, one for “We’ve actively
> made progress” (bytes > 0) and one for “The expected length has changed”
> (bytes == 0)?

I thought there were already too many parameters in block_copy_state_new().
But I agree with you: in practice it just led to two callbacks merged into one,
with an if-else to distinguish them. Will do.
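
Roughly something like this (only a sketch, the names are not final):

typedef void (*ProgressBytesCallbackFunc)(int64_t bytes, void *opaque);
typedef void (*ProgressResetCallbackFunc)(void *opaque);

and in BlockCopyState, instead of the single callback:

    ProgressBytesCallbackFunc progress_bytes_callback;
    ProgressResetCallbackFunc progress_reset_callback;
    void *progress_opaque;

block_copy() would call progress_bytes_callback after copying a chunk, and the
skip_unallocated path would call progress_reset_callback when it clears bits
from copy_bitmap, so that backup can recalculate the remaining length from the
bitmap dirty count.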

> 
>> +typedef struct BlockCopyState {
>> +    BlockBackend *source;
>> +    BlockBackend *target;
>> +    BdrvDirtyBitmap *copy_bitmap;
>> +    int64_t cluster_size;
>> +    bool use_copy_range;
>> +    int64_t copy_range_size;
>> +    uint64_t len;
>> +
>> +    BdrvRequestFlags write_flags;
>> +    bool skip_unallocated;
> 
> The rename seems reasonable, although I think this should get a comment,
> because it doesn’t mean just to skip unallocated clusters; it also means
> to clear unallocated clusters from the bitmap.
> 
>> +
>> +    ProgressCallbackFunc progress_callback;
>> +    void *progress_opaque;
>> +} BlockCopyState;
>> +
>>   typedef struct BackupBlockJob {
>>       BlockJob common;
>> -    BlockBackend *target;
>>   
>>       BdrvDirtyBitmap *sync_bitmap;
>> -    BdrvDirtyBitmap *copy_bitmap;
>>   
>>       MirrorSyncMode sync_mode;
>>       BitmapSyncMode bitmap_mode;
> 
> [...]
> 
>> @@ -99,9 +118,83 @@ static void cow_request_end(CowRequest *req)
>>       qemu_co_queue_restart_all(&req->wait_queue);
>>   }
>>   
>> +static void block_copy_state_free(BlockCopyState *s)
>> +{
>> +    if (!s) {
>> +        return;
>> +    }
>> +
>> +    bdrv_release_dirty_bitmap(blk_bs(s->source), s->copy_bitmap);
>> +    blk_unref(s->source);
>> +    s->source = NULL;
>> +    blk_unref(s->target);
>> +    s->target = NULL;
> 
> I’m not quite sure why you NULL these pointers when you free the whole
> object next anyway.

It is for backup_drain: I'm afraid of a possible yield during blk_unref (and it
seems unsafe anyway, as I zero the reference only after calling blk_unref).
Anyway, backup_drain will be dropped in "[PATCH v3] job: drop job_drain", so for
now I'll drop the "= NULL" here and work around backup_drain in backup_clean
with a corresponding comment.
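
Something like this in backup_clean (a rough sketch of the workaround I mean,
not the exact code):

static void backup_clean(Job *job)
{
    BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
    /*
     * Detach the state first: blk_unref() inside block_copy_state_free()
     * may yield, and backup_drain() must not look into a half-freed state.
     */
    BlockCopyState *bcs = s->bcs;

    s->bcs = NULL;
    block_copy_state_free(bcs);

    /* (the rest of the clean-up is unchanged) */
}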

> 
>> +    g_free(s);
>> +}
>> +
>> +static BlockCopyState *block_copy_state_new(
>> +        BlockDriverState *source, BlockDriverState *target,
>> +        int64_t cluster_size, BdrvRequestFlags write_flags,
>> +        ProgressCallbackFunc progress_callback, void *progress_opaque,
>> +        Error **errp)
>> +{
>> +    BlockCopyState *s;
>> +    int ret;
>> +    uint64_t no_resize = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
>> +                         BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD;
>> +    BdrvDirtyBitmap *copy_bitmap =
>> +            bdrv_create_dirty_bitmap(source, cluster_size, NULL, errp);
>> +
> 
> This probably were easier to read if you didn’t initialize copy_bitmap
> with the bdrv_create_dirty_bitmap() call but instead moved that call
> right above the if () here (it still fits on a single line).
> 
>> +    if (!copy_bitmap) {
>> +        return NULL;
>> +    }
>> +    bdrv_disable_dirty_bitmap(copy_bitmap);
>> +
>> +    s = g_new0(BlockCopyState, 1);
> 
> With the following compound literal, you don’t need to allocate
> zero-initialized memory here.
> 
>> +    *s = (BlockCopyState) {
>> +        .source = blk_new(bdrv_get_aio_context(source),
>> +                          BLK_PERM_CONSISTENT_READ, no_resize),
>> +        .target = blk_new(bdrv_get_aio_context(target),
>> +                          BLK_PERM_WRITE, no_resize),
> 
> Maybe we want to assert that source’s and target’s context are the same?

The context may change, so there is no reason to check it here. It would be
better to assert it in block_copy() before copying; I wanted to do that but
forgot, will add.
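
I.e. something like this at the start of block_copy() (sketch):

    assert(blk_get_aio_context(s->source) == blk_get_aio_context(s->target));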

> 
>> +        .copy_bitmap = copy_bitmap,
>> +        .cluster_size = cluster_size,
>> +        .len = bdrv_dirty_bitmap_size(copy_bitmap),
>> +        .write_flags = write_flags,
>> +        .use_copy_range = !(write_flags & BDRV_REQ_WRITE_COMPRESSED),
>> +        .progress_callback = progress_callback,
>> +        .progress_opaque = progress_opaque,
>> +    };
>> +
>> +    s->copy_range_size = QEMU_ALIGN_UP(MIN(blk_get_max_transfer(s->source),
>> +                                           blk_get_max_transfer(s->target)),
>> +                                       s->cluster_size),
> 
> Nice simplification.
>
>> +
>> +    blk_set_disable_request_queuing(s->source, true);
>> +    blk_set_allow_aio_context_change(s->source, true);
>> +    blk_set_disable_request_queuing(s->target, true);
>> +    blk_set_allow_aio_context_change(s->target, true);
> 
> Hm.  Doesn’t creating new BBs here mean that we have to deal with the
> fallout of changing the AioContext on either of them somewhere?

In the backup context, the backup job is responsible for keeping the source and
target bs in the same AioContext, so I think allowing the blk to change its
AioContext, plus an assertion in block_copy() that both contexts match, should
be enough for now.

I'll add a comment about this here.
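
Something along these lines (sketch):

    /*
     * The caller (for now: the backup job) is responsible for keeping
     * source and target in the same AioContext.  We only allow the
     * context to change and re-check it in block_copy().
     */
    blk_set_allow_aio_context_change(s->source, true);
    blk_set_allow_aio_context_change(s->target, true);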

> 
> [...]
> 
>> @@ -449,8 +542,8 @@ static void backup_drain(BlockJob *job)
>>       /* Need to keep a reference in case blk_drain triggers execution
>>        * of backup_complete...
>>        */
>> -    if (s->target) {
>> -        BlockBackend *target = s->target;
>> +    if (s->bcs && s->bcs->target) {
> 
> bcs->target should always be non-NULL, shouldn’t it?

But what if we intersect with some yield in the clean-up procedure... Anyway,
backup_drain will be dropped soon, I hope.

> 
>> +        BlockBackend *target = s->bcs->target;
>>           blk_ref(target);
>>           blk_drain(target);
>>           blk_unref(target);
> 
> [...]
> 
>> @@ -730,57 +821,34 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
> 
> [...]
> 
>> -    /*
>> -     * Set write flags:
>> -     * 1. Detect image-fleecing (and similar) schemes
>> -     * 2. Handle compression
>> -     */
>> -    job->write_flags =
>> -        (bdrv_chain_contains(target, bs) ? BDRV_REQ_SERIALISING : 0) |
>> -        (compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
>> +    job->bcs = block_copy_state_new(
>> +            bs, target, cluster_size,
>> +            /*
>> +             * Set write flags:
>> +             * 1. Detect image-fleecing (and similar) schemes
>> +             * 2. Handle compression
>> +             */
>> +            (bdrv_chain_contains(target, bs) ? BDRV_REQ_SERIALISING : 0) |
>> +            (compress ? BDRV_REQ_WRITE_COMPRESSED : 0),
> 
> This is a bit hard to read.  Why not add a dedicated variable for it?
> 
>> +            backup_progress_callback, job, errp);
>> +    if (!job->bcs) {
>> +        goto error;
>> +    }
>>   
>>       job->cluster_size = cluster_size;
>> -    job->copy_bitmap = copy_bitmap;
>> -    copy_bitmap = NULL;
>> -    job->use_copy_range = !compress; /* compression isn't supported for it */
>> -    job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
>> -                                        blk_get_max_transfer(job->target));
>> -    job->copy_range_size = MAX(job->cluster_size,
>> -                               QEMU_ALIGN_UP(job->copy_range_size,
>> -                                             job->cluster_size));
>>   
>>       /* Required permissions are already taken with target's blk_new() */
>>       block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
> 
> [...]
> 
>> diff --git a/block/trace-events b/block/trace-events
>> index 04209f058d..ad1454f539 100644
>> --- a/block/trace-events
>> +++ b/block/trace-events
>> @@ -40,12 +40,12 @@ mirror_yield_in_flight(void *s, int64_t offset, int in_flight) "s %p offset %" P
>>   # backup.c
>>   backup_do_cow_enter(void *job, int64_t start, int64_t offset, uint64_t bytes) "job %p start %" PRId64 " offset %" PRId64 " bytes %" PRIu64
>>   backup_do_cow_return(void *job, int64_t offset, uint64_t bytes, int ret) "job %p offset %" PRId64 " bytes %" PRIu64 " ret %d"
>> -backup_do_cow_skip(void *job, int64_t start) "job %p start %"PRId64
>> -backup_do_cow_skip_range(void *job, int64_t start, uint64_t bytes) "job %p start %"PRId64" bytes %"PRId64
>> -backup_do_cow_process(void *job, int64_t start) "job %p start %"PRId64
>> -backup_do_cow_read_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> -backup_do_cow_write_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> -backup_do_cow_copy_range_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> +block_copy_skip(void *job, int64_t start) "job %p start %"PRId64
>> +block_copy_skip_range(void *job, int64_t start, uint64_t bytes) "job %p start %"PRId64" bytes %"PRId64
>> +block_copy_process(void *job, int64_t start) "job %p start %"PRId64
>> +block_copy_with_bounce_buffer_read_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> +block_copy_with_bounce_buffer_write_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
>> +block_copy_with_offload_fail(void *job, int64_t start, int ret) "job %p start %"PRId64" ret %d"
> 
> The pointer is no longer a job pointer, though.
> 
> Max
> 
>>   
>>   # ../blockdev.c
>>   qmp_block_job_cancel(void *job) "job %p"
>>
> 
> 


-- 
Best regards,
Vladimir
