qemu-devel
Re: [Qemu-devel] [PATCH v4 2/6] add basic backup support to block driver


From: Dietmar Maurer
Subject: Re: [Qemu-devel] [PATCH v4 2/6] add basic backup support to block driver
Date: Thu, 21 Feb 2013 05:23:30 +0000

> > +    bit = cluster_num % BITS_PER_LONG;
> > +    val = job->bitmap[idx];
> > +    if (dirty) {
> > +        if (!(val & (1UL << bit))) {
> > +            val |= 1UL << bit;
> > +        }
> > +    } else {
> > +        if (val & (1UL << bit)) {
> > +            val &= ~(1UL << bit);
> > +        }
> > +    }
> 
> The set and clear can be unconditional.  No need to check that the bit is
> clear before setting it and vice versa.

Thanks, will change.
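
For reference, a minimal sketch of the unconditional form suggested above,
reusing the names from the quoted patch (the helper's name and signature here
are assumptions, not taken from the actual patch):

static void backup_set_bit(BackupBlockJob *job, int64_t cluster_num,
                           bool dirty)
{
    int64_t idx = cluster_num / BITS_PER_LONG;
    int bit = cluster_num % BITS_PER_LONG;

    if (dirty) {
        /* setting an already-set bit is a harmless no-op */
        job->bitmap[idx] |= 1UL << bit;
    } else {
        /* clearing an already-clear bit is a harmless no-op */
        job->bitmap[idx] &= ~(1UL << bit);
    }
}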
 
> > +    job->bitmap[idx] = val;
> > +}
> > +
> > +static int backup_in_progress_count;
> > +
> > +static int coroutine_fn backup_do_cow(BlockDriverState *bs,
> > +                                      int64_t sector_num, int
> > +nb_sectors) {
> > +    assert(bs);
> > +    BackupBlockJob *job = (BackupBlockJob *)bs->job;
> > +    assert(job);
> > +
> > +    BlockDriver *drv = bs->drv;
> > +    struct iovec iov;
> > +    QEMUIOVector bounce_qiov;
> > +    void *bounce_buffer = NULL;
> > +    int ret = 0;
> > +
> > +    backup_in_progress_count++;
> > +
> > +    int64_t start, end;
> > +
> > +    start = sector_num / BACKUP_BLOCKS_PER_CLUSTER;
> > +    end = (sector_num + nb_sectors + BACKUP_BLOCKS_PER_CLUSTER - 1) /
> > +        BACKUP_BLOCKS_PER_CLUSTER;
> > +
> > +    DPRINTF("brdv_co_backup_cow enter %s C%zd %zd %d\n",
> > +            bdrv_get_device_name(bs), start, sector_num, nb_sectors);
> 
> Please use PRId64 for int64_t printf arguments in this function.  "z" is for
> size_t, which will be wrong on 32-bit hosts.

OK
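
For what it's worth, the DPRINTF above would then read roughly like this
(assuming <inttypes.h> is available in that file):

    DPRINTF("brdv_co_backup_cow enter %s C%" PRId64 " %" PRId64 " %d\n",
            bdrv_get_device_name(bs), start, sector_num, nb_sectors);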

> > +static void coroutine_fn backup_run(void *opaque) {
> > +    BackupBlockJob *job = opaque;
> > +    BlockDriverState *bs = job->common.bs;
> > +    assert(bs);
> > +
> > +    int64_t start, end;
> > +
> > +    start = 0;
> > +    end = (bs->total_sectors + BACKUP_BLOCKS_PER_CLUSTER - 1) /
> > +        BACKUP_BLOCKS_PER_CLUSTER;
> > +
> > +    DPRINTF("backup_run start %s %zd %zd\n", bdrv_get_device_name(bs),
> > +            start, end);
> > +
> > +    int ret = 0;
> > +
> > +    for (; start < end; start++) {
> > +        if (block_job_is_cancelled(&job->common)) {
> > +            ret = -1;
> > +            break;
> > +        }
> > +
> > +        /* we need to yield so that qemu_aio_flush() returns.
> > +         * (without, VM does not reboot)
> > +        * Note: use 1000 instead of 0 (0 prioritize this task too
> > + much)
> 
> indentation
> 
> What does "0 prioritize this task too much" mean?  If no rate limit has been 
> set
> the job should run at full speed.  We should not hardcode arbitrary delays 
> like
> 1000.

The VM itself somehow gets slower during backup - I do not know why. As a
workaround, sleeping for 1000 ns works.
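
For comparison, the shape the review asks for is roughly the following: only
delay when a rate limit is actually configured, otherwise yield with a zero
delay (a sketch, not the code from this patch):

        uint64_t delay_ns = 0;

        if (job->common.speed) {
            delay_ns = ratelimit_calculate_delay(&job->limit,
                                                 job->sectors_read);
            job->sectors_read = 0;
        }
        block_job_sleep_ns(&job->common, rt_clock, delay_ns);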

> > +         */
> > +        if (job->common.speed) {
> > +            uint64_t delay_ns = ratelimit_calculate_delay(
> > +                &job->limit, job->sectors_read);
> > +            job->sectors_read = 0;
> > +            block_job_sleep_ns(&job->common, rt_clock, delay_ns);
> > +        } else {
> > +            block_job_sleep_ns(&job->common, rt_clock, 1000);
> > +        }
> > +
> > +        if (block_job_is_cancelled(&job->common)) {
> > +            ret = -1;
> > +            break;
> > +        }
> > +
> > +        if (backup_get_bitmap(job, start)) {
> > +            continue; /* already copied */
> > +        }
> > +
> > +        DPRINTF("backup_run loop C%zd\n", start);
> > +
> > +        /**
> > +         * This triggers a cluster copy
> > +         * Note: avoid direct call to brdv_co_backup_cow, because
> > +         * this does not call tracked_request_begin()
> > +         */
> > +        ret = bdrv_co_backup(bs, start*BACKUP_BLOCKS_PER_CLUSTER, 1);
> > +        if (ret < 0) {
> > +            break;
> > +        }
> > +        /* Publish progress */
> > +        job->common.offset += BACKUP_CLUSTER_SIZE;
> > +    }
> > +
> > +    while (backup_in_progress_count > 0) {
> > +        DPRINTF("backup_run backup_in_progress_count != 0 (%d)",
> > +                backup_in_progress_count);
> > +        block_job_sleep_ns(&job->common, rt_clock, 10000);
> > +
> > +    }
> 
> Sleeping is bad.  You can use a coroutine rwlock: backup_do_cow() takes a
> read lock and backup_run() takes a write lock to ensure all pending
> backup_do_cow() calls have completed.
> 
> This also gets rid of the backup_in_progress_count global.

OK, will do.
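
A rough sketch of the coroutine rwlock approach described above; the field
name flush_rwlock and the elided bodies are assumptions, not code from the
patch:

/* in BackupBlockJob: CoRwlock flush_rwlock; */

static int coroutine_fn backup_do_cow(BlockDriverState *bs,
                                      int64_t sector_num, int nb_sectors)
{
    BackupBlockJob *job = (BackupBlockJob *)bs->job;
    int ret = 0;

    qemu_co_rwlock_rdlock(&job->flush_rwlock);
    /* ... existing cluster copy logic ... */
    qemu_co_rwlock_unlock(&job->flush_rwlock);
    return ret;
}

static void coroutine_fn backup_run(void *opaque)
{
    BackupBlockJob *job = opaque;

    qemu_co_rwlock_init(&job->flush_rwlock);

    /* ... existing main copy loop ... */

    /* taking the write lock waits until every in-flight backup_do_cow()
     * has dropped its read lock, so no COW request is still pending */
    qemu_co_rwlock_wrlock(&job->flush_rwlock);
    qemu_co_rwlock_unlock(&job->flush_rwlock);
}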




