Re: [Qemu-block] [PATCH v6 04/22] block: Introduce image file locking


From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH v6 04/22] block: Introduce image file locking
Date: Fri, 17 Jun 2016 13:34:35 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 03.06.2016 at 10:48, Fam Zheng wrote:
> Block drivers can implement this new operation .bdrv_lockf to actually
> lock the image in the protocol-specific way.
> 
> Signed-off-by: Fam Zheng <address@hidden>
> ---
>  block.c                   | 57 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/block/block.h     | 11 ++++++++-
>  include/block/block_int.h |  5 +++++
>  3 files changed, 72 insertions(+), 1 deletion(-)
> 
> diff --git a/block.c b/block.c
> index 736432f..4c2a3ff 100644
> --- a/block.c
> +++ b/block.c
> @@ -854,6 +854,50 @@ out:
>      g_free(gen_node_name);
>  }
>  
> +BdrvLockfCmd bdrv_get_locking_cmd(int flags)
> +{
> +    if (flags & BDRV_O_NO_LOCK) {
> +        return BDRV_LOCKF_UNLOCK;
> +    } else if (flags & BDRV_O_SHARED_LOCK) {
> +        return BDRV_LOCKF_SHARED;
> +    } else if (flags & BDRV_O_RDWR) {
> +        return BDRV_LOCKF_EXCLUSIVE;
> +    } else {
> +        return BDRV_LOCKF_SHARED;
> +    }
> +}

It feels a bit counterintuitive to use the very same operation for
"start of critical section, but don't actually lock" and "end of
critical section".

Is there a specific reason why you chose this instead of separate
.bdrv_lock/.bdrv_unlock callbacks?
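
Just to illustrate, a split interface could be as simple as the following
(a hypothetical sketch, not code from this series; the names are made up):

    /* Instead of one .bdrv_lockf callback taking a BdrvLockfCmd: */
    int (*bdrv_lock)(BlockDriverState *bs, bool exclusive);
    int (*bdrv_unlock)(BlockDriverState *bs);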

> +static int bdrv_lock_unlock_image_do(BlockDriverState *bs, BdrvLockfCmd cmd)
> +{
> +    int ret;
> +
> +    if (bs->cur_lock == cmd) {
> +        return 0;
> +    } else if (!bs->drv) {
> +        return -ENOMEDIUM;
> +    } else if (!bs->drv->bdrv_lockf) {
> +        return 0;
> +    }
> +    ret = bs->drv->bdrv_lockf(bs, cmd);
> +    if (ret == -ENOTSUP) {
> +        /* Handle it the same way as !bs->drv->bdrv_lockf */
> +        ret = 0;
> +    } else if (ret == 0) {
> +        bs->cur_lock = cmd;
> +    }
> +    return ret;
> +}

Okay, so the callback is supposed to support going from exclusive to
shared and vice versa? Noted for the next patches.
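
(As an aside, POSIX record locks can do such transitions without dropping
the lock in between: fcntl(F_SETLK) converts an existing lock in place.
Purely as a sketch of what a file-based driver's .bdrv_lockf could look
like; this is not the implementation from this series, and it assumes the
BDRVRawState/fd from block/raw-posix.c plus <fcntl.h> and <errno.h>:

    static int raw_lockf(BlockDriverState *bs, BdrvLockfCmd cmd)
    {
        BDRVRawState *s = bs->opaque;
        struct flock fl = {
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,    /* 0 means "to the end of the file" */
        };

        switch (cmd) {
        case BDRV_LOCKF_EXCLUSIVE:
            fl.l_type = F_WRLCK;
            break;
        case BDRV_LOCKF_SHARED:
            fl.l_type = F_RDLCK;
            break;
        default: /* BDRV_LOCKF_UNLOCK */
            fl.l_type = F_UNLCK;
            break;
        }

        /* Takes or converts the lock, or fails with EAGAIN/EACCES if
         * another process holds a conflicting lock. */
        return fcntl(s->fd, F_SETLK, &fl) == -1 ? -errno : 0;
    }

The usual caveat applies that POSIX locks are per-process and are dropped
as soon as any fd referring to the file is closed, so a real
implementation needs more care than this.)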

> +static int bdrv_lock_image(BlockDriverState *bs)
> +{
> +    return bdrv_lock_unlock_image_do(bs, bdrv_get_locking_cmd(bs->open_flags));
> +}
> +
> +static int bdrv_unlock_image(BlockDriverState *bs)
> +{
> +    return bdrv_lock_unlock_image_do(bs, BDRV_LOCKF_UNLOCK);
> +}
> +
>  static QemuOptsList bdrv_runtime_opts = {
>      .name = "bdrv_common",
>      .head = QTAILQ_HEAD_INITIALIZER(bdrv_runtime_opts.head),
> @@ -1003,6 +1047,14 @@ static int bdrv_open_common(BlockDriverState *bs, BdrvChild *file,
>          goto free_and_fail;
>      }
>  
> +    if (!(open_flags & (BDRV_O_NO_LOCK | BDRV_O_INACTIVE))) {
> +        ret = bdrv_lock_image(bs);
> +        if (ret) {
> +            error_setg(errp, "Failed to lock image");
> +            goto free_and_fail;
> +        }
> +    }
> +
>      ret = refresh_total_sectors(bs, bs->total_sectors);
>      if (ret < 0) {
>          error_setg_errno(errp, -ret, "Could not refresh total sector count");
> @@ -2164,6 +2216,7 @@ static void bdrv_close(BlockDriverState *bs)
>      if (bs->drv) {
>          BdrvChild *child, *next;
>  
> +        bdrv_unlock_image(bs);
>          bs->drv->bdrv_close(bs);
>          bs->drv = NULL;
>  
> @@ -3187,6 +3240,9 @@ void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
>          error_setg_errno(errp, -ret, "Could not refresh total sector count");
>          return;
>      }
> +    if (!(bs->open_flags & BDRV_O_NO_LOCK)) {
> +        bdrv_lock_image(bs);
> +    }
>  }

I think the if is unnecessary; bdrv_lock_image() already looks at
BDRV_O_NO_LOCK (via bdrv_get_locking_cmd()).
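
That is, the hunk could be reduced to just:

    +    bdrv_lock_image(bs);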

>  void bdrv_invalidate_cache_all(Error **errp)
> @@ -3229,6 +3285,7 @@ static int bdrv_inactivate_recurse(BlockDriverState *bs,
>      }
>  
>      if (setting_flag) {
> +        ret = bdrv_unlock_image(bs);
>          bs->open_flags |= BDRV_O_INACTIVE;
>      }
>      return 0;

Kevin


