From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 02/17] block: Introduce bdrv_driver_pwritev()
Date: Wed, 27 Apr 2016 08:03:54 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 04/27/2016 03:52 AM, Kevin Wolf wrote:
> This is a function that simply calls into the block driver for doing a
> write, providing the byte granularity interface we want to eventually
> have everywhere, and using whatever interface that driver supports.
> 
> This one is a bit more interesting than the version for reads: It adds
> support for .bdrv_co_writev_flags() everywhere, so that drivers
> implementing this function can drop .bdrv_co_writev() now.
> 
> Signed-off-by: Kevin Wolf <address@hidden>
> ---
>  block/io.c      | 51 ++++++++++++++++++++++++++++++++++++---------------
>  block/iscsi.c   |  8 --------
>  block/nbd.c     |  9 ---------
>  block/raw_bsd.c |  8 --------
>  4 files changed, 36 insertions(+), 40 deletions(-)
> 
>  
> +static int coroutine_fn bdrv_driver_pwritev(BlockDriverState *bs,
> +                                            uint64_t offset, uint64_t bytes,
> +                                            QEMUIOVector *qiov, int flags)
> +{
> +    BlockDriver *drv = bs->drv;
> +    int64_t sector_num = offset >> BDRV_SECTOR_BITS;
> +    unsigned int nb_sectors = bytes >> BDRV_SECTOR_BITS;
> +    int ret;
> +
> +    assert((offset & (BDRV_SECTOR_SIZE - 1)) == 0);
> +    assert((bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
> +    assert((bytes >> BDRV_SECTOR_BITS) <= BDRV_REQUEST_MAX_SECTORS);
> +
> +    if (drv->bdrv_co_writev_flags) {
> +        ret = drv->bdrv_co_writev_flags(bs, sector_num, nb_sectors, qiov,
> +                                        flags);

Not for this patch, but should we be doing something like assert((flags
& ~drv->supported_write_flags) == 0)?

> +    } else {
> +        assert(drv->supported_write_flags == 0);
> +        ret = drv->bdrv_co_writev(bs, sector_num, nb_sectors, qiov);
> +    }
> +
> +    if (ret == 0 && (flags & BDRV_REQ_FUA) &&
> +        !(drv->supported_write_flags & BDRV_REQ_FUA))
> +    {
> +        ret = bdrv_co_flush(bs);

Unrelated to your patch here, but in my NBD work, I ran into the
situation where it would be nicer if drv->supported_write_flags were
dynamic.  That is, an NBD client can tell on a per-connection basis
whether the server supports NBD_FLAG_FUA, but because
supported_write_flags is a property of the driver, rather than a
callback that is a function of the bs, NBD has to reflect it back to the
block layer by advertising supported_write_flags == BDRV_REQ_FUA always,
and when connecting to a less-capable server has to manually repeat the
bdrv_co_flush(bs) fallback dance itself:

http://git.qemu.org/?p=qemu.git;a=blob;f=block/nbd.c;h=f7ea3b3608;hb=3123bd8e#l358

Maybe we should do a patch series that converts supported_write_flags to
be a function call that can have per-bs configuration, so that the NBD
client can be simplified by letting the block layer take care of the FUA
fallback.

> @@ -1215,23 +1247,12 @@ static int coroutine_fn bdrv_aligned_pwritev(BlockDriverState *bs,
>      } else if (flags & BDRV_REQ_ZERO_WRITE) {
>          bdrv_debug_event(bs, BLKDBG_PWRITEV_ZERO);
>          ret = bdrv_co_do_write_zeroes(bs, sector_num, nb_sectors, flags);
> -    } else if (drv->bdrv_co_writev_flags) {
> -        bdrv_debug_event(bs, BLKDBG_PWRITEV);
> -        ret = drv->bdrv_co_writev_flags(bs, sector_num, nb_sectors, qiov,
> -                                        flags);

Should bdrv_co_do_write_zeroes also be folded into bdrv_driver_pwritev()?

But what you have looks sane enough for:
Reviewed-by: Eric Blake <address@hidden>

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
