Re: [Qemu-devel] [PATCH 07/11] block: optimization blk_pwrite_compressed()


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 07/11] block: optimization blk_pwrite_compressed()
Date: Wed, 15 Jun 2016 11:22:10 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

On Mon, Jun 13, 2016 at 02:16:08PM -0600, Eric Blake wrote:
> On 06/13/2016 07:11 AM, Stefan Hajnoczi wrote:
> > On Tue, May 31, 2016 at 12:15:26PM +0300, Denis V. Lunev wrote:
> >> diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
> >> index 57df069..3d7b446 100644
> >> --- a/include/sysemu/block-backend.h
> >> +++ b/include/sysemu/block-backend.h
> >> @@ -205,6 +205,9 @@ int coroutine_fn blk_co_pwrite_zeroes(BlockBackend *blk, int64_t offset,
> >>                                        int count, BdrvRequestFlags flags);
> >>  int blk_pwrite_compressed(BlockBackend *blk, int64_t offset, const void *buf,
> >>                            int count);
> >> +int coroutine_fn blk_co_pwritev_compressed(BlockBackend *blk, int64_t offset,
> >> +                                           unsigned int bytes,
> >> +                                           QEMUIOVector *qiov);
> > 
> > Perhaps blk_co_pwritev_compressed() isn't necessary at all since
> > blk_co_pwritev() already exists and has the flags argument:
> > 
> > int coroutine_fn blk_co_pwritev(BlockBackend *blk, int64_t offset,
> >                                unsigned int bytes, QEMUIOVector *qiov,
> >                                BdrvRequestFlags flags);
> 
> Are you arguing that we should have a new BDRV_REQ_COMPRESSED flag that
> can be set in .supported_write_flags for drivers that know how to do a
> compressed write?

Never mind, it's too much noise and out of scope for this series.

I'm fine with blk_co_pwritev_compressed().
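
For anyone skimming the thread, a minimal sketch contrasting the two
approaches (blk, bs, offset, bytes, qiov and ret are assumed to be in
scope; BDRV_REQ_COMPRESSED is the hypothetical flag from the question
quoted above, not an existing definition):

    /* Dedicated entry point, as added by this patch: */
    ret = blk_co_pwritev_compressed(blk, offset, bytes, &qiov);

    /*
     * Flag-based alternative (hypothetical): a driver that knows how
     * to write compressed clusters would advertise the capability,
     * e.g. in its open function ...
     */
    bs->supported_write_flags |= BDRV_REQ_COMPRESSED;

    /* ... and callers would reuse the existing vectored write path: */
    ret = blk_co_pwritev(blk, offset, bytes, &qiov, BDRV_REQ_COMPRESSED);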

Stefan
