qemu-devel

Re: [Qemu-devel] [PATCH 1/3] m25p80: do not put iovec on the stack


From: Cédric Le Goater
Subject: Re: [Qemu-devel] [PATCH 1/3] m25p80: do not put iovec on the stack
Date: Tue, 28 Jun 2016 10:53:39 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.1.0

On 06/28/2016 10:39 AM, Paolo Bonzini wrote:
> When doing a read-modify-write cycle, QEMU uses the iovec after returning
> from blk_aio_pwritev.  m25p80 puts the iovec on the stack of blk_aio_pwritev's
> caller, which causes trouble in this case.  This has been a problem
> since commit 243e6f6 ("m25p80: Switch to byte-based block access",
> 2016-05-12) started doing writes at a smaller granularity than 512 bytes.
> In principle however it could have broken before when using -drive
> if=mtd,cache=none on a disk with 4K native sectors.

Ah! Thanks. That was a problem I was seeing.


I was thinking we could just do synchronous writes:

        https://github.com/legoater/qemu/commit/aef3fe4db3be632077c581541fe30b4e36b5a6f7

and enable snapshotting like:

        https://github.com/legoater/qemu/commit/b90ccab7873fd3538b47396ec7c3ae35c8e13270


Thanks, 

C. 


> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  hw/block/m25p80.c | 23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
> index 09b4767..dd6714d 100644
> --- a/hw/block/m25p80.c
> +++ b/hw/block/m25p80.c
> @@ -446,6 +446,11 @@ static inline Manufacturer get_man(Flash *s)
>  
>  static void blk_sync_complete(void *opaque, int ret)
>  {
> +    QEMUIOVector *iov = opaque;
> +
> +    qemu_iovec_destroy(iov);
> +    g_free(iov);
> +
>      /* do nothing. Masters do not directly interact with the backing store,
>       * only the working copy so no mutexing required.
>       */
> @@ -453,31 +458,31 @@ static void blk_sync_complete(void *opaque, int ret)
>  
>  static void flash_sync_page(Flash *s, int page)
>  {
> -    QEMUIOVector iov;
> +    QEMUIOVector *iov = g_new(QEMUIOVector, 1);
>  
>      if (!s->blk || blk_is_read_only(s->blk)) {
>          return;
>      }
>  
> -    qemu_iovec_init(&iov, 1);
> -    qemu_iovec_add(&iov, s->storage + page * s->pi->page_size,
> +    qemu_iovec_init(iov, 1);
> +    qemu_iovec_add(iov, s->storage + page * s->pi->page_size,
>                     s->pi->page_size);
> -    blk_aio_pwritev(s->blk, page * s->pi->page_size, &iov, 0,
> -                    blk_sync_complete, NULL);
> +    blk_aio_pwritev(s->blk, page * s->pi->page_size, iov, 0,
> +                    blk_sync_complete, iov);
>  }
>  
>  static inline void flash_sync_area(Flash *s, int64_t off, int64_t len)
>  {
> -    QEMUIOVector iov;
> +    QEMUIOVector *iov = g_new(QEMUIOVector, 1);
>  
>      if (!s->blk || blk_is_read_only(s->blk)) {
>          return;
>      }
>  
>      assert(!(len % BDRV_SECTOR_SIZE));
> -    qemu_iovec_init(&iov, 1);
> -    qemu_iovec_add(&iov, s->storage + off, len);
> -    blk_aio_pwritev(s->blk, off, &iov, 0, blk_sync_complete, NULL);
> +    qemu_iovec_init(iov, 1);
> +    qemu_iovec_add(iov, s->storage + off, len);
> +    blk_aio_pwritev(s->blk, off, iov, 0, blk_sync_complete, iov);
>  }
>  
>  static void flash_erase(Flash *s, int offset, FlashCMD cmd)
> 
