
Re: [Qemu-block] [PATCH v2] block/vdi: Add locking for parallel requests


From: Max Reitz
Subject: Re: [Qemu-block] [PATCH v2] block/vdi: Add locking for parallel requests
Date: Fri, 27 Feb 2015 13:09:07 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0

On 2015-02-27 at 12:42, Paolo Bonzini wrote:

On 27/02/2015 15:05, Max Reitz wrote:
Concurrently modifying the bmap does not seem to be a good idea; this patch adds
a lock for it. See https://bugs.launchpad.net/qemu/+bug/1422307 for what
can go wrong without.

Cc: qemu-stable <address@hidden>
Signed-off-by: Max Reitz <address@hidden>
---
v2:
- Make the mutex cover vdi_co_write() completely [Kevin]
- Add a TODO comment [Kevin]
I think I know what the bug is.  Suppose you have two concurrent writes
to a non-allocated block, one at 16K...32K (in bytes) and one at
32K...48K.  The first write is enlarged to contain zeros, the second is
not.  Then you have two writes in flight:

       offset  write 1    write 2
       0       zeros
       ...     zeros
       16K     data1
       ...     data1
       32K     zeros      data2
       ...     zeros      data2
       48K     zeros
       ...     zeros
       64K

And the contents of 32K...48K are undefined.  If the above diagnosis is
correct, I'm not even sure why Max's v1 patch worked...

Maybe that's an issue, too; but the test case I sent out issues 1 MB requests (and it fails), so this shouldn't matter there.

An optimized fix could be to use a CoRwLock, then:

- take it shared (read) around the write in the
"VDI_IS_ALLOCATED(bmap_entry)" path

- take it exclusive (write) around the write in the
"!VDI_IS_ALLOCATED(bmap_entry)" path

Paolo

Yes, I'm actually already working on that.

Max

---
 block/vdi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/block/vdi.c b/block/vdi.c
index 74030c6..f5f42ef 100644
--- a/block/vdi.c
+++ b/block/vdi.c
@@ -51,6 +51,7 @@
 #include "qemu-common.h"
 #include "block/block_int.h"
+#include "block/coroutine.h"
 #include "qemu/module.h"
 #include "migration/migration.h"
@@ -196,6 +197,8 @@ typedef struct {
     /* VDI header (converted to host endianness). */
     VdiHeader header;

+    CoMutex bmap_lock;
+
     Error *migration_blocker;
 } BDRVVdiState;
@@ -498,6 +501,8 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
         goto fail_free_bmap;
     }

+    qemu_co_mutex_init(&s->bmap_lock);
+
     /* Disable migration when vdi images are used */
     error_set(&s->migration_blocker,
               QERR_BLOCK_FORMAT_FEATURE_NOT_SUPPORTED,
@@ -607,6 +612,9 @@ static int vdi_co_write(BlockDriverState *bs,

     logout("\n");

+    /* TODO: Figure out why this is necessary */
+    qemu_co_mutex_lock(&s->bmap_lock);
+
     while (ret >= 0 && nb_sectors > 0) {
         block_index = sector_num / s->block_sectors;
         sector_in_block = sector_num % s->block_sectors;
@@ -656,6 +664,7 @@ static int vdi_co_write(BlockDriverState *bs,
     logout("finished data write\n");

     if (ret < 0) {
+        qemu_co_mutex_unlock(&s->bmap_lock);
         return ret;
     }
@@ -674,6 +683,7 @@ static int vdi_co_write(BlockDriverState *bs,
         block = NULL;

         if (ret < 0) {
+            qemu_co_mutex_unlock(&s->bmap_lock);
             return ret;
         }
@@ -690,6 +700,7 @@ static int vdi_co_write(BlockDriverState *bs,
         ret = bdrv_write(bs->file, offset, base, n_sectors);
     }

+    qemu_co_mutex_unlock(&s->bmap_lock);
     return ret;
 }



