
From: Anthony Liguori
Subject: [Qemu-devel] [5574] fix bdrv_aio_read API breakage in qcow2 (Andrea Arcangeli)
Date: Fri, 31 Oct 2008 17:28:00 +0000

Revision: 5574
          http://svn.sv.gnu.org/viewvc/?view=rev&root=qemu&revision=5574
Author:   aliguori
Date:     2008-10-31 17:28:00 +0000 (Fri, 31 Oct 2008)

Log Message:
-----------
fix bdrv_aio_read API breakage in qcow2 (Andrea Arcangeli)

I noticed that qemu_aio_flush was doing nothing at all: a flood of
cmd_writeb commands was being executed, each leading to a no-op
invocation of qemu_aio_flush.

In short, all 'memset; goto redo' places must be fixed to use a bottom
half (bh) rather than invoking the callback in the context of
bdrv_aio_read, or the bdrv_aio_read model falls apart. Reading from
qcow2 holes is possible with physical readahead (a kind of breada in
the Linux buffer cache).

This is needed at least for scsi; ide is lucky (or it has been
band-aided against this API breakage by fixing the symptom rather than
the real bug).

The same bug exists in qcow, of course; it can be fixed later as it's
less urgent.

Signed-off-by: Andrea Arcangeli <address@hidden>
Signed-off-by: Anthony Liguori <address@hidden>

Modified Paths:
--------------
    trunk/block-qcow2.c

Modified: trunk/block-qcow2.c
===================================================================
--- trunk/block-qcow2.c 2008-10-31 17:25:56 UTC (rev 5573)
+++ trunk/block-qcow2.c 2008-10-31 17:28:00 UTC (rev 5574)
@@ -1165,8 +1165,18 @@
     uint64_t cluster_offset;
     uint8_t *cluster_data;
     BlockDriverAIOCB *hd_aiocb;
+    QEMUBH *bh;
 } QCowAIOCB;
 
+static void qcow_aio_read_cb(void *opaque, int ret);
+static void qcow_aio_read_bh(void *opaque)
+{
+    QCowAIOCB *acb = opaque;
+    qemu_bh_delete(acb->bh);
+    acb->bh = NULL;
+    qcow_aio_read_cb(opaque, 0);
+}
+
 static void qcow_aio_read_cb(void *opaque, int ret)
 {
     QCowAIOCB *acb = opaque;
@@ -1182,7 +1192,6 @@
         return;
     }
 
- redo:
     /* post process the read buffer */
     if (!acb->cluster_offset) {
         /* nothing to do */
@@ -1223,12 +1232,30 @@
                 if (acb->hd_aiocb == NULL)
                     goto fail;
             } else {
-                goto redo;
+               if (acb->bh) {
+                   ret = -EIO;
+                   goto fail;
+               }
+               acb->bh = qemu_bh_new(qcow_aio_read_bh, acb);
+               if (!acb->bh) {
+                   ret = -EIO;
+                   goto fail;
+               }
+               qemu_bh_schedule(acb->bh);
             }
         } else {
             /* Note: in this case, no need to wait */
             memset(acb->buf, 0, 512 * acb->n);
-            goto redo;
+           if (acb->bh) {
+               ret = -EIO;
+               goto fail;
+           }
+           acb->bh = qemu_bh_new(qcow_aio_read_bh, acb);
+           if (!acb->bh) {
+               ret = -EIO;
+               goto fail;
+           }
+           qemu_bh_schedule(acb->bh);
         }
     } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
         /* add AIO support for compressed blocks ? */
@@ -1236,7 +1263,16 @@
             goto fail;
         memcpy(acb->buf,
                s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
-        goto redo;
+       if (acb->bh) {
+           ret = -EIO;
+           goto fail;
+       }
+       acb->bh = qemu_bh_new(qcow_aio_read_bh, acb);
+       if (!acb->bh) {
+           ret = -EIO;
+           goto fail;
+       }
+       qemu_bh_schedule(acb->bh);
     } else {
         if ((acb->cluster_offset & 511) != 0) {
             ret = -EIO;





