[Qemu-devel] [PATCH 1/3] qcow2: Catch !*host_offset for data allocation

From: Max Reitz
Date: Thu, 7 Aug 2014 22:47:53 +0200

qcow2_alloc_cluster_offset() uses host_offset == 0 as "no preferred
offset" for the (data) cluster range to be allocated. However, this
offset is actually valid and may be allocated on images with a corrupted
refcount table or first refcount block.

In this case, the corruption prevention should normally catch that
write anyway (because it would overwrite the image header). But since 0
is a special value here, the function assumes that nothing has been
allocated at all which it asserts against.

Because this condition is not qemu's fault but rather that of a broken
image, it should not trigger an assertion failure but rather mark the image
corrupt and show an appropriate message. This patch does so by calling the
corruption check earlier than it would normally be called, before an
invalid value can end up in *host_offset.

Signed-off-by: Max Reitz <address@hidden>
 block/qcow2-cluster.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 4208dc0..c6af456 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1106,6 +1106,17 @@ static int handle_alloc(BlockDriverState *bs, uint64_t
         *bytes = 0;
         return 0;
     }
 
+    /* !*host_offset would overwrite the image header and is reserved for "no
+     * host offset preferred". If 0 was a valid host offset, it'd trigger the
+     * following overlap check; do that now to avoid having an invalid value in
+     * *host_offset. */
+    if (!alloc_cluster_offset) {
+        ret = qcow2_pre_write_overlap_check(bs, 0, alloc_cluster_offset,
+                                            nb_clusters * s->cluster_size);
+        assert(ret < 0);
+        goto fail;
+    }
+
     /*
      * Save info needed for meta data update.
