
Re: [Qemu-block] [PATCH v4 12/15] hbitmap: serialization


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH v4 12/15] hbitmap: serialization
Date: Mon, 14 Mar 2016 14:58:41 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0

On 11.03.2016 19:27, Max Reitz wrote:
On 08.03.2016 05:45, Fam Zheng wrote:
From: Vladimir Sementsov-Ogievskiy <address@hidden>

Functions to serialize / deserialize (restore) an HBitmap. The HBitmap
should be saved as a linear sequence of bits, independently of endianness
and of the bitmap array element (unsigned long) size, so little endian is
chosen.

These functions are intended for dirty bitmap migration; the bitmap can be
restored in several steps. For performance, every step writes only the last
level of the bitmap. All other levels are restored by
hbitmap_deserialize_finish() as the last step of restoring. So, the HBitmap
is inconsistent while it is being restored.
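
(As a rough illustration of the restore flow described above, not part of
the patch: the destination side would do something like the following.
The variables nb_items, chunk and buf are made up; chunk is assumed to be
a multiple of hbitmap_serialization_granularity(hb).)

/* Illustrative receiver-side flow: feed chunks into the last level only,
 * then rebuild the upper levels once at the end. */
uint64_t pos;

for (pos = 0; pos < nb_items; pos += chunk) {
    uint64_t count = MIN(chunk, nb_items - pos);
    /* ... receive `count` items' worth of little-endian data into buf ... */
    hbitmap_deserialize_part(hb, buf, pos, count, false);
}
hbitmap_deserialize_finish(hb); /* the bitmap is consistent only after this */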

Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
[Fix left shift operand to 1UL; add "finish" parameter. - Fam]
Signed-off-by: Fam Zheng <address@hidden>
---
  include/qemu/hbitmap.h |  79 ++++++++++++++++++++++++++++
  util/hbitmap.c         | 137 +++++++++++++++++++++++++++++++++++++++++++++++++
  2 files changed, 216 insertions(+)
[...]

diff --git a/util/hbitmap.c b/util/hbitmap.c
index 2d3d04c..5f02c17 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -395,6 +395,143 @@ bool hbitmap_get(const HBitmap *hb, uint64_t item)
      return (hb->levels[HBITMAP_LEVELS - 1][pos >> BITS_PER_LEVEL] & bit) != 0;
  }
+uint64_t hbitmap_serialization_granularity(const HBitmap *hb)
+{
+    /* Require at least 64 bit granularity to be safe on both 64 bit and 32 bit
+     * hosts. */
+    return 64 << hb->granularity;
+}
+
+/* Start should be aligned to the serialization granularity; the chunk size
+ * should be aligned too, except for the last chunk.
+ */
+static void serialization_chunk(const HBitmap *hb,
+                                uint64_t start, uint64_t count,
+                                unsigned long **first_el, size_t *el_count)
+{
+    uint64_t last = start + count - 1;
+    uint64_t gran = hbitmap_serialization_granularity(hb);
+
+    assert((start & (gran - 1)) == 0);
+    assert((last >> hb->granularity) < hb->size);
+    if ((last >> hb->granularity) != hb->size - 1) {
+        assert((count & (gran - 1)) == 0);
+    }
+
+    start = (start >> hb->granularity) >> BITS_PER_LEVEL;
+    last = (last >> hb->granularity) >> BITS_PER_LEVEL;
+
+    *first_el = &hb->levels[HBITMAP_LEVELS - 1][start];
+    *el_count = last - start + 1;
+}
+
+uint64_t hbitmap_serialization_size(const HBitmap *hb,
+                                    uint64_t start, uint64_t count)
+{
+    uint64_t el_count;
+    unsigned long *cur;
+
+    if (!count) {
+        return 0;
+    }
+    serialization_chunk(hb, start, count, &cur, &el_count);
+
+    return el_count * sizeof(unsigned long);
+}
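
(Just to illustrate how this pairs with hbitmap_serialize_part() below: a
rough caller-side sketch, not part of the patch, with made-up start/count
variables.)

uint64_t len = hbitmap_serialization_size(hb, start, count);
uint8_t *buf = g_malloc(len);

hbitmap_serialize_part(hb, buf, start, count);
/* ... hand buf/len over to the destination ... */
g_free(buf);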
+
+void hbitmap_serialize_part(const HBitmap *hb, uint8_t *buf,
+                            uint64_t start, uint64_t count)
+{
+    uint64_t el_count;
+    unsigned long *cur, *end;
+
+    if (!count) {
+        return;
+    }
+    serialization_chunk(hb, start, count, &cur, &el_count);
+    end = cur + el_count;
+
+    while (cur != end) {
+        unsigned long el =
+            (BITS_PER_LONG == 32 ? cpu_to_le32(*cur) : cpu_to_le64(*cur));
Looks a bit fishy, but I can't come up with anything better.

(Other than adding cpu_to_leul(); we already do have leul_to_cpu(), so
that wouldn't be too far off.)

+
+        memcpy(buf, &el, sizeof(el));
One could have used cpu_to_le32/64w((uint32/64_t *)buf, *cur); instead.

Maybe I'd like the following better:

#if BITS_PER_LONG == 32
cpu_to_le32w((uint32_t *)buf, *cur);
#elif BITS_PER_LONG == 64
cpu_to_le64w((uint64_t *)buf, *cur);
#else
#error Unknown long size
#endif

Or just

#else /* BITS_PER_LONG == 64 */

instead of the #elif. I think that's safe to assume.

I think (BITS_PER_LONG == 32 ? .. : ..) should be optimised away by the compiler in any case. However, adding cpu_to_leul would be better.
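
(Just as a sketch of what such a helper could look like, assuming the
BITS_PER_LONG macro from "qemu/bitops.h" and the cpu_to_le32()/cpu_to_le64()
helpers from "qemu/bswap.h"; the name cpu_to_leul simply mirrors the
existing leul_to_cpu():)

static inline unsigned long cpu_to_leul(unsigned long v)
{
#if BITS_PER_LONG == 32
    return cpu_to_le32(v);
#else /* BITS_PER_LONG == 64 */
    return cpu_to_le64(v);
#endif
}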



+        buf += sizeof(el);
+        cur++;
+    }
+}
+
+void hbitmap_deserialize_part(HBitmap *hb, uint8_t *buf,
+                              uint64_t start, uint64_t count,
+                              bool finish)
+{
+    uint64_t el_count;
+    unsigned long *cur, *end;
+
+    if (!count) {
+        return;
+    }
+    serialization_chunk(hb, start, count, &cur, &el_count);
+    end = cur + el_count;
+
+    while (cur != end) {
+        memcpy(cur, buf, sizeof(*cur));
+
+        if (BITS_PER_LONG == 32) {
+            le32_to_cpus((uint32_t *)cur);
+        } else {
+            le64_to_cpus((uint64_t *)cur);
+        }
Here, I'd definitely like that variant better, i.e.

#if BITS_PER_LONG == 32
le32_to_cpuw(cur, *(uint32_t *)buf);
#else /* BITS_PER_LONG == 64 */
le64_to_cpuw(cur, *(uint64_t *)buf);
#endif

Unless a language lawyer NACKs this because the pointer cast violates
strict aliasing.

If so, I still strongly recommend replacing the if by an #if, not least
because this saves us the pointer cast on cur.

(Or does it? Maybe one still needs to explicitly cast an unsigned long *
to uint32_t * even if both have the same size. However, in that case we
can still do *cur = le32/64_to_cpu(*cur). The above would then become:

#if BITS_PER_LONG == 32
*cur = le32_to_cpu(*(uint32_t *)buf);
#else /* BITS_PER_LONG == 64 */
*cur = le64_to_cpu(*(uint64_t *)buf);
#endif

)

+
+        buf += sizeof(unsigned long);
+        cur++;
+    }
+    if (finish) {
+        hbitmap_deserialize_finish(hb);
+    }
+}
+
+void hbitmap_deserialize_zeroes(HBitmap *hb, uint64_t start, uint64_t count,
+                                bool finish)
+{
+    uint64_t el_count;
+    unsigned long *first;
+
+    if (!count) {
+        return;
+    }
+    serialization_chunk(hb, start, count, &first, &el_count);
+
+    memset(first, 0, el_count * sizeof(unsigned long));
+    if (finish) {
+        hbitmap_deserialize_finish(hb);
+    }
+}
+
+void hbitmap_deserialize_finish(HBitmap *bitmap)
+{
+    int64_t i, size, prev_size;
+    int lev;
+
+    /* restore levels starting from penultimate to zero level, assuming
+     * that the last level is ok */
+    size = MAX((bitmap->size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
+    for (lev = HBITMAP_LEVELS - 1; lev-- > 0; ) {
I'm always amazed at how much this notation (x-- > 0, or x --> 0, which is
even worse) confuses me. Or rather, I'm always amazed at how much it
confuses me and how much other people love it (as opposed to me, who's
rather unhappy every time he doesn't use all three "parameters" of a for
loop).

This time I wondered why we start at HBITMAP_LEVELS - 1 -- that's the
ultimate and not the penultimate level after all. But of course, the
loop condition is evaluated before the loop is executed for the first
time, so we will indeed start at HBITMAP_LEVELS - 2.

So, yeah, this is correct.

This strange notation is used in several places in hbitmap.c, so I've reused it. :) I don't like it either, but fixing it would mean fixing it everywhere in the file, in a separate patch, I think.
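
(For anyone else tripping over it, the condition with the post-decrement is
equivalent to running the body from HBITMAP_LEVELS - 2 down to 0, i.e. it
could also be spelled with all three for-loop clauses:)

for (lev = HBITMAP_LEVELS - 2; lev >= 0; lev--) {
    /* same body as in the patch */
}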


+        prev_size = size;
+        size = MAX((size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
+        memset(bitmap->levels[lev], 0, size * sizeof(unsigned long));
+
+        for (i = 0; i < prev_size; ++i) {
I hate using stand-alone pre-increment for scalars with a passion.

I just prefer to write it the same way for scalars and iterators, in both C and C++.


Unfortunately, checkpatch does not.

Reviewed-by: Max Reitz <address@hidden>

(All of the above are me just nagging and giving optional suggestions.)

+            if (bitmap->levels[lev + 1][i]) {
+                bitmap->levels[lev][i >> BITS_PER_LEVEL] |=
+                    1UL << (i & (BITS_PER_LONG - 1));
+            }
+        }
+    }
+
+    bitmap->levels[0][0] |= 1UL << (BITS_PER_LONG - 1);
+}
+
  void hbitmap_free(HBitmap *hb)
  {
      unsigned i;



--
Best regards,
Vladimir



