On Mon, Mar 02, 2015 at 06:20:00PM -0500, John Snow wrote:
+static void dirty_bitmap_truncate(BdrvDirtyBitmap *bitmap, uint64_t size)
+{
+ /* Should only be frozen during a block backup job, which should have
+ * blocked any resize actions. */
+ assert(!bdrv_dirty_bitmap_frozen(bitmap));
+ hbitmap_truncate(bitmap->bitmap, size);
+}
+
+void bdrv_dirty_bitmap_truncate(BlockDriverState *bs)
+{
+ BdrvDirtyBitmap *bitmap;
+ uint64_t size = bdrv_nb_sectors(bs);
+
+ QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
+ if (bdrv_dirty_bitmap_frozen(bitmap)) {
+ continue;
+ }
+ dirty_bitmap_truncate(bitmap, size);
If you inline this function here, the discussion about assert() vs
skipping frozen bitmaps goes away. Why is dirty_bitmap_truncate() a
separate function at all?
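Something like this, perhaps (just a sketch of the inlined version built
from the code above; with the frozen check and the truncate side by side
there is nothing left to assert):

    void bdrv_dirty_bitmap_truncate(BlockDriverState *bs)
    {
        BdrvDirtyBitmap *bitmap;
        uint64_t size = bdrv_nb_sectors(bs);

        QLIST_FOREACH(bitmap, &bs->dirty_bitmaps, list) {
            /* Frozen bitmaps belong to a running block job; skip them. */
            if (bdrv_dirty_bitmap_frozen(bitmap)) {
                continue;
            }
            hbitmap_truncate(bitmap->bitmap, size);
        }
    }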
+/**
+ * hbitmap_truncate:
+ * @hb: The bitmap to change the size of.
+ * @size: The number of elements to change the bitmap to accommodate.
+ *
+ * truncate or grow an existing bitmap to accommodate a new number of elements.
+ * This may invalidate existing HBitmapIterators.
+ */
+void hbitmap_truncate(HBitmap *hb, uint64_t size);
Please include tests/test-hbitmap.c test cases; a sketch of one case
follows the list below.
Interesting cases:
1. New size equals old size (odd but possible)
2. Growing less than sizeof(unsigned long)
3. Growing more than sizeof(unsigned long)
4. Shrinking less than sizeof(unsigned long)
5. Shrinking more than sizeof(unsigned long)
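For example, case 3 might look roughly like this (a sketch using only the
public hbitmap API; the existing helpers in tests/test-hbitmap.c may offer
a nicer way to write it):

    static void test_hbitmap_truncate_grow_large(void)
    {
        const uint64_t size = 1 << 16;                /* initial elements */
        const uint64_t diff = 8 * sizeof(unsigned long) + 1; /* > one long */
        HBitmap *hb = hbitmap_alloc(size, 0);         /* granularity 0 */

        hbitmap_set(hb, size - 1, 1);                 /* mark the last element */
        hbitmap_truncate(hb, size + diff);
        g_assert(hbitmap_get(hb, size - 1));          /* old bits survive */
        g_assert(!hbitmap_get(hb, size));             /* new area starts clear */
        hbitmap_free(hb);
    }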
+void hbitmap_truncate(HBitmap *hb, uint64_t size)
+{
+ bool truncate;
+ unsigned i;
+ uint64_t num_elements = size;
+ uint64_t old;
+
+ /* Size comes in as logical elements, adjust for granularity. */
+ size = (size + (1ULL << hb->granularity) - 1) >> hb->granularity;
+ assert(size <= ((uint64_t)1 << HBITMAP_LOG_MAX_SIZE));
+ truncate = size < hb->size;
Here "truncate" means "shrink".
"shrink" is a clearer name since the function name is already "truncate"
but that concept includes both increasing or decreasing size.
It would be clearer to calculate 'old' alongside 'size' on each loop
iteration. The size[] field can then be dropped, 'old' becomes
'old_size', and 'size' becomes 'new_size':
old_size = hb->size;
for (i = HBITMAP_LEVELS; i-- > 0; ) {
    old_size = MAX((old_size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
    new_size = MAX((new_size + BITS_PER_LONG - 1) >> BITS_PER_LEVEL, 1);
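    /* (continuing the sketch: the loop body can then use new_size and
     * old_size directly, so the size[] array and 'old' disappear) */
    hb->levels[i] = g_realloc_n(hb->levels[i], new_size,
                                sizeof(unsigned long));
    if (!shrink) {
        memset(&hb->levels[i][old_size], 0,
               (new_size - old_size) * sizeof(unsigned long));
    }
}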
+ hb->levels[i] = g_realloc_n(hb->levels[i], size, sizeof(unsigned long));
+ if (!truncate) {
+ memset(&hb->levels[i][old], 0x00,
+ (size - old) * sizeof(*hb->levels[i]));
+ }
+ }
+ assert(size == 1);
+
+ /* Clear out any "extra space" we may have that the user didn't request:
+ * It may have garbage data in it, now. */
+ if (truncate) {
+ /* Due to granularity fuzziness, we may accidentally reset some of
+ * the last bits that are actually valid. So, record the current value,
+ * reset the "dead range," then re-set the one element we care about.
+ */
+ uint64_t fix_count = (hb->size << hb->granularity) - num_elements;
+ if (fix_count) {
+ bool set = hbitmap_get(hb, num_elements - 1);
+ hbitmap_reset(hb, num_elements, fix_count);
+ if (set) {
+ hbitmap_set(hb, num_elements - 1, 1);
+ }
+ }
Calling hbitmap_reset() with an out-of-bounds index seems hacky to me.
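One way to avoid it (a sketch; ROUND_UP here is the usual qemu/osdep.h
helper): clear the dead range before the levels are resized, while those
indices are still in bounds, starting at the first granularity-aligned
element past num_elements so the last valid group is never touched:

    if (shrink) {
        /* hb->size still holds the old size at this point, so every
         * index in the dead range is valid and no get/reset/set dance
         * is needed. */
        uint64_t start = ROUND_UP(num_elements, 1ULL << hb->granularity);
        uint64_t fix_count = (hb->size << hb->granularity) - start;

        if (fix_count) {
            hbitmap_reset(hb, start, fix_count);
        }
    }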