
[Qemu-devel] vpc max table entries rounding


From: Michael Winslow
Subject: [Qemu-devel] vpc max table entries rounding
Date: Mon, 1 May 2017 20:54:31 +0000

I recently had a problem uploading a dynamic vpc image created by qemu-img 
to Azure.  When qemu-img creates a dynamic vpc image, the BAT size is 
computed by rounding up PhysicalSize/BlockSize.  It needs to be rounded up 
this way so that the final, possibly partial, block of the disk is covered.
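
For illustration (assuming the default 2 MiB block size, i.e. 4096 sectors per 
block), a disk whose sector count is not a whole multiple of the block size 
still needs a BAT entry for its final, partial block:

    4097 sectors / 4096 sectors per block = 1 remainder 1  ->  2 BAT entries needed

so plain integer division would lose that last partial block, which is why the 
calculation rounds up.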

When the image is uploaded to Azure, the dynamic image gets converted to a 
fixed type image (Azure VMs only work with fixed type vpc images).  In doing 
so, BlockSize * MaxTableEntries bytes of space are allocated; that is, the 
footer is placed after that much space within the fixed image file.  This is 
done without changing the current size or original size fields in the image 
footer.  When Azure then goes to instantiate a VM based on this disk image, it 
finds that the space allocated does not match the disk size specified in the 
image footer, and rejects the image on that basis.  The error I'm seeing looks 
like this:

VHD footer 'Current Size' field validation failed. VHD for disk 
'cli272bc210215b6e94-os-1492546232759' with blob 
https://foo.blob.core.windows.net/disks/blah.vhd specifies size (272734617600) 
which does not match VHD blob size (272736715264) - VHD Footer Size (512).
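
For what it's worth, assuming the image was created with the default 2 MiB 
block size, the sizes in that error differ by exactly one block:

    272736715264 (blob size) - 512 (footer) - 272734617600 (Current Size) = 2097152 = 2 MiB

i.e. the fixed image produced by the uploader is one whole block larger than 
the size recorded in the footer.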

This seems to me to be a problem in the upload utility.  However, qemu-img 
could make dynamic vpc images that are still usable with Azure upload 
utilities, provided the user chooses the right disk size, if it did the 
rounding calculation slightly differently.  The calculation is currently done 
in block/vpc.c create_dynamic_disk() like this:

    num_bat_entries = (total_sectors + block_size / 512) / (block_size / 512);

This rounds up.  But if total_sectors happens to be a whole-number multiple of 
the block size in sectors (block_size / 512), it allocates one more BAT entry 
than is necessary.  By changing this calculation to:

    num_bat_entries = (total_sectors - 1 + block_size / 512) / (block_size / 512);

enough BAT entries are still allocated in the case where total_sectors is a 
whole-number multiple of the block size, and the rounding up is preserved 
where it is not.  The upload utility will then no longer change the disk size 
when the user chooses a disk size that is a multiple of the block size.
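
For anyone who wants to see the off-by-one, here is a minimal standalone 
sketch (not the actual qemu code path; the disk size is derived from the 
Current Size in the error above, assuming the default 2 MiB block size):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t block_size = 2 * 1024 * 1024;         /* default vpc block size: 2 MiB */
        uint32_t sectors_per_block = block_size / 512; /* 4096 */
        /* 272734617600 bytes / 512 = 532684800 sectors, an exact multiple of 4096 */
        uint64_t total_sectors = 532684800ULL;

        /* current calculation in create_dynamic_disk() */
        uint64_t old_entries = (total_sectors + sectors_per_block) / sectors_per_block;
        /* proposed calculation */
        uint64_t new_entries = (total_sectors - 1 + sectors_per_block) / sectors_per_block;

        /* prints: old=130051 new=130050 -- one surplus BAT entry with the current code */
        printf("old=%" PRIu64 " new=%" PRIu64 "\n", old_entries, new_entries);
        return 0;
    }

With a total_sectors that is not a multiple of sectors_per_block, both 
formulas give the same result, so the change only affects the exact-multiple 
case.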


