
Re: [Qemu-devel] [RFC PATCH] vpc: Ignore geometry for large images


From: Charles Arnold
Subject: Re: [Qemu-devel] [RFC PATCH] vpc: Ignore geometry for large images
Date: Thu, 12 Feb 2015 14:01:57 -0700

>>> On 2/12/2015 at 12:05 PM, Peter Lieven <address@hidden> wrote: 

> On 12.02.2015 at 18:18, Charles Arnold <address@hidden> wrote:
> 
>>>>> On 2/12/2015 at 03:23 AM, Kevin Wolf <address@hidden> wrote:
>>> On 12.02.2015 at 11:09, Peter Lieven wrote:
>>>>> On 12.02.2015 at 11:06, Kevin Wolf wrote:
>>>>> On 12.02.2015 at 11:02, Peter Lieven wrote:
>>>>>>> On 12.02.2015 at 10:58, Kevin Wolf wrote:
>>>>>>> On 12.02.2015 at 10:23, Peter Lieven wrote:
>>>>>>>>> On 10.02.2015 at 15:53, Kevin Wolf wrote:
>>>>>>>>> On 10.02.2015 at 15:00, Peter Lieven wrote:
>>>>>>>>>>> On 10.02.2015 at 14:54, Kevin Wolf wrote:
>>>>>>>>>>> On 10.02.2015 at 14:42, Jeff Cody wrote:
>>>>>>>>>>>> On Tue, Feb 10, 2015 at 02:34:14PM +0100, Kevin Wolf wrote:
>>>>>>>>>>>>> On 10.02.2015 at 12:41, Peter Lieven wrote:
>>>>>>>>>>>>>> On 09.02.2015 at 17:09, Kevin Wolf wrote:
>>>>>>>>>>>>>>> The CHS calculation as done per the VHD spec imposes a maximum
>>>>>>>>>>>>>>> image size of ~127 GB. Real VHD images exist that are larger than
>>>>>>>>>>>>>>> that.
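
For context, the ~127 GB figure follows directly from the maximum geometry the
spec allows:

    65535 cylinders * 16 heads * 255 sectors/track = 267,382,800 sectors
    267,382,800 sectors * 512 bytes/sector = 136,899,993,600 bytes, i.e. about 127.5 GiB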
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Apparently there are two separate non-standard ways to achieve
>>>>>>>>>>>>>>> this: You could use more heads than the spec does - this is the
>>>>>>>>>>>>>>> option that qemu-img create chooses.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> However, other images exist where the geometry is set to the
>>>>>>>>>>>>>>> maximum (65536/16/255), but the actual image size is larger.
>>>>>>>>>>>>>>> Until now, such images are truncated at 127 GB when opening them
>>>>>>>>>>>>>>> with qemu.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> This patch changes the vpc driver to ignore geometry in this case
>>>>>>>>>>>>>>> and only trust the size field in the header.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Signed-off-by: Kevin Wolf <address@hidden>
>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Peter, I'm replacing some of your code in the hope that the new
>>>>>>>>>>>>>>> approach is more generally valid. Of course, I haven't tested if
>>>>>>>>>>>>>>> your case with disk2vhd is still covered. Could you check this,
>>>>>>>>>>>>>>> please?
>>>>>>>>>>>>>> I checked this and found that disk2vhd always sets CHS to
>>>>>>>>>>>>>> 65535ULL * 16 * 255 independent of the real size.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> But, as the conversion to CHS may have an error, it may be best
>>>>>>>>>>>>>> to ignore CHS completely and always derive total_sectors
>>>>>>>>>>>>>> from footer->size unconditionally.
>>>>>>>>>>>>>> I had a look at what VirtualBox does and they only rely on
>>>>>>>>>>>>>> footer->size. If they alter the size or create an image, they write
>>>>>>>>>>>>>> the new size into the footer and recalculate CHS by the formula
>>>>>>>>>>>>>> found in the appendix of the original spec.
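
The formula in the spec appendix looks roughly like the following C sketch;
variable and function names are illustrative, not taken from QEMU or
VirtualBox:

    #include <stdint.h>

    /* Sketch of the CHS calculation from the appendix of the VHD spec. */
    static void calculate_geometry(uint64_t total_sectors, uint16_t *cyls,
                                   uint8_t *heads, uint8_t *secs_per_cyl)
    {
        uint32_t cyls_times_heads, h, spt;

        /* The spec caps the CHS-addressable size at 65535 x 16 x 255 sectors. */
        if (total_sectors > (uint64_t)65535 * 16 * 255) {
            total_sectors = (uint64_t)65535 * 16 * 255;
        }

        if (total_sectors >= (uint64_t)65535 * 16 * 63) {
            spt = 255;
            h = 16;
            cyls_times_heads = total_sectors / spt;
        } else {
            spt = 17;
            cyls_times_heads = total_sectors / spt;
            h = (cyls_times_heads + 1023) / 1024;
            if (h < 4) {
                h = 4;
            }
            if (cyls_times_heads >= h * 1024 || h > 16) {
                spt = 31;
                h = 16;
                cyls_times_heads = total_sectors / spt;
            }
            if (cyls_times_heads >= h * 1024) {
                spt = 63;
                h = 16;
                cyls_times_heads = total_sectors / spt;
            }
        }

        *cyls = cyls_times_heads / h;
        *heads = h;
        *secs_per_cyl = spt;
    }

For any size at or above the cap this sketch yields 65535/16/255, which is why
the geometry alone cannot describe images larger than roughly 127 GB.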
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Check vhdCreateImage, vhdOpen in
>>>>>>>>>>>>>> http://www.virtualbox.org/svn/vbox/trunk/src/VBox/Storage/VHD.cpp
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The original spec also says that the CHS values are intended for
>>>>>>>>>>>>>> use by an ATA controller only.
>>>>>>>>>>>>> The problem with just using footer->size back then when I
>>>>>>>>>>>>> implemented this was that from the perspective of a VirtualPC guest
>>>>>>>>>>>>> run in qemu, the size of its hard disk would change, which you don't
>>>>>>>>>>>>> want either. Going from VPC to qemu would be ugly, but mostly
>>>>>>>>>>>>> harmless as the disk only grows. But if you use an image in qemu
>>>>>>>>>>>>> where the disk looks larger and then go back to VPC which respects
>>>>>>>>>>>>> geometry, your data may be truncated.
>>>>>>>>>>>> I believe the vpc "creator" field is different if the image was
>>>>>>>>>>>> created by Virtual PC, versus created by Hyper-V ("vpc" and "win",
>>>>>>>>>>>> respectively, I think).  Perhaps we could use that to infer a guest
>>>>>>>>>>>> image came from VirtualPC, and thus not use footer->size in that
>>>>>>>>>>>> scenario?
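
A minimal sketch of that check, assuming the creator strings mentioned in this
thread; the function name and exact string matching are illustrative, not the
actual QEMU code:

    #include <stdbool.h>
    #include <string.h>

    /* Sketch only: trust the CHS geometry for images created by Virtual PC
     * ("vpc"), and fall back to the footer's size field for everything else
     * ("win" from Hyper-V, "d2v" from disk2vhd, "qemu", ...). */
    static bool creator_uses_chs(const char *creator_app)
    {
        return strncmp(creator_app, "vpc", 3) == 0;
    }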
>>>>>>>>>>> Right, I think we discussed that before. Do you remember the outcome
>>>>>>>>>>> of that discussion? I seem to remember that we had a conclusion, but
>>>>>>>>>>> apparently it was never actually implemented.
>>>>>>>>>>> 
>>>>>>>>>>> Would your proposal be to special-case "vpc" to apply the geometry,
>>>>>>>>>>> and everything else (including "win", "d2v" and "qemu") would use the
>>>>>>>>>>> footer field?
>>>>>>>>>> That sounds reasonable. In any case we have to fix qemu-img create
>>>>>>>>>> to not create out-of-spec geometry for images larger than 127G.
>>>>>>>>>> It should set the correct footer->size and then calculate the geometry.
>>>>>>>>> Do I understand correctly that you just volunteered to fix up that
>>>>>>>>> whole thing? ;-)
>>>>>>>> I knew that this would happen ;-)
>>>>>>>> 
>>>>>>>> Regarding the C/H/S calculation: I was just wondering if we should
>>>>>>>> not set this to the maximum (= invalid?) for all newly created images.
>>>>>>>> That is what disk2vhd does.
>>>>>>> CHS is what Virtual PC relies on. So I guess if you did that, you
>>>>>>> would render images unusable by it. Are you sure that disk2vhd does this
>>>>>>> always? I would have thought that it only does it for large images.
>>>>>> At least 2.0.1 (the latest available version) does this, as well as the
>>>>>> version that I used when I added the hack for the d2v creator.
>>>>>> 
>>>>>> Virtual PC would not be able to use images we create with qemu-img create
>>>>>> if we use footer->size (which I propose to rename to footer->cur_size, btw)
>>>>>> to calculate bs->total_sectors, because we might write data to the end of
>>>>>> the image which gets truncated in CHS format.
>>>>> These kinds of problems are why I'd like to keep CHS and size always
>>>>> consistent when creating an image with qemu-img.
>>>> 
>>>> Okay, then I would vote for your RFC patch + fixing qemu-img create
>>>> to not generate out-of-spec CHS values and just set the maximum, which
>>>> would then make vpc_open use footer->size.
>>> 
>>> Really the RFC patch or what we discussed above ("vpc" creator = CHS,
>>> everything else = footer->size)? Once I know what we prefer, I'll send
>>> the real patch.
>>> 
>>> As for heads > 16, that would essentially mean reverting 258d2edb.
>>> Should be easy to do; the harder part is probably the commit message
>>> explaining why it's helpful and safe. Note that the commit message of
>>> 258d2edb claims that it's not out of spec. I _think_ we can do the
>>> revert with a good explanation, but I'll leave that to you.
>>> 
>>> (CCed Charles who wrote that commit)
>> 
>> IIUC, the plan is to revert my old commit and use the footer->size field to
>> describe images greater than 127 GB. This change would prevent other tools
>> (Virtual PC, Xen's vhd-util and maybe others) from reading images greater
>> than 127 GB, because the heads field would be forced back to 16 and these
>> tools won't know to check the footer->size field. Is there any reason not
>> to keep the original commit and still use the footer->size field?
> 
> Do you have a pointer to a spec that is newer than 2006?

No, that is the most recent.

> The one I have describes CHS calculation up to 65535 x 16 x 255 sectors.
> That is set as the maximum if total sectors is higher. I would do the same
> when writing a footer. In vhd_open I would derive total_sectors from
> C x H x S, except for the case that it is exactly 65535 x 16 x 255. In this
> case I would take footer->size / 512.
> VirtualBox does it that way, and the comment from Stefan in the commit
> message for your patch suggests that you observed a similar behaviour for
> Hyper-V.
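
In code form, the open-path behaviour described above might look roughly like
this sketch; the function and parameter names are illustrative and this is not
the actual patch:

    #include <stdint.h>

    /* Sketch of the size selection described above: trust the CHS geometry
     * unless it is pegged at the spec maximum (65535 x 16 x 255), in which
     * case derive the size from the footer's size field instead. */
    static uint64_t vhd_total_sectors(uint16_t cyls, uint8_t heads,
                                      uint8_t secs_per_cyl,
                                      uint64_t footer_size)
    {
        uint64_t chs_sectors = (uint64_t)cyls * heads * secs_per_cyl;

        if (chs_sectors == (uint64_t)65535 * 16 * 255) {
            return footer_size / 512;   /* geometry maxed out, use the footer */
        }
        return chs_sectors;
    }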

Right, although this was too long ago for me to remember the specifics :)
In the end, I think supporting at least the 2 TB size allowed by the spec is
what we need, while not breaking existing images.

- Charles



