qemu-devel

Re: [Qemu-devel] 1.1.1 -> 1.1.2 migrate /managedsave issue


From: Doug Goldstein
Subject: Re: [Qemu-devel] 1.1.1 -> 1.1.2 migrate /managedsave issue
Date: Tue, 23 Oct 2012 15:38:47 -0500

On Mon, Oct 22, 2012 at 6:23 AM, Avi Kivity <address@hidden> wrote:
> On 10/22/2012 09:04 AM, Philipp Hahn wrote:
>> Hello Doug,
>>
>> On Saturday 20 October 2012 00:46:43 Doug Goldstein wrote:
>>> I'm using libvirt 0.10.2 and I had qemu-kvm 1.1.1 running all my VMs.
>> ...
>>> I had upgraded to qemu-kvm 1.1.2
>> ...
>>> qemu: warning: error while loading state for instance 0x0 of device 'ram'
>>> load of migration failed
>>
>> That error can come from many things. For me it was that the PXE ROM images
>> for the network cards had been updated as well. Their size crossed the next
>> power-of-two boundary, so kvm had to allocate a differently sized ROM region,
>> which changed the PCI configuration registers where the size of that region
>> is stored. On loading the saved state those sizes are compared, the check
>> fails, and KVM aborts loading with that not-very-helpful message.
>>
>> So you might want to check whether your case is similar to mine.
>>
>> I diagnosed that by using gdb to single-step kvm until I found
>> hw/pci.c#get_pci_config_device() returning -EINVAL.
>>
>
> Seems reasonable.  Doug, please verify to see if it's the same issue or
> another one.

Sorry it took a little while to juggle the breakpoints with libvirt and
qemu to get gdb attached correctly. But yes, I can confirm it:
vmstate_load_state() calls field->info->get(), which calls
get_pci_config_device(), and that is returning -EINVAL.

>
> Juan, how can we fix this?  It's clear that the option ROM size has to
> be fixed and not change whenever the blob is updated.  This will fix it
> for future releases.  But what to do about the ones in the field?
>

Any recommendations to fix this? Or do I need to kill the saved state
and start over?

Thanks.
-- 
Doug Goldstein


