Re: [Qemu-block] [Qemu-devel] [PATCH] ahci: enable pci bus master MemoryRegion before loading ahci engines
Tue, 10 Sep 2019 09:50:41 -0400
On 9/10/19 3:04 AM, Michael S. Tsirkin wrote:
> On Tue, Sep 10, 2019 at 01:18:37AM +0800, andychiu wrote:
>> If Windows 10 guests have enabled 'turn off hard disk after idle'
>> option in power settings, and the guest has a SATA disk plugged in,
>> the SATA disk will be turned off after a specified idle time.
>> If the guest is live migrated or saved/loaded with its SATA disk
>> turned off, the following error will occur:
>> qemu-system-x86_64: AHCI: Failed to start FIS receive engine: bad FIS
>> receive buffer address
>> qemu-system-x86_64: Failed to load ich9_ahci:ahci
>> qemu-system-x86_64: error while loading state for instance 0x0 of device
>> qemu-system-x86_64: load of migration failed: Operation not permitted
>> Trace logs show that a while after Windows 10 turns off a SATA disk
>> (IDE disks do not exhibit this behavior), it disables the
>> PCI_COMMAND_MASTER flag of the PCI device containing the AHCI device.
>> When the disk is turned back on,
>> the PCI_COMMAND_MASTER flag is restored first.
>> But if the guest is migrated or saved/loaded while the disk is off,
>> the post_load callback of ahci device, ahci_state_post_load(), will fail
>> at ahci_cond_start_engines() if the MemoryRegion
>> pci_dev->bus_master_enable_region is not enabled, with pci_dev pointing
>> to the PCIDevice struct containing the ahci device.
>> This patch enables pci_dev->bus_master_enable_region before calling
>> ahci_cond_start_engines() in ahci_state_post_load(), and restores the
>> MemoryRegion to its original state afterwards.
>> Signed-off-by: andychiu <address@hidden>
> Poking at PCI device internals like this seems fragile. And force
> enabling bus master can lead to unpleasantness like corrupting guest
> memory, unhandled interrupts, etc. E.g. it's quite reasonable,
> spec-wise, for the guest to move things around in memory while bus
> mastering is off.
> Can you teach ahci that region being disabled
> during migration is ok, and recover from it?
That's what I'm wondering.
I could try to just disable the FIS RX engine if the mapping fails, but
that will require a change to guest visible state.
My hunch, though, is that when windows re-enables the device it will
need to re-program the address registers anyway, so it might cope well
with the FIS RX bit getting switched off.
(I'm wondering if it isn't a mistake that QEMU is trying to re-map this
address in the first place. Is it legal for the PCI device to have bus
mastering disabled while we've held on to a mapping? Should there be some
callback where AHCI knows to invalidate mappings at that point...?)