
Re: [Qemu-devel] live migration vs device assignment (motivation)


From: Lan, Tianyu
Subject: Re: [Qemu-devel] live migration vs device assignment (motivation)
Date: Fri, 11 Dec 2015 15:32:04 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.4.0



On 12/11/2015 12:11 AM, Michael S. Tsirkin wrote:
> On Thu, Dec 10, 2015 at 10:38:32PM +0800, Lan, Tianyu wrote:


>> On 12/10/2015 7:41 PM, Dr. David Alan Gilbert wrote:
>>>> Ideally, we would be able to leave the guest driver unmodified, but that
>>>> requires the hypervisor or QEMU to be aware of the device, which means
>>>> we may need a driver in the hypervisor or QEMU to handle the device on
>>>> behalf of the guest driver.
>>> Can you answer the question of when you use your code -
>>>     at the start of migration or
>>>     just before the end?

>> Just before stopping the VCPU in this version; we inject a VF mailbox IRQ
>> to notify the driver, if the IRQ handler is installed.
>> The QEMU side will also check this via the faked PCI migration capability,
>> and the driver will set the status during its open() or resume() callback.
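
To make that concrete, below is a minimal sketch of how such a faked
migration capability could be laid out and consumed by the guest driver.
The vendor-capability offsets, field names, and status values are
assumptions for illustration only, not the layout from the actual patches.

    /* Hypothetical guest-visible layout for the faked capability, exposed
     * as a vendor-specific capability (PCI_CAP_ID_VNDR). All offsets,
     * names and values below are illustrative assumptions. */
    #include <linux/pci.h>

    #define MIG_CAP_VF_STATUS   3    /* byte offset: written by guest driver */
    #define MIG_CAP_EVENT       4    /* byte offset: written by QEMU */

    #define VF_MIGRATION_DISABLED   0x0  /* no handler bound: block migration */
    #define VF_MIGRATION_ENABLED    0x1  /* handler installed: IRQ notify ok */

    /* Called from the driver's open()/resume() paths (and with DISABLED
     * from close()/suspend()) to publish whether the migration IRQ
     * handler is in place. */
    static void vf_set_migration_status(struct pci_dev *pdev, u8 status)
    {
            int cap = pci_find_capability(pdev, PCI_CAP_ID_VNDR);

            if (cap)
                    pci_write_config_byte(pdev, cap + MIG_CAP_VF_STATUS,
                                          status);
    }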

> Right, this is the "good path" optimization. Whether this buys anything
> compared to just sending a reset to the device when the VCPU is stopped
> needs to be measured. In any case, we probably do need a way to
> interrupt the driver on the destination to make it reconfigure the
> device - otherwise it might take seconds for it to notice.  And a way to
> make sure the driver can handle this surprise reset, so we can block
> migration if it can't.


Yes, we need such a way to notify the driver about migration status and to
do the reset or restore operation on the destination machine. My original
design is to take advantage of the device's IRQ to do that. The driver can
tell QEMU which IRQ it prefers for such tasks and whether that IRQ is
enabled and bound to a handler. We can discuss the details in the other
thread.
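
Continuing the hypothetical capability layout sketched above, the driver
side could look roughly like this. The event value, vector offset, and
helper names are again assumptions, and pci_irq_vector() stands in for
whatever vector lookup the real driver would use:

    #include <linux/interrupt.h>
    #include <linux/pci.h>
    #include <linux/workqueue.h>

    #define MIG_CAP_VECTOR      5    /* byte offset: preferred MSI-X vector */
    #define MIG_EVENT_RESUMED   0x2  /* from QEMU: now on the destination */

    struct vf_priv {
            struct pci_dev *pdev;
            int mig_cap;                     /* offset of faked capability */
            struct work_struct restore_work; /* resets and reprograms the VF */
    };

    static irqreturn_t vf_migration_irq(int irq, void *data)
    {
            struct vf_priv *priv = data;
            u8 event;

            pci_read_config_byte(priv->pdev, priv->mig_cap + MIG_CAP_EVENT,
                                 &event);
            if (event == MIG_EVENT_RESUMED) {
                    /* Fresh VF on the destination: reset and reprogram
                     * rings/filters instead of trusting stale state. */
                    schedule_work(&priv->restore_work);
            }
            return IRQ_HANDLED;
    }

    /* In open(): bind the handler on a spare vector, then advertise it. */
    static int vf_enable_migration_irq(struct vf_priv *priv, unsigned int vec)
    {
            int ret = request_irq(pci_irq_vector(priv->pdev, vec),
                                  vf_migration_irq, 0, "vf-migration", priv);
            if (ret)
                    return ret;

            pci_write_config_byte(priv->pdev, priv->mig_cap + MIG_CAP_VECTOR,
                                  vec);
            vf_set_migration_status(priv->pdev, VF_MIGRATION_ENABLED);
            return 0;
    }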


>>> It would be great if we could avoid changing the guest; but at least your
>>> guest driver changes don't actually seem to be that hardware specific;
>>> could your changes actually be moved to the generic PCI level so they
>>> could be made to work for lots of drivers?

>> It is impossible to use one common solution for all devices unless the
>> PCIe spec documents it clearly, and I think one day it will. But until
>> then, we need some workarounds in the guest driver to make it work, even
>> if they look ugly.

>> Yes, so far there is no hardware migration support

> VT-d supports setting the dirty bit in the PTE in hardware.

Actually, current hardware doesn't support this. The VT-d spec documents
the dirty bit only for first-level translation, which requires devices to
issue DMA requests with a PASID (process address space identifier). Most
devices don't support that feature.
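
For background on why that dirty bit would be attractive if devices did
support PASID: first-level translation reuses the IA-32e paging format, so
hardware sets the Dirty flag (bit 6) of a leaf entry on a write, and a
hypervisor could harvest dirty DMA pages by scanning the tables. A minimal
sketch, assuming a single 4KiB-page last-level table and made-up names:

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define FL_PTE_PRESENT  (1ULL << 0)
    #define FL_PTE_DIRTY    (1ULL << 6)   /* set by hardware on a write */

    /* Walk one last-level table (512 entries), mark dirty pages in a
     * bitmap and clear the Dirty flag so the next pass sees new writes.
     * A real implementation must also flush the IOTLB before trusting
     * a cleared Dirty bit. */
    static int harvest_dirty(u64 *pte_table, unsigned long *bitmap)
    {
            int i, ndirty = 0;

            for (i = 0; i < 512; i++) {
                    u64 pte = pte_table[i];

                    if ((pte & (FL_PTE_PRESENT | FL_PTE_DIRTY)) ==
                        (FL_PTE_PRESENT | FL_PTE_DIRTY)) {
                            __set_bit(i, bitmap);
                            pte_table[i] = pte & ~FL_PTE_DIRTY;
                            ndirty++;
                    }
            }
            return ndirty;
    }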


>> and it's hard to modify
>> bus-level code.

> Why is it hard?

As Yang said, the concern is that the PCI spec doesn't document how to do
migration.


>> It also will block implementation on Windows.

> Implementation of what?  We are discussing motivation here, not
> implementation.  E.g. Windows drivers typically support surprise
> removal; should you use that, you get some working code for free.  Just
> stop worrying about it.  Make it work, worry about closed-source
> software later.

>>> Dave



