Re: [Qemu-devel] PCI access virtualization

From: Paul Brook
Subject: Re: [Qemu-devel] PCI access virtualization
Date: Thu, 5 Jan 2006 18:10:54 +0000
User-agent: KMail/1.8.3

On Thursday 05 January 2006 17:40, Mark Williamson wrote:
> > - IRQ sharing. Sharing host IRQs between native and virtualized devices
> > is hard because the host needs to ack the interrupt in the IRQ handler,
> > but doesn't really know how to do that until after it's run the guest to
> > see what that does.
> Could maybe have the (inevitable) kernel portion of the code grab the
> interrupt, and not ack it until userspace does an ioctl on a special file
> (or something like that?).  There are patches floating around for userspace
> IRQ handling, so I guess that could work.

This still requires cooperation from both sides (ie. both the host and guest).

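
The deferred-ack handshake Mark describes could be modelled like this (all names and the exact protocol are invented for illustration; this is a userspace sketch of the state machine, not real kernel code): the host's handler cannot ack the device, so it only masks the line and marks it pending; userspace injects the interrupt into the guest, lets the guest's driver service the device, and only then performs the ioctl-style ack that unmasks the line.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the deferred-ack scheme: the host kernel can't
 * ack the device until the guest's driver has run, so the host handler
 * only masks the line and flags it pending; userspace acks later. */
struct virq {
    bool masked;   /* host has masked this IRQ line */
    bool pending;  /* delivery to the guest is outstanding */
};

/* Host kernel interrupt handler: can't ack yet, so mask and defer. */
static void host_irq_handler(struct virq *v)
{
    v->masked = true;
    v->pending = true;
}

/* Userspace (qemu) polls for a pending IRQ to inject into the guest. */
static bool virq_poll(struct virq *v)
{
    if (!v->pending)
        return false;
    v->pending = false;
    return true;
}

/* After the guest's driver has serviced the device, userspace performs
 * the ioctl-equivalent ack and the host unmasks the line. */
static void virq_ack(struct virq *v)
{
    v->masked = false;
}
```

With IRQ sharing the catch is that the line stays masked for the other (native) users of that IRQ until the guest gets around to servicing it, which is why this needs cooperation from both sides.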
> > - DMA. qemu needs to rewrite DMA requests (in both directions) because
> > the guest physical memory won't be at the same address on the host.
> > Unlike ISA, where there's a fixed DMA engine, I don't think there's any
> > general way of
> I was under the impression that you could get reasonably far by emulating a
> few of the most popular commercial DMA engine chips and reissuing
> address-corrected commands to the host.  I'm not sure how common it is for
> PCI cards to use custom DMA chips instead, though...

IIUC PCI cards don't really have "DMA engines" as such. The PCI bridge just 
maps PCI address space onto physical memory. A bus-master PCI device can then 
make arbitrary accesses whenever it wants. I expect the default mapping is a 
1:1 mapping of the first 4G of physical RAM.
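
The rewriting problem boils down to this: a bus-master address emitted by the guest's driver is a guest-physical address, but the page it names lives somewhere else on the host. A toy per-page translation table makes the point (the table contents and helper name are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1ULL << PAGE_SHIFT)
#define GUEST_PAGES 4   /* toy guest with 4 pages of "RAM" */

/* Hypothetical map: guest-physical page -> host-physical page base.
 * Guest RAM is rarely contiguous on the host, which is exactly why the
 * DMA addresses a guest driver programs into a card must be rewritten. */
static const uint64_t gpa_to_hpa_page[GUEST_PAGES] = {
    0x80000, 0x2c000, 0x91000, 0x10000,   /* invented host pages */
};

/* Rewrite one guest-physical DMA address into a host-physical one.
 * Returns 0 for an address outside guest RAM. */
static uint64_t dma_rewrite(uint64_t gpa)
{
    uint64_t page = gpa >> PAGE_SHIFT;
    if (page >= GUEST_PAGES)
        return 0;
    return gpa_to_hpa_page[page] | (gpa & (PAGE_SIZE - 1));
}
```

The hard part isn't the arithmetic, it's that for a real passed-through card there is no general place to intercept the address before the device uses it: the guest programs it straight into device registers whose layout qemu doesn't know.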

> > There are patches that allow virtualization of PCI devices that don't use
> > either of the above features. It's sufficient to get some Network cards
> > working, but that's about it.
> I guess PIO should be easy to make work in any case.


> > > I vaguely heard of a feature present in Xen, which allows to assign PCI
> > > devices to one of the guests. I understand Xen works differently from
> > > QEMU, but maybe it would be possible to implement something similar.
> >
> > Xen is much easier because it cooperates with the host system (ie. xen),
> > so both the above problems can be solved by tweaking the guest OS
> > drivers/PCI subsystem setup.
> Yep, XenLinux redefines various macros that were already present to do
> guest-physical <-> host-physical address translations, so DMA Just Works
> (TM).
> > If you're testing specific drivers you could probably augment these
> > drivers to pass the extra required information to qemu. ie. effectively
> > use a special qemu pseudo-PCI interface rather than the normal piix PCI
> > interface.
> How about something like this:
> I'd imagine you could get away with a special header file with different
> macro defines (as for Xen, above), just in the driver in question, and a
> special "translation device / service" available to the QEmu virtual
> machine - could be as simple as "write the guest physical address to an IO
> port, returns the real physical address on next read".  The virt_to_bus
> (etc) macros would use the translation service to perform the appropriate
> translation at runtime.

That's exactly the sort of thing I meant. Ideally you'd just implement it as a 
different type of PCI bridge, and everything would just work. I don't know if 
Linux supports such heterogeneous configurations though.
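
The translation-device idea quoted above could be sketched as follows. Both ends are hypothetical (the port number, the handler names, and the fixed-offset mapping are invented; real code would consult qemu's page map): the guest's `virt_to_bus` replacement writes a guest-physical address to a magic IO port and reads back the host-physical address qemu resolved it to.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the "translation device": guest writes a guest-physical
 * address to a magic IO port, then reads back the host-physical address.
 * Port number and protocol are invented for illustration. */
#define XLATE_PORT 0x510

static uint64_t latched_gpa;   /* qemu-side state for the pseudo-device */

/* qemu's handler for a write to the translation port. */
static void xlate_port_write(uint16_t port, uint64_t gpa)
{
    if (port == XLATE_PORT)
        latched_gpa = gpa;
}

/* qemu's handler for the following read: answer with the translation.
 * Stand-in mapping: a fixed offset, as if guest RAM were one contiguous
 * block in the host (a real implementation would walk the page map). */
static uint64_t xlate_port_read(uint16_t port)
{
    if (port != XLATE_PORT)
        return ~0ULL;
    return latched_gpa + 0x40000000ULL;   /* invented host offset */
}

/* What the guest driver's virt_to_bus() replacement would boil down to. */
static uint64_t guest_virt_to_bus(uint64_t gpa)
{
    xlate_port_write(XLATE_PORT, gpa);
    return xlate_port_read(XLATE_PORT);
}
```

Hiding this behind a different PCI bridge type, as suggested above, would let unmodified drivers pick up the translation through the bridge's normal DMA-mapping path instead of a special header file.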

