
Re: [Qemu-devel] [PATCH 0/1] pci: allow 0 address for PCI IO/MEM regions


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 0/1] pci: allow 0 address for PCI IO/MEM regions
Date: Tue, 13 Jan 2015 17:54:46 +0200

On Tue, Jan 13, 2015 at 09:34:42AM -0600, Michael Roth wrote:
> Quoting Michael S. Tsirkin (2015-01-13 04:12:19)
> > On Mon, Jan 12, 2015 at 07:24:06AM -0600, Michael Roth wrote:
> > > Quoting Michael Roth (2014-12-23 13:33:35)
> > > > This patch enables the programming of address 0 for IO/MMIO BARs for
> > > > PCI devices.
> > > > 
> > > > It was originally included as part of a series implementing PCI
> > > > hotplug for pseries guests, where it is needed due to the fact
> > > > that pseries guests access IO space via MMIO, and that IO
> > > > space is dedicated to PCI devices, with RTAS calls being used in
> > > > place of common/legacy IO ports such as config-data/config-address.
> > > > 
> > > > Thus, the entire range is unhindered by legacy IO ports, and
> > > > pseries guest kernels may attempt to program an IO BAR to address 0
> > > > as a result.
> > > > 
> > > > This has led to a conflict with the existing PCI config space
> > > > emulation code, where it has been assumed that 0 address are always
> > > > invalid.
> > > > 
> > > > Some background from discussions can be viewed here:
> > > > 
> > > >   https://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg03063.html
> > > > 
> > > > The general summary from that discussion seems to be that 0-addresses
> > > > are not (at least, are no longer) prohibited by current versions of
> > > > the PCI spec, and that the same should apply for MMIO addresses (where
> > > > allowing 0-addresses is also needed for some ARM-based PCI
> > > > controllers).
> > > > 
> > > > This patch includes support for 0-address MMIO BARs based on that
> > > > discussion.
> > > > 
> > > > One still-lingering concern is whether this change will impact
> > > > compatibility with guests where 0-addresses are invalid. There was
> > > > some discussion on whether this issue could be addressed using
> > > > memory region priorities, but I think that's still an open question
> > > > that we can hopefully address here.
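A minimal sketch of the check being relaxed, as I understand it (a simplified stand-in for QEMU's BAR-address logic; the function name, the sentinel, and the allow_zero_addr parameter are illustrative, not the actual code):

```c
#include <stdint.h>

#define PCI_BAR_UNMAPPED ((uint64_t)-1)  /* sentinel: BAR not mapped */

/* Simplified stand-in for the BAR-address validity check. Before the
 * patch, a value of 0 was treated like "not yet programmed"; the
 * change is to make that rejection conditional. */
uint64_t bar_address(uint64_t new_addr, uint64_t size, int allow_zero_addr)
{
    /* All-ones is what guests write while sizing the BAR. */
    if (new_addr == PCI_BAR_UNMAPPED) {
        return PCI_BAR_UNMAPPED;
    }
    /* Legacy behaviour: address 0 means "unmapped". */
    if (!allow_zero_addr && new_addr == 0) {
        return PCI_BAR_UNMAPPED;
    }
    /* BAR addresses are naturally aligned to their power-of-two size. */
    return new_addr & ~(size - 1);
}
```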
> > > 
> > > Ping
> > 
> > Michael, I can't apply this patch to all platforms: guests program a 0
> > address and expect that not to override system devices.
> > 
> > If you want a quick hack, you need to find a way to make this apply
> > only to pseries.
> > 
> > One quick work-around for pseries is to limit your patch to devices
> > behind a pci to pci bridge: I think pseries places most devices behind
> > such bridges, am I right?
> 
> Not to my knowledge. It's true that pseries guests on PowerVM generally
> (but not always) get a dedicated host bridge for hotplugged
> devices/expansion slots. Maybe that's what you're referring to?
> 
> > 
> > The right solution is to locate, for each target, all system devices
> > that overlap with pci space, and figure out what the correct priorities
> > are. It's far from trivial, which is likely why no one did this yet.
> 
> So there are cases where system devices should have higher priorities,
> and others where PCI devices can re-use the addresses?

I'm not sure. On PC the APIC should have higher priority but doesn't.
We'll have to examine each system on a case-by-case basis.

> Would it be possible to have a check to see if a BAR overlaps a region,
> and then examine the mr->owner of the region to determine if it's a PCI
> device? If it's not, then assume it's a system device and avoid mapping
> the BAR as we do now? This would maintain the current behavior while
> allowing devices to use address 0 when it hasn't been claimed by a
> system device.
> 
> I'm sure it's not that easy, but if a catch-all solution of this
> sort is acceptable/workable, I can look into it.
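A rough sketch of that proposal (the struct and lookup are toy stand-ins; in QEMU this would involve mr->owner and an object-type check, and the real overlap handling is considerably more involved):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: a region already mapped at the BAR's target address,
 * plus a flag standing in for "is mr->owner a PCI device?". */
typedef struct {
    const char *name;
    bool owner_is_pci;
} Region;

/* Allow mapping a BAR at address 0 only when nothing is there, or
 * when whatever is there belongs to another PCI device (which the
 * guest is free to rearrange). A system device keeps the address,
 * preserving today's behaviour. */
bool can_map_bar_at_zero(const Region *existing)
{
    return existing == NULL || existing->owner_is_pci;
}
```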

Sounds like just giving system devices higher priority than
PCI devices.

I think we already do this for PC:
        commit 83d08f2673504a299194dcac1657a13754b5932a
        Author: Michael S. Tsirkin <address@hidden>
        Date:   Tue Oct 29 13:57:34 2013 +0100

            pc: map PCI address space as catchall region for not mapped addresses

but we need to find and fix all other targets.
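The scheme from that commit amounts to priority arbitration: the PCI address space is mapped as a low-priority catch-all (via memory_region_add_subregion_overlap()), so any system region mapped at the same address wins. A toy model of the arbitration, not QEMU's actual flatview rendering:

```c
/* Toy model of overlap resolution between a system region and the
 * PCI catch-all: on overlap, the higher-priority region wins. The
 * catch-all is registered at a lower priority than everything else,
 * so system devices shadow it wherever they are mapped. */
typedef struct {
    const char *name;
    int priority;
} MappedRegion;

const MappedRegion *resolve_overlap(const MappedRegion *a,
                                    const MappedRegion *b)
{
    return a->priority >= b->priority ? a : b;
}
```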



> Otherwise, as far as quick fixes go, I was thinking of adding an
> allow_zero_addr flag to TYPE_PCI_HOST_BRIDGE that can be set by
> instance_init for the various implementations of it. That would allow us
> to enable it for pseries, as well as (I think) some of the ARM use-cases
> Peter mentioned in the older thread.

It's ugly, but possible.
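The flag-based approach could look roughly like this (struct and field names are illustrative; the actual patch would hang the flag off the TYPE_PCI_HOST_BRIDGE state and have each implementation's instance_init set it):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative host-bridge state carrying the opt-in flag. */
typedef struct {
    bool allow_zero_addr;   /* set by e.g. the pseries instance_init */
} PCIHostBridgeState;

/* Generic BAR-mapping code consults the bridge the device sits
 * under: address 0 is accepted only where the bridge opted in. */
bool bar_addr_acceptable(const PCIHostBridgeState *hb, uint64_t addr)
{
    return addr != 0 || hb->allow_zero_addr;
}
```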


> > 
> > Another issue is that - assuming what we are targeting is purely
> > theoretical PCI spec compliance - the patch does not go far enough.
> > We should also drop the check for the all-ones pattern in the
> > same function; that, too, should be a platform thing.
> > 
> > In particular, things break badly if guests size BARs using e.g.
> > 8 single-byte accesses, each one being a write of 0xff followed by a read
> > and write of 0x00, as opposed to two 32-bit writes of 0xffffffff followed
> > by reads and then writes of 0x00000000 - which no one seems to do
> > but is definitely legal.
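For reference, the conventional 32-bit sizing sequence derives the size from the read-back mask; a sketch of just the arithmetic (config-space accessors omitted):

```c
#include <stdint.h>

/* Standard BAR-size derivation: after the guest writes all ones,
 * the bits that read back as zero are the ones the device ignores,
 * so inverting the type-masked value and adding one yields the
 * size. A byte-at-a-time guest instead passes through intermediate
 * values such as 0x000000ff, which the emulation must also
 * tolerate. */
uint32_t bar_size_from_probe(uint32_t readback, uint32_t type_mask)
{
    return ~(readback & type_mask) + 1;
}
```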
> 
> Yikes. Personally I'm more focused on a specific bug, but perhaps
> controlling this via a host-bridge flag also makes sense?

Since it's a theoretical issue (no known guest needs the fix)
I'm not inclined to add more hacks.
The right fix is probably to teach e.g. kvm to find out the legal
range of physical addresses from CPU flags, and to
skip ones that are out of range when passing them to the kernel.


> > 
> > -- 
> > MST


