
From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [PATCH qemu] spapr-pci: Make MMIO spacing a machine property and increase it
Date: Mon, 21 Mar 2016 13:15:05 +1100
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.0

On 03/09/2016 12:04 PM, David Gibson wrote:
On Tue, Mar 08, 2016 at 10:50:51AM +1100, Alexey Kardashevskiy wrote:
On 03/04/2016 03:13 PM, Alexey Kardashevskiy wrote:
On 03/04/2016 02:39 PM, David Gibson wrote:
On Thu, Mar 03, 2016 at 12:42:53PM +1100, Alexey Kardashevskiy wrote:
The pseries machine supports multiple PHBs. Each PHB's MMIO/IO space is
mapped into the CPU address space starting at SPAPR_PCI_WINDOW_BASE plus
an offset calculated from the PHB's index and
SPAPR_PCI_WINDOW_SPACING, which is currently defined as 64GB.

Since the default 32-bit DMA window uses the first 2GB of the MMIO space,
the amount of MMIO that PCI devices can actually use is reduced
to 62GB. This is a problem if the user wants to use devices with
huge BARs.

For example, two PCI functions of an NVIDIA K80 adapter being passed through
will exceed this limit, as they have 16M + 16G + 32M BARs which
(when aligned) will need 64GB.

This makes the MMIO window base and spacing sPAPRMachineState properties.
It keeps the old values for pseries machines before 2.6 and increases the
spacing to 128GB, so the MMIO space becomes 126GB.

This changes the default value of sPAPRPHBState::mem_win_size to -1 for
pseries-2.6 and adds its setup to spapr_phb_realize.

Signed-off-by: Alexey Kardashevskiy <address@hidden>

So, in theory I dislike the spapr_pci device reaching into the machine
type to get the spacing configuration.  But.. I don't know of a better
way to achieve the desired outcome.

We could drop @index and the spacing, and require the user to specify the MMIO
window start (at least) for every additional PHB.

So what is the decision? :)

There isn't one.  I really don't know how to handle this, trying to
talk to some people for ideas.

Got any new ideas?

