
Re: [Qemu-devel] [RFC] QOMification of AXI streams


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] QOMification of AXI streams
Date: Mon, 11 Jun 2012 17:29:06 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 06/11/2012 05:00 PM, Benjamin Herrenschmidt wrote:
     system_memory
        alias ->   pci
        alias ->   ram
     pci
        bar1
        bar2
     pcibm
        alias ->   pci  (prio 1)
        alias ->   system_memory (prio 0)

cpu_physical_memory_rw() would be implemented as
memory_region_rw(system_memory, ...) while pci_dma_rw() would be
implemented as memory_region_rw(pcibm, ...).  This would allow
different address transformations for the two accesses.

Yeah, this is what I'm basically thinking although I don't quite
understand what  'pcibm' stands for.
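
For reference, here's a rough sketch (from memory, so the exact signatures may be off, and pci_space is just a placeholder for whatever container holds the BARs) of how such a 'pcibm' container could be put together with the existing memory API, using aliases plus overlap priorities exactly as in the diagram above:

/* Sketch only: sizes and offsets are made up for illustration. 'pcibm'
 * is presumably the address space seen by PCI bus-master DMA: PCI
 * memory first (higher priority), system memory underneath it. */
MemoryRegion pcibm, pci_alias, sysmem_alias;

memory_region_init(&pcibm, "pci-bus-master", UINT64_MAX);

memory_region_init_alias(&pci_alias, "pcibm-pci", pci_space,
                         0, memory_region_size(pci_space));
memory_region_init_alias(&sysmem_alias, "pcibm-sysmem",
                         get_system_memory(), 0,
                         memory_region_size(get_system_memory()));

memory_region_add_subregion_overlap(&pcibm, 0, &pci_alias, 1);    /* prio 1 */
memory_region_add_subregion_overlap(&pcibm, 0, &sysmem_alias, 0); /* prio 0 */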

My biggest worry is that we'll end up with parallel memory API
implementations split between memory.c and dma.c.

So it makes some amount of sense to use the same structure. For example,
if a device issues accesses, those could be caught by a sibling device
memory region... or go upstream.

Let's just look at downstream transformation for a minute...

We do need to be a bit careful about transformation here: I need to
double check, but I don't think we do downstream transformation today in
a clean way, and we'd have to add that. I.e., on pseries, the PCI host
bridge has a window in the CPU address space of [A...A+S], but accesses
to that window generate PCI cycles with different addresses [B...B+S]
(with A and B typically both naturally aligned on S, so it's just bit
masking in HW).
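
Spelling out the masking for clarity: with A and B both aligned on S, only the high-order bits change, i.e. something like

    pci_addr = B | (cpu_addr & (S - 1));   /* same as B + (cpu_addr - A) */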

I don't know that we really have bit masking done right in the memory API.

When we add a subregion, it always removes the offset from the address when it dispatches. More often than not that works out well, but for what you're describing above it sounds like you'd really want an adjusted address (one that could be transformed).

Today we generate a linear dispatch table. This prevents us from applying device-level transforms.

We somewhat implement that in spapr_pci today, since it works, but I
don't quite understand how :-) Or rather, the terminology "alias" seems
fairly bogus; we aren't really talking about aliases here...

So today we create a memory region with an "alias" (whatever that means)
that is [B...B+S] and add a subregion which is [A...A+S]. That seems to
work, but it's obscure.
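
From memory (so the details may be off, and the field names here are simplified rather than the actual spapr_pci ones), the trick boils down to this: the alias covers [B...B+S] of the PCI-side container, and mapping it at A in the system address space means the subregion offset subtraction is what performs the A-to-B translation:

/* CPU accesses to [A...A+S] come out as PCI addresses [B...B+S]. */
memory_region_init_alias(&phb->mmio_window, "pci-mmio-alias",
                         &phb->pci_mmio_space,  /* the PCI-side container */
                         B, S);                 /* alias starts at B, size S */
memory_region_add_subregion(get_system_memory(), A, &phb->mmio_window);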

If I was to implement that, I would make it so that the struct
MemoryRegion used in that hierarchy contains the address in the local
domain -and- the transformed address in the CPU domain. You could then
still sort regions by CPU address for quick access, and make this
offsetting a standard property of any memory region, since it's very
common for busses to drop address bits along the way.
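
Purely as an illustration of the idea (these fields don't exist in memory.c today):

struct MemoryRegion {
    /* ... existing fields ... */
    target_phys_addr_t bus_addr;  /* address in the bus's own (local) domain */
    target_phys_addr_t cpu_addr;  /* same range after upstream transforms,
                                     i.e. as seen by the CPU, usable as the
                                     sort key for quick lookup */
    /* ... */
};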

Now, if you want to use that structure for DMA, what you need to do
first, when an access happens, is walk up the region tree and scan all
the siblings at every level, which can be costly.

So if you stick with the notion of subregions, you would still have a single MemoryRegion at the PCI bus layer that has all of its children as subregions. Presumably that "scan for all siblings" is a binary search, which shouldn't really be that expensive considering that we're likely to have a shallow memory hierarchy.


Additionally, to handle IOMMUs etc., you need the option for a given
memory region to have functions that perform the transformation in the
upstream direction.

I think that transformation function lives in the bus layer MemoryRegion. It's a bit tricky though because you need some sort of notion of "who is asking". So you need:

dma_memory_write(MemoryRegion *parent, DeviceState *caller,
                 dma_addr_t addr, const void *data, size_t size);

This could be simplified at each layer via:

void pci_device_write(PCIDevice *dev, dma_addr_t addr,
                      const void *data, size_t size)
{
    /* the bus-level MemoryRegion knows how to transform the access */
    dma_memory_write(dev->bus->mr, DEVICE(dev), addr, data, size);
}
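
One (entirely hypothetical) shape such a bus-level hook could take, just to make the "who is asking" part concrete:

/* Hypothetical, not an existing QEMU structure: an IOMMU-capable bus
 * would hang this off its MemoryRegion; 'initiator' is the device that
 * started the access, so per-device translations become possible. */
typedef struct MemoryRegionDMAOps {
    dma_addr_t (*translate)(MemoryRegion *mr, DeviceState *initiator,
                            dma_addr_t addr, bool is_write);
} MemoryRegionDMAOps;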

To be true to the HW, each bridge should have its memory region, so a
setup with

       /pci-host
           |
           |--/p2p
                |
                |--/device

Any DMA done by the device would walk through the p2p region to the host
which would contain a region with transform ops.

However, at each level, you'd have to search for sibling regions that
may decode the address at that level before moving up, i.e. essentially
implement the equivalent of the PCI subtractive decoding scheme.

Not quite... subtractive decoding only happens for very specific devices IIUC, for instance a PCI-ISA bridge. Normally it's positive decoding, and a bridge has to describe the full region of MMIO/PIO that it handles.

So it's only necessary to traverse down the tree again for the very special case of PCI-ISA bridges. Normally you can tell just by looking at siblings.
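
Either way, the dispatch loop would look roughly like this (all the helper names and the dma_ops hook are hypothetical; this is just a sketch of the walk):

/* Check the positive decoders (siblings) at each level, apply any
 * upstream transform, then hand the access to the next bridge up. */
static bool dma_dispatch(MemoryRegion *bus_mr, DeviceState *initiator,
                         dma_addr_t addr, void *buf, size_t len,
                         bool is_write)
{
    MemoryRegion *mr = bus_mr;

    while (mr) {
        /* hypothetical: binary search of the subregions at this level */
        MemoryRegion *target = find_positive_decoder(mr, addr);
        if (target) {
            return do_region_access(target, addr, buf, len, is_write);
        }
        /* hypothetical hook as sketched earlier (iommu, bit masking...) */
        if (mr->dma_ops && mr->dma_ops->translate) {
            addr = mr->dma_ops->translate(mr, initiator, addr, is_write);
        }
        mr = mr->parent;   /* go upstream, towards the host bridge */
    }
    return false;          /* nothing claimed the access */
}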

That will be a significant overhead for your DMA ops I believe, though
doable.

Worst case scenario: 256 devices with, what, a 3-level-deep hierarchy? A binary search is log2(256) = 8 compares per level, so we're still talking about roughly 24 simple address compares. That shouldn't be so bad.

Then we'd have to add map/unmap to MemoryRegion as well, with the
understanding that they may not be supported at every level...

map/unmap can always fall back to bounce buffers.
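
For what it's worth, a minimal sketch of what that fallback tends to look like (hypothetical wrappers, but the same pattern as the existing bounce buffer in exec.c):

typedef struct DMABounce {
    void *ptr;
    target_phys_addr_t addr;
    size_t len;
    bool is_write;      /* will the device write to guest memory? */
} DMABounce;

/* No direct mapping available: use a bounce buffer.  For a device read
 * (is_write == false) we fill the buffer up front; for a device write
 * the data is copied back into guest memory at unmap time. */
static void *dma_map_bounce(DMABounce *b, target_phys_addr_t addr,
                            size_t len, bool is_write)
{
    b->ptr = g_malloc(len);
    b->addr = addr;
    b->len = len;
    b->is_write = is_write;
    if (!is_write) {
        cpu_physical_memory_read(addr, b->ptr, len);
    }
    return b->ptr;
}

static void dma_unmap_bounce(DMABounce *b)
{
    if (b->is_write) {
        cpu_physical_memory_write(b->addr, b->ptr, b->len);
    }
    g_free(b->ptr);
}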

So yeah, it sounds doable and it would handle what DMAContext doesn't
handle which is access to peer devices without going all the way back to
the "top level", but it's complex and ... I need something in qemu
1.2 :-)

I think we need a longer term vision here. We can find incremental solutions for the short term but I'm pretty nervous about having two parallel APIs only to discover that we need to converge in 2 years.

Regards,

Anthony Liguori


In addition, there's the memory barrier business, so we probably want to
keep the idea of having DMA-specific accessors...

Could we keep the DMAContext for now and just rename it to MemoryRegion
(keeping the accessors) when we go for a more in-depth transformation?

Cheers,
Ben.






