Re: [Qemu-devel] [RFC] QOMification of AXI stream


From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] QOMification of AXI stream
Date: Mon, 11 Jun 2012 13:35:43 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 06/11/2012 12:31 PM, Avi Kivity wrote:
> On 06/11/2012 06:01 PM, Anthony Liguori wrote:
>> On 06/11/2012 08:39 AM, Peter Maydell wrote:
>>> On 11 June 2012 14:15, Anthony Liguori <address@hidden> wrote:
>>>> From what you said earlier, it's basically:
>>>>
>>>> 'write data to this address'
>>>> 'read data from this address'
>>>>
>>>> An interface that implements this is DMAContext.  Forget about the
>>>> fact that 'DMA' is in the name.  It's really the symmetric version
>>>> of a MemoryRegion.
>>>
>>> ...so can we fix the name?

>> Perhaps we should just make MemoryRegion work in both directions?
>>
>> Ben/Avi, what do you guys think?


> The other direction is currently cpu_physical_memory_rw().

Right, and with benh's proposal, it's dma_memory_rw(). It also adds a DMAContext parameter.

I can't help thinking that the contents of DMAContext are awfully similar to MemoryRegionOps.
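
For context, this is roughly the shape of the two interfaces being compared. A sketch from memory, not copied from the tree; the exact types and whatever benh's series actually puts inside DMAContext may differ:

#include <stdint.h>

/* Stand-ins for the QEMU types, only to keep the sketch self-contained. */
typedef uint64_t target_phys_addr_t;
typedef uint64_t dma_addr_t;
typedef struct DMAContext DMAContext;
typedef enum {
    DMA_DIRECTION_TO_DEVICE,
    DMA_DIRECTION_FROM_DEVICE,
} DMADirection;

/* Responding side: a device answers transactions through MemoryRegionOps. */
typedef struct MemoryRegionOps {
    uint64_t (*read)(void *opaque, target_phys_addr_t addr, unsigned size);
    void (*write)(void *opaque, target_phys_addr_t addr,
                  uint64_t data, unsigned size);
    /* ...endianness and access-size constraints elided... */
} MemoryRegionOps;

/* Issuing side (benh's series, approximately): a device initiates
 * transactions through a DMAContext. */
int dma_memory_rw(DMAContext *dma, dma_addr_t addr,
                  void *buf, dma_addr_t len, DMADirection dir);

Modulo the DMAContext handle and the value-vs-buffer distinction, both are describing the same kind of transaction, just from opposite ends.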

> We do need to support issuing transactions from arbitrary points in the
> memory hierarchy, but I don't think a device's MemoryRegion is the right
> interface.  Being able to respond to memory transactions, and being able
> to issue them are two different things.

But an IOMMU has to be able to respond to a memory transaction. Many of the things it may do (like endianness conversion) also happen to already exist in the memory API.
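
As a thought experiment, an IOMMU responding through the existing memory API could look something like the sketch below. Purely hypothetical code, not from the tree: iommu_translate() is made up, and the host-endian shortcut in the callbacks glosses over the real marshalling:

typedef struct IOMMUState {
    target_phys_addr_t pt_base;   /* device page-table base, etc. */
} IOMMUState;

/* Made-up translation hook: walk the device page tables and return the
 * system address (identity mapping here to keep the sketch short). */
static target_phys_addr_t iommu_translate(IOMMUState *s,
                                          target_phys_addr_t addr)
{
    return addr;
}

static uint64_t iommu_read(void *opaque, target_phys_addr_t addr,
                           unsigned size)
{
    IOMMUState *s = opaque;
    uint64_t val = 0;

    /* Respond to the transaction by re-issuing the translated access. */
    cpu_physical_memory_read(iommu_translate(s, addr), &val, size);
    return val;
}

static void iommu_write(void *opaque, target_phys_addr_t addr,
                        uint64_t val, unsigned size)
{
    IOMMUState *s = opaque;

    cpu_physical_memory_write(iommu_translate(s, addr), &val, size);
}

static const MemoryRegionOps iommu_ops = {
    .read = iommu_read,
    .write = iommu_write,
};

The responding half already fits MemoryRegionOps; the open question is only how the forwarded access gets re-issued into the right place in the hierarchy.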

> In fact we will probably have to add more details to the memory
> hierarchy.  Currently (for example) we model the memory hub passing
> transactions destined for the various pci windows to the pci bus, but we
> don't model the memory hub receiving a pci-initiated transaction and
> passing it to the system bus.  We simply pretend it originated on the
> system bus in the first place.  Perhaps we need parallel hierarchies:
>
>    system_memory
>       alias -> pci
>       alias -> ram
>    pci
>       bar1
>       bar2
>    pcibm
>       alias -> pci  (prio 1)
>       alias -> system_memory (prio 0)
>
> cpu_physical_memory_rw() would be implemented as
> memory_region_rw(system_memory, ...) while pci_dma_rw() would be
> implemented as memory_region_rw(pcibm, ...).  This would allow different
> address transformations for the two accesses.

Yeah, this is what I'm basically thinking although I don't quite understand what 'pcibm' stands for.
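
Concretely, I read the hierarchy above as something like the following with today's memory API. Rough sketch only: memory_region_rw() doesn't exist yet, and the INT64_MAX sizes and region names are placeholders:

/* Assume the usual QEMU includes; the memory_region_* calls all exist today. */
static MemoryRegion pci, pcibm;
static MemoryRegion pcibm_pci_alias, pcibm_sysmem_alias;

static void build_pci_bus_master_view(MemoryRegion *system_memory)
{
    memory_region_init(&pci, "pci", INT64_MAX);
    memory_region_init(&pcibm, "pcibm", INT64_MAX);

    /* BARs keep getting mapped under "pci" exactly as they are today. */

    /* The bus-master view: PCI wins over system memory wherever the two
     * overlap (prio 1 vs prio 0), matching the hierarchy above. */
    memory_region_init_alias(&pcibm_pci_alias, "pcibm-pci",
                             &pci, 0, INT64_MAX);
    memory_region_init_alias(&pcibm_sysmem_alias, "pcibm-sysmem",
                             system_memory, 0, INT64_MAX);
    memory_region_add_subregion_overlap(&pcibm, 0, &pcibm_pci_alias, 1);
    memory_region_add_subregion_overlap(&pcibm, 0, &pcibm_sysmem_alias, 0);

    /* Then, hypothetically:
     *   cpu_physical_memory_rw(...)  ->  memory_region_rw(system_memory, ...)
     *   pci_dma_rw(...)              ->  memory_region_rw(&pcibm, ...)
     */
}

If that's the idea, the alias/priority machinery already expresses the per-initiator view; what's missing is only an entry point that issues a transaction into a region other than system_memory.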

My biggest worry is that we'll end up with parallel memory API implementations split between memory.c and dma.c.

Regards,

Anthony Liguori





