Re: [Qemu-devel] [PATCH v1 22/22] petalogix-ml605: Make the LMB visible only to the CPU


From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH v1 22/22] petalogix-ml605: Make the LMB visible only to the CPU
Date: Mon, 16 Dec 2013 13:29:47 +0000

On 16 December 2013 12:46, Andreas Färber <address@hidden> wrote:
> Thanks for this series. I've been on vacation so couldn't review the
> previous RFC yet... I'm not entirely happy with the way this is pushing
> work to the machines here and wonder if we can simplify that some more:
>
> For one, I don't like the allocation of AddressSpace and MemoryRegion at
> machine level. Would it be possible to enforce allocating a per-CPU
> AddressSpace and MemoryRegion at cpu.c level, ideally as embedded value
> rather than pointer field? Otherwise CPU hot-add is going to get rather
> complicated and error-prone.

This seems like a good place to stick my oar in about how I
think this should work in the long term...

My view is that AddressSpace and/or MemoryRegion pointers
(links?) should be how we wire up the addressing on machine
models, in an analogous manner to the way we wire up IRQs.
So to take A9MPCore as an example (rough code sketch after the list):

 * each individual ARMCPU has an AddressSpace * property
 * the 'a9mpcore' device should create those ARMCPU objects,
   and also the AddressSpaces to pass to them
 * the AddressSpace for each core is different, because it
   has the private peripherals for that CPU only (this
   allows us to get rid of all the shim memory regions which
   look up the CPU via current_cpu->cpu_index)
 * each core's AddressSpace has as a 'background region'
   the single AddressSpace which the board and/or SoC model
   has passed to the a9mpcore device
 * if there's a separate SoC device object from the board
   model, then again the AddressSpace the SoC device passes
   to a9mpcore is the result of the SoC mapping the various
   SoC-internal devices into an AddressSpace it got passed
   by the board
 * if the SoC has a DMA engine of some kind then the DMA
   engine should also be passed an appropriate AddressSpace
   [and we thus automatically correctly model the way the
   hardware DMA engine can't see the per-CPU peripherals]
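
In code, each core's view could be put together roughly like
this. This is only a sketch: NUM_CORES, PRIV_BASE, PRIV_SIZE and
the region names are invented, the 'priv' region stands in for
the real SCU/GIC/timer block, and the AddressSpace/MemoryRegion
link property on ARMCPU doesn't exist yet.

#include <glib.h>
#include "qom/object.h"
#include "hw/qdev-core.h"
#include "exec/memory.h"

#define NUM_CORES 2
#define PRIV_BASE 0x1e000000ULL   /* made-up private-peripheral base */
#define PRIV_SIZE 0x2000ULL

static void a9mpcore_wire_cpu_memory(DeviceState *dev, MemoryRegion *soc_mr)
{
    int i;

    for (i = 0; i < NUM_CORES; i++) {
        MemoryRegion *container = g_new0(MemoryRegion, 1);
        MemoryRegion *background = g_new0(MemoryRegion, 1);
        MemoryRegion *priv = g_new0(MemoryRegion, 1);
        AddressSpace *as = g_new0(AddressSpace, 1);

        /* Per-core container spanning the whole address range */
        memory_region_init(container, OBJECT(dev), "core-container",
                           UINT64_MAX);

        /* Background region: an alias of whatever MemoryRegion the
         * SoC/board handed to a9mpcore, at the lowest priority. */
        memory_region_init_alias(background, OBJECT(dev), "core-background",
                                 soc_mr, 0, UINT64_MAX);
        memory_region_add_subregion_overlap(container, 0, background, 0);

        /* This core's private peripherals, mapped over the background
         * at higher priority, so only this CPU sees them. */
        memory_region_init(priv, OBJECT(dev), "a9-private", PRIV_SIZE);
        memory_region_add_subregion_overlap(container, PRIV_BASE, priv, 1);

        /* One AddressSpace per core, rooted at that core's container;
         * this (or just the container MR) is what would be handed to
         * the ARMCPU via a link property. */
        address_space_init(as, container, "core-memory");
    }
}

A DMA engine in the SoC would simply be handed soc_mr (or an
AddressSpace rooted at it) instead of one of the per-core
containers, so it never sees the private peripherals.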

You'll notice that this essentially gets rid of the "system
memory" idea...

I don't have a strong opinion about the exact details of who
is allocating what at what level, but I do think we need to
move towards an idea where the consumer of an address space
is passed an appropriate AS/MR constructed by the same thing
that creates that consumer.

I'm also not entirely clear on which points in this API
should be dealing with MemoryRegions and which with
AddressSpaces. Perhaps the CPU object should create its
AddressSpace internally and the thing it's passed as a
property should be a MemoryRegion * ?
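
As a minimal sketch of that last variant, assuming a "memory"
link property backed by a MemoryRegion *memory field and an
embedded AddressSpace 'as' on CPUState (neither exists today,
they just illustrate the idea), the common CPU realize hook
could do something like:

#include "qom/cpu.h"
#include "exec/memory.h"
#include "qapi/error.h"

static void cpu_common_realizefn(DeviceState *dev, Error **errp)
{
    CPUState *cpu = CPU(dev);

    /* cpu->memory assumed to be a MemoryRegion * link property
     * filled in by whoever created the CPU (board, SoC or the
     * mpcore container above). */
    if (!cpu->memory) {
        error_setg(errp, "no memory region set for CPU");
        return;
    }

    /* The CPU builds its own AddressSpace internally, rooted at
     * whatever region its creator decided it should see. */
    address_space_init(&cpu->as, cpu->memory, "cpu-memory");
}

The creator would then just set that link property to the
per-core container region before realizing the CPU.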

> TCG loads/saves should always have a CPU[Arch]State associated. Would it
> work to always alias the system MemoryRegion again at cpu.c level with
> lowest priority for range [0,UINT64_MAX] and let derived CPUs do per-CPU
> MemoryRegions by adding MemoryRegions with higher priority?

I think that we should definitely not have individual CPUs
looking at the system memory region directly.

thanks
-- PMM
