From: Peter Maydell
Subject: Re: [PATCH 5/6] exec: Restrict 32-bit CPUs to 32-bit address space
Date: Mon, 1 Jun 2020 11:45:45 +0100

On Mon, 1 Jun 2020 at 09:09, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> On 5/31/20 9:09 PM, Peter Maydell wrote:
> > [*] Strictly speaking, it would depend on the
> > maximum physical address size used by any transaction
> > master in the system -- in theory you could have a
> > 32-bit-only CPU and a DMA controller that could be
> > programmed with 64-bit addresses. In practice the
> > CPU can generally address at least as much of the
> > physical address space as any other transaction master.
>
> Yes, I tried the Malta with a 32-bit core, while the GT64120 northbridge
> addresses 64 bits:

> From "In practice the CPU can generally address at least as much of the
> physical address space as any other transaction master." I understand
> for QEMU @system address space must be as big as the largest transaction
> a bus master can do".

That depends on what happens for transactions that are off the end
of the range, I suppose -- usually a 32-bit CPU system design will,
for obvious reasons, not put RAM or devices above 4GB, so if the
behaviour for a DMA access past 4GB is the same whether there's
nothing mapped there or whether the access is just off-the-end, then
it doesn't matter how QEMU models it. I haven't tested to see what an
off-the-end transaction does, though.
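
(Roughly, a probe for that would look something like the sketch below --
hypothetical code from inside an imaginary DMA device model that already
has its own AddressSpace; what the MemTxResult actually comes back as is
exactly the part I haven't tested:)

/* Hypothetical probe from a DMA device model: read just past the 4GB
 * boundary through the device's own AddressSpace and look at the
 * MemTxResult.  Whether this is a decode error or just reads as zeroes
 * is the board-dependent question. */
#include "qemu/osdep.h"
#include "exec/memory.h"

static bool dma_probe_past_4gb(AddressSpace *dma_as)
{
    uint8_t buf[4];
    MemTxResult r = address_space_read(dma_as, 0x100000000ULL,
                                       MEMTXATTRS_UNSPECIFIED,
                                       buf, sizeof(buf));
    return r == MEMTX_OK;
}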

I'm inclined to say that since 'hwaddr' is always a 64-bit type we should
stick to having the system memory address space be 64 bits.
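
('hwaddr' here is the physical-address type from include/exec/hwaddr.h,
which is unconditionally 64 bits wide, whatever the target:)

typedef uint64_t hwaddr;        /* always 64 bits, regardless of target */
#define HWADDR_MAX UINT64_MAX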

> I think what confuses me is what QEMU means by 'system-memory'; I
> understand it historically as the address space of the first CPU core.

Historically I think it was more "there is only one address space and
this is it": it wasn't the first CPU's address space, it was what *every*
CPU saw, and what every DMA device used, because the APIs
pre-MemoryRegion had no concept of separate address spaces at all.
So system-memory starts off as a way to continue to provide those
old semantics in an AddressSpace/MemoryRegion design, and we've
then gradually increased the degree to which different transaction
masters use different AddressSpaces. Typically, system-memory
today is "whatever's common to all CPUs" (and then you
overlay per-CPU devices etc. on top of that), but it might have
less stuff than that in it (I have a feeling the arm-sse SoCs put
less stuff into system-memory than you might expect). How much
freedom you have to not put stuff into the system-memory address
space depends on things like whether the guest architecture's
target/foo code or some DMA device model on the board still uses
APIs that don't specify the address space and instead use the
system address space.
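
(As a rough sketch of what that looks like in board code -- not any real
machine, all names made up -- RAM goes into the shared system-memory
container, while a DMA master that should only see a 32-bit window gets
its own AddressSpace rooted in an alias onto system memory:)

#include "qemu/osdep.h"
#include "qemu/units.h"
#include "qapi/error.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"
#include "exec/memory.h"

static void example_board_init(MachineState *machine)
{
    MemoryRegion *sysmem = get_system_memory();
    MemoryRegion *ram = g_new(MemoryRegion, 1);
    MemoryRegion *dma_window = g_new(MemoryRegion, 1);
    AddressSpace *dma_as = g_new(AddressSpace, 1);

    /* RAM seen by every master that still goes via the system address space */
    memory_region_init_ram(ram, NULL, "board.ram", machine->ram_size,
                           &error_fatal);
    memory_region_add_subregion(sysmem, 0, ram);

    /* A DMA master that should only see the low 4GB gets its own
     * AddressSpace, rooted in an alias onto system memory */
    memory_region_init_alias(dma_window, NULL, "dma-window", sysmem,
                             0, 4 * GiB);
    address_space_init(dma_as, dma_window, "dma");
}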

thanks
-- PMM


