
From: Peter Maydell
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH v3 11/14] ioport: Switch dispatching to memory core layer
Date: Fri, 12 Jul 2013 19:26:58 +0100

On 12 July 2013 18:49, Anthony Liguori <address@hidden> wrote:
> Benjamin Herrenschmidt <address@hidden> writes:
>> On Fri, 2013-07-12 at 05:18 +0200, Alexander Graf wrote:
>>> We model a single system-wide I/O space today, and access to that one
>>> happens through your PCI host controller. I just messed up the
>>> terminology here.
>>
>> A single system-wide I/O space is broken. We have a separate I/O space
>> per PHB (PCI host bridge). That was working, AFAIK.
>
> Hrm, probably not.  We don't propagate I/O spaces very well today.
>
>> In any case, I completely object to all that business with conversion in
>> bridges.
>>
>> That's fundamentally WRONG.

It's not wrong when the hardware actually does a byteswap at
some level in the memory hierarchy. You can see this for instance
on ARMv7M systems, where byteswapping for bigendian happens at
an intermediate level that not all accesses go through:

 [CPU] ---->  [byteswap here] --> [memory and ext. devices]
         |
         -->  [internal memory mapped devices]

so some things always see little-endian data regardless.
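
As a minimal C sketch of that effect (hypothetical helpers, not QEMU
code): the swapper sits on one path only, so whether an access gets
swapped depends on the route it takes, not just on how the CPU is
configured.

  #include <stdbool.h>
  #include <stdint.h>

  /* Byteswap a 32-bit value. */
  static inline uint32_t bswap32(uint32_t v)
  {
      return ((v & 0x000000ffu) << 24) |
             ((v & 0x0000ff00u) <<  8) |
             ((v & 0x00ff0000u) >>  8) |
             ((v & 0xff000000u) >> 24);
  }

  /* The swapper is on the external path only, so internal devices
   * always observe little-endian data, whatever the CPU's setting. */
  static uint32_t bus_read(uint32_t le_value, bool via_external_path,
                           bool cpu_bigendian)
  {
      if (via_external_path && cpu_bigendian) {
          return bswap32(le_value);
      }
      return le_value;
  }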

>> The whole business of endianness in QEMU is a mess. In the end, the
>> only things that matter are:
>
> It's not as bad as you think, I suspect.
>
>>  * The endianness of a given memory access by the guest (which may or
>> may not be the endianness of the guest -> MSR:LE, byte-reversed load/store
>> instructions, etc.)
>
> Correct.
>
>> vs.
>>
>>  * The endianness of the target device register (and I say register ...
>> a framebuffer does NOT have endianness per se, and thus accesses to a BAR
>> mapping a "memory" range (framebuffer, ROM, ...) should go such that the
>> *order* of individual bytes is preserved, which typically means
>> untranslated).
>
> Yes.  To put it another way, an MMIO write is a store and depending on
> the VCPU, that will result in a write with a certain byte order.  That
> byte order should be preserved.
>
> However, what we don't model today, and why we have the silly
> endianness in MemoryRegionOps, is the fact that I/O may pass through
> multiple layers and those layers may change byte ordering.
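
For reference, that's the .endianness field in MemoryRegionOps; a
typical declaration looks like the sketch below (device name
hypothetical, callbacks stubbed out):

  #include "exec/memory.h"

  static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
  {
      return 0; /* device-specific */
  }

  static void mydev_write(void *opaque, hwaddr addr,
                          uint64_t val, unsigned size)
  {
      /* device-specific */
  }

  static const MemoryRegionOps mydev_ops = {
      .read = mydev_read,
      .write = mydev_write,
      /* declares the device's register byte order to the core */
      .endianness = DEVICE_LITTLE_ENDIAN,
  };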
>
> We jump through great hoops to have a flat dispatch table.  I've never
> liked it but that's what we do.  That means that in cases where a host
> bridge may do byte swapping, we cannot easily support that.

We could support that if we cared to -- you just need a byte-swapping
container MemoryRegion type (or simply a flag on existing containers,
I suppose). Then as you flatten the regions into the flat table, you keep
track of how many levels of byteswapping each region goes through,
and you end up with a single 'byteswap or not?' flag for each
section of your flat dispatch table.
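
Roughly, as a sketch in C (hypothetical structures -- the real
MemoryRegion and flatview code are considerably more involved): since
two swaps cancel out, each flat-table entry only needs the parity of
the byteswapping levels above it.

  #include <stdbool.h>

  typedef struct Region {
      bool byteswaps;              /* this container swaps bytes */
      int nchildren;
      struct Region **children;
  } Region;

  typedef struct FlatEntry {
      struct Region *leaf;
      bool byteswap;               /* net 'byteswap or not?' flag */
  } FlatEntry;

  /* Walk the tree, counting byteswapping containers above each leaf;
   * two swaps cancel, so only the parity of the count matters. */
  static void flatten(Region *r, int swap_levels, FlatEntry *out, int *n)
  {
      swap_levels += r->byteswaps;
      if (r->nchildren == 0) {
          out[*n].leaf = r;
          out[*n].byteswap = swap_levels & 1;
          (*n)++;
          return;
      }
      for (int i = 0; i < r->nchildren; i++) {
          flatten(r->children[i], swap_levels, out, n);
      }
  }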

(Our other serious endianness problem is that we don't really
do very well at supporting a TCG CPU arbitrarily flipping
endianness -- TARGET_WORDS_BIGENDIAN is a compile-time setting
and ideally it should not be.)
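
As a sketch of the difference (the runtime flag is hypothetical --
nothing like it exists today):

  #include <stdbool.h>

  /* Today: the guest's word order is fixed when the binary is built. */
  #ifdef TARGET_WORDS_BIGENDIAN
  enum { guest_words_bigendian = 1 };
  #else
  enum { guest_words_bigendian = 0 };
  #endif

  /* Hypothetical runtime alternative: a per-CPU flag that the TCG
   * load/store path would consult, so a guest flipping endianness
   * (MSR:LE, SCTLR.EE, ...) just updates state, no rebuild needed. */
  typedef struct CPUSketch {
      bool data_bigendian;
  } CPUSketch;

  static inline bool need_bswap(const CPUSketch *cpu, bool device_bigendian)
  {
      return cpu->data_bigendian != device_bigendian;
  }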

thanks
-- PMM


