qemu-devel

Guidance on emulating "sparse" address spaces


From: Jason Thorpe
Subject: Guidance on emulating "sparse" address spaces
Date: Wed, 23 Jun 2021 17:27:48 -0700

As a "learn the internals of Qemu a little better" exercise, I am planning to 
write models for some older Alpha systems, initially for one based on the 
LCA45.  One of the quirks of these old systems, though, is lack of byte/word 
load/store.  So, to support 8- and 16-bit accesses to I/O devices, the PCI 
interfaces on these systems implement "sparse" I/O and memory spaces (some also 
implement a "dense" space that can be used for e.g. frame buffers; more on that 
another time).

The way the sparse spaces work is that the address space is exploded out, and 
the CPU-visible address used to perform the access is computed using the 
desired bus address along with a field to specify the byte-enables.

Using the 21066's IOC as an example, PCI I/O addresses 0000.0000 - 00ff.ffff 
are mapped to 1.c000.0000 - 1.dfff.ffff.  The offset into the I/O space is 
shifted left by 5 bits, the byte-enable code is shifted left by 3 bits, and 
both are added to the base address of PCI I/O space in the system memory map 
(1.c000.0000), yielding the system physical address to use in a 32-bit 
load/store.  Software then does e.g. a 32-bit read from that location and 
extracts the value from the relevant field.
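Roughly, in C, the computation looks something like this (a minimal sketch of 
the description above; the base macro name and the byte-enable/size code are 
placeholders for illustration, not the actual 21066 encodings):

#include <inttypes.h>
#include <stdio.h>

#define IOC_PCI_IO_SPARSE_BASE  0x1c0000000ULL      /* 1.c000.0000 */

/* CPU physical address for a sparse access to PCI I/O address pci_io_addr. */
static uint64_t ioc_sparse_io_addr(uint32_t pci_io_addr, unsigned be_code)
{
    return IOC_PCI_IO_SPARSE_BASE
         + ((uint64_t)pci_io_addr << 5)   /* bus offset, exploded 32x     */
         + ((uint64_t)be_code << 3);      /* byte-enable/size field       */
}

int main(void)
{
    /* e.g. a byte access to I/O port 0x3f8, with a made-up be_code of 0 */
    printf("0x%" PRIx64 "\n", ioc_sparse_io_addr(0x3f8, 0));
    return 0;
}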

Further complicating things ... it's possible for the bus region that's mapped 
into the system address space not to begin at 0.  As a hypothetical example, 
you might have a PCI sparse memory space that maps PCI memory addresses 
1000.0000 - 1fff.ffff.  The 2117x chipsets used with EV5/EV56 are a concrete 
example of a PCI interface that implements multiple windows for each space 
type.
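Continuing the sketch above, the only change for such a window would be 
subtracting the window's PCI base before exploding the offset (the system-side 
base address and the names here are entirely made up):

/* Hypothetical sparse memory window mapping PCI 1000.0000 - 1fff.ffff. */
#define SPARSE_MEM_SYS_BASE  0x200000000ULL   /* hypothetical CPU-side base  */
#define SPARSE_MEM_PCI_BASE  0x10000000U      /* first PCI address in window */

static uint64_t sparse_mem_addr(uint32_t pci_addr, unsigned be_code)
{
    return SPARSE_MEM_SYS_BASE
         + ((uint64_t)(pci_addr - SPARSE_MEM_PCI_BASE) << 5)
         + ((uint64_t)be_code << 3);
}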

I'm trying to wrap my head around how to achieve this in QEMU.  I don't see an 
obvious way from my initial study of how the PCI code and memory regions work.  
Some guidance would be appreciated!

Thx.

-- thorpej



