Re: [Qemu-devel] [PATCH 3/3] prep: improve Raven PCI host emulation
From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH 3/3] prep: improve Raven PCI host emulation
Date: Mon, 2 Sep 2013 21:59:31 +0100
On 23 August 2013 19:52, Hervé Poussineau <address@hidden> wrote:
> - let it load a firmware (raw or elf image)
> - add a GPIO to let it handle the non-contiguous I/O address space
> - provide a bus master address space
> Also move isa_mem_base from PCI host to machine board.
> Simplify prep board code by relying on Raven PCI host to handle
> non-contiguous I/O, and to load BIOS (with a small hack required
> for Open Hack'Ware).
That seems like quite a lot to be doing in a single patch. Could
any of it be split out for easier code review?
> +static uint64_t prep_io_read(void *opaque, hwaddr addr,
> +                             unsigned int size)
> +{
> +    PREPPCIState *s = opaque;
> +    uint8_t buf[4];
> +    uint64_t val;
> +
> +    if (s->contiguous_map == 0) {
> +        /* 64 KB contiguous space for IOs */
> +        addr &= 0xFFFF;
> +    } else {
> +        /* 8 MB non-contiguous space for IOs */
> +        addr = (addr & 0x1F) | ((addr & 0x007FFF000) >> 7);
> +    }
> +
> +    address_space_read(&s->pci_io_as, addr + 0x80000000, buf, size);
> +    memcpy(&val, buf, size);
> +    return val;
> +}
Since this is just forwarding accesses to another address
space, I'm fairly sure you could do it with a suitable collection
of alias and container memory regions.
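Roughly along these lines (an untested sketch only; the field and
function names are invented, and it glosses over the 0x80000000
offset you apply into pci_io_as). Each 4K page of the 8MB window
exposes one 32-byte chunk of the contiguous I/O space, so the
non-contiguous view can be built purely out of aliases:

static void raven_build_noncontiguous_io(PREPPCIState *s)
{
    unsigned page;

    memory_region_init(&s->pci_io_non_contiguous, OBJECT(s),
                       "pci-io-non-contiguous", 8 * 1024 * 1024);
    /* page N of the window aliases bytes [N * 32, N * 32 + 31]
     * of the contiguous PCI I/O region */
    for (page = 0; page < (8 * 1024 * 1024) / 4096; page++) {
        MemoryRegion *chunk = g_new(MemoryRegion, 1);
        memory_region_init_alias(chunk, OBJECT(s), "pci-io-chunk",
                                 &s->pci_io, page * 32, 32);
        memory_region_add_subregion(&s->pci_io_non_contiguous,
                                    page * 4096, chunk);
    }
}

The contiguous case is then just a single 64K alias, and the
contiguous_map GPIO becomes a matter of enabling/disabling the two
subregions rather than checking a flag on every access. (Granted,
2048 alias regions is a lot, so it may not be worth it.)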
> +
> +static void prep_io_write(void *opaque, hwaddr addr,
> +                          uint64_t val, unsigned int size)
> +{
> +    PREPPCIState *s = opaque;
> +    uint8_t buf[4];
> +
> +    if (s->contiguous_map == 0) {
> +        /* 64 KB contiguous space for IOs */
> +        addr &= 0xFFFF;
> +    } else {
> +        /* 8 MB non-contiguous space for IOs */
> +        addr = (addr & 0x1F) | ((addr & 0x007FFF000) >> 7);
> +    }
> +
> +    memcpy(buf, &val, size);
> +    address_space_write(&s->pci_io_as, addr + 0x80000000, buf, size);
> +}
...if you don't do it that way, please at least factor out the
address conversion code from the read and write routines.
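For instance (sketch only, helper name made up):

static hwaddr prep_io_address(PREPPCIState *s, hwaddr addr)
{
    if (s->contiguous_map == 0) {
        /* 64 KB contiguous space for IOs */
        addr &= 0xFFFF;
    } else {
        /* 8 MB non-contiguous space for IOs */
        addr = (addr & 0x1F) | ((addr & 0x007FFF000) >> 7);
    }
    return addr;
}

so that both prep_io_read() and prep_io_write() just call
prep_io_address(s, addr) before doing the address_space access.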
> +
> +static const MemoryRegionOps prep_io_ops = {
> +    .read = prep_io_read,
> +    .write = prep_io_write,
> +    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .impl.max_access_size = 4,
> +    .valid.unaligned = true,
> +};
> +
> static int prep_map_irq(PCIDevice *pci_dev, int irq_num)
> {
>     return (irq_num + (pci_dev->devfn >> 3)) & 1;
> @@ -111,6 +175,19 @@ static void prep_set_irq(void *opaque, int irq_num, int level)
>     qemu_set_irq(pic[irq_num] , level);
> }
>
> +static AddressSpace *raven_pcihost_set_iommu(PCIBus *bus, void *opaque,
> +                                             int devfn)
> +{
> +    PREPPCIState *s = opaque;
> +    return &s->bm_as;
> +}
My versatilepb PCI DMA patches have a very similar set_iommu
callback. I wonder if we're going to end up with one PCI host controller
that actually uses the IOMMU facilities and a lot which use it as
a rather boilerplate-heavy way to pass an AddressSpace to the
core PCI code...
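For most of these hosts the only "use" of the facility ends up being
the one-line registration in the host's init/realize code, presumably
something like (sketch, assuming the current pci_setup_iommu()
signature and a pci_bus field on the host state):

    pci_setup_iommu(&s->pci_bus, raven_pcihost_set_iommu, s);

plus the callback above that unconditionally returns the same
AddressSpace regardless of devfn.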
-- PMM