Re: [PATCH v4 11/14] vfio-user: IOMMU support for remote device


From: Stefan Hajnoczi
Subject: Re: [PATCH v4 11/14] vfio-user: IOMMU support for remote device
Date: Mon, 20 Dec 2021 14:36:52 +0000

On Fri, Dec 17, 2021 at 08:00:35PM +0000, Jag Raman wrote:
> > On Dec 16, 2021, at 9:40 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > 
> > On Wed, Dec 15, 2021 at 10:35:35AM -0500, Jagannathan Raman wrote:
> >> Assign separate address space for each device in the remote processes.
> > 
> > If I understand correctly this isn't really an IOMMU. It's abusing the
> > IOMMU APIs to create isolated address spaces for each device. This way
> > memory regions added by the vfio-user client do not conflict when there
> > are multiple vfio-user servers.
> 
> Like you already figured out, having an isolated DMA address space alone is
> not sufficient for this application; we also needed to isolate the sysmem/RAM
> address space. As such, the available IOMMU APIs alone were not sufficient,
> so we had to improvise.
> 
> > 
> > Calling pci_root_bus_new() and keeping one PCI bus per VfuObject might
> > be a cleaner approach:
> > - Lets you isolate both PCI Memory Space and IO Space.
> > - Isolates the PCIDevices and their addresses on the bus.
> > - Isolates irqs.
> > - No more need to abuse the IOMMU API.
> 
> I believe we would still need to have an IOMMU. It's because devices use the
> pci_dma_read()/_write() functions. These functions look up the address in the
> DMA address space (via pci_get_address_space() -> PCIDevice->bus_master_as ->
> PCIDevice->bus_master_enable_region -> PCIDevice->bus_master_container_region).
> bus_master_enable_region and bus_master_container_region are effectively
> aliases to the DMA address space - without an IOMMU, the dma_as would be the
> shared global sysmem/RAM space (address_space_mem, please see
> pci_init_bus_master()).

Good point, that code assumes there is a global address space. Creating
a fake IOMMU works around that assumption but it seems cleaner to
eliminate it:

  AddressSpace *pci_device_iommu_address_space(PCIDevice *dev)
  {
      ...
      if (!pci_bus_bypass_iommu(bus) && iommu_bus && iommu_bus->iommu_fn) {
          return iommu_bus->iommu_fn(bus, iommu_bus->iommu_opaque, devfn);
      }
      return &address_space_memory;
              ^^^^^^^^^^^^^^^^^^^^

When creating a PCI root bus, an AddressSpace argument could be provided,
just as the address_space_memory and address_space_io MemoryRegions
already are. Then the hardcoded return can be changed to something like:

  return bus->dma_address_space;
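
A rough sketch of that idea (the extra dma_as parameter and the
bus->dma_address_space field are hypothetical, not existing QEMU API; passing
NULL would preserve today's behaviour for existing callers):

  PCIBus *pci_root_bus_new(DeviceState *parent, const char *name,
                           MemoryRegion *address_space_mem,
                           MemoryRegion *address_space_io,
                           AddressSpace *dma_as, /* new: per-bus DMA space */
                           uint8_t devfn_min, const char *typename)
  {
      ...
      /* Fall back to the global address space when no per-bus DMA
       * address space is given, so existing callers keep working. */
      bus->dma_address_space = dma_as ? dma_as : &address_space_memory;
      ...
  }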

> >> @@ -332,6 +336,12 @@ static void vfu_object_register_bars(vfu_ctx_t *vfu_ctx, PCIDevice *pdev)
> >>                          vfu_object_bar_handlers[i],
> >>                          VFU_REGION_FLAG_RW, NULL, 0, -1, 0);
> >> 
> >> +        if ((o->pci_dev->io_regions[i].type & PCI_BASE_ADDRESS_SPACE) == 0) {
> >> +            memory_region_unref(o->pci_dev->io_regions[i].address_space);
> >> +            o->pci_dev->io_regions[i].address_space =
> >> +                remote_iommu_get_ram(o->pci_dev);
> >> +        }
> > 
> > This looks hacky. If you create a separate PCIHost for each device
> > instead then the BARs will be created in the MemoryRegion (confusingly
> > named "address_space" in the PCI code) of your choosing.
> 
> I was also not very comfortable with this - I added it very grudgingly out of
> necessity. Thank god this can go away with a separate bus for each device.

I talked to Kevin Wolf about having separate busses. qdev currently
requires each DeviceState to have a parent bus and each bus must have a
parent DeviceState. There is only one exception: a special check that
allows the global system bus (sysbus_get_default()) to be created
without a parent DeviceState.

This restriction probably needs to be loosened in order to support an
isolated PCIHost for each vfio-user server. The challenge is that
qdev_find_recursive() and monitor commands like device_add currently
only search the global system bus. Maybe new syntax is needed for the
multiple root bus case or the behavior of existing monitor commands
needs to be understood and extended without breaking anything.
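
For illustration, with that restriction relaxed each vfio-user server could
set up something along these lines (a sketch only; the mr_sysmem/mr_io fields
and the per-server pci_host DeviceState are hypothetical names, not from the
patch series):

  /* One isolated PCI root bus per vfio-user server.  o->pci_host stands in
   * for a per-server PCIHost DeviceState, which is exactly what the qdev
   * parent-bus restriction described above currently gets in the way of. */
  memory_region_init(&o->mr_sysmem, OBJECT(o), "vfu-sysmem", UINT64_MAX);
  memory_region_init(&o->mr_io, OBJECT(o), "vfu-io", 0x10000);

  o->pci_bus = pci_root_bus_new(DEVICE(o->pci_host), "vfu-pci",
                                &o->mr_sysmem, &o->mr_io,
                                0, TYPE_PCIE_BUS);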

> > 
> > Also, why is PCI Memory Space isolated via VFUIOMMU but PCI IO Space is
> > not?
> 
> If I understand correctly, the IO address space translates a sysmem address to
> direct device access (such as I2C). Once we are inside a device, we already
> have access to all parts of the device (unlike RAM, which sits outside the
> device). So I didn't think the device would go via the IOMMU to access IO. I
> also didn't see any other IOMMU translating IO address space accesses.

I reviewed how BARs are configured with VFIO:

1. When the guest writes to the vfio-pci PCIDevice's Configuration Space
   the write is forwarded to the VFIO device (i.e. vfio-user or VFIO
   kernel ioctl).

2. The vfio-user server receives the Configuration Space write and
   forwards it to pci_dev (the PCIDevice we're serving up). BAR mappings
   are updated in the vfio-user server so the BAR MemoryRegions are
   mapped/unmapped at the locations given by the guest.

This applies for both Memory and IO Space accesses.
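
For reference, a minimal sketch of step 2 on the server side (the callback
name vfu_cfg_access is illustrative; pci_host_config_read_common()/
pci_host_config_write_common() are the QEMU helpers that forward the access
to the PCIDevice):

  static ssize_t vfu_cfg_access(vfu_ctx_t *vfu_ctx, char *buf, size_t count,
                                loff_t offset, bool is_write)
  {
      VfuObject *o = vfu_get_private(vfu_ctx);
      uint32_t limit = pci_config_size(o->pci_dev);

      if (is_write) {
          /* Writing the BAR registers triggers pci_update_mappings(), which
           * maps/unmaps the BAR MemoryRegions at the guest-chosen addresses. */
          pci_host_config_write_common(o->pci_dev, offset, limit,
                                       ldn_le_p(buf, count), count);
      } else {
          stn_le_p(buf, count,
                   pci_host_config_read_common(o->pci_dev, offset, limit,
                                               count));
      }

      return count;
  }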

Because this patch series does not isolate IO Space between VfuObject
instances, the MemoryRegions will collide when two guests map the IO Space
BARs of different devices at the same IO Space address. In other words,
vfu_object_bar_rw() uses the global address_space_io, and that means
collisions can occur.
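
Conceptually the clash looks like this (a simplified sketch; the BAR variable
names are illustrative):

  /* Two servers, two guests, and both guests program an IO BAR to 0xc000.
   * Both BAR MemoryRegions land in the one global IO Space, so the second
   * mapping overlaps and shadows the first. */
  memory_region_add_subregion_overlap(get_system_io(), 0xc000, bar_of_dev_a, 1);
  memory_region_add_subregion_overlap(get_system_io(), 0xc000, bar_of_dev_b, 1);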

Stefan
