Re: [Qemu-devel] [RFC] Device isolation infrastructure v2
From: Alex Williamson
Subject: Re: [Qemu-devel] [RFC] Device isolation infrastructure v2
Date: Thu, 15 Dec 2011 21:49:05 -0700
On Fri, 2011-12-16 at 12:40 +1100, David Gibson wrote:
> On Thu, Dec 15, 2011 at 11:05:07AM -0700, Alex Williamson wrote:
> > On Thu, 2011-12-15 at 17:25 +1100, David Gibson wrote:
> > > Here's the second spin of my preferred approach to handling grouping
> > > of devices for safe assignment to guests.
> > >
> > > Changes since v1:
> > > * Many name changes and file moves for improved consistency
> > > * Bugfixes and cleanups
> > > * The interface to the next layer up is considerably fleshed out,
> > > although it still needs work.
> > > * Example initialization of groups for p5ioc2 and p7ioc.
> > >
> > > TODO:
> > > * Need sample initialization of groups for intel and/or amd iommus
> >
> > I think this very well might impose significant bloat for those
> > implementations. On POWER you typically don't have singleton groups,
> > while it's the norm on x86. I don't know that either intel or amd iommu
> > code has existing structures that they can simply tack the group
> > pointer to.
>
> Actually, I think they can probably just use the group pointer in the
> struct device. Each PCI function will typically allocate a new group
> and put the pointer in the struct device and nowhere else. Devices
> hidden under bridges copy the pointer from the bridge parent instead.
> I will have to check the unplug path to ensure we can manage the group
> lifetime properly, of course.
>
> > Again, this is one of the reasons that I think the current
> > vfio implementation is the right starting point. We keep groups within
> > vfio, imposing zero overhead for systems not making use of it and only
> > require iommu drivers to implement a trivial function to opt-in. As we
> > start to make groups more pervasive in the dma layer, independent of
> > userspace driver exposure, we can offload pieces to the core. Starting
> > with it in the core and hand waving some future use that we don't plan
> > to implement right now seems like the wrong direction.
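As a rough illustration of the "trivial function to opt-in" idea, the sketch below (plain C, stand-in types; the device_group hook and the intel_like naming are assumptions for illustration, not the actual VFIO or IOMMU-API code) shows an IOMMU driver reporting a group identifier per device, while a driver that never fills in the hook exposes no groups and pays no cost:

    #include <stdio.h>

    /* Stand-in for a device as seen by an IOMMU driver; not the
     * kernel's struct device. */
    struct fake_device {
        const char *name;
        unsigned int requester_id;      /* e.g. PCI bus/dev/fn */
    };

    /* Hypothetical opt-in hook: an IOMMU driver that can isolate
     * devices reports a group identifier for each one. */
    struct fake_iommu_ops {
        int (*device_group)(struct fake_device *dev, unsigned int *groupid);
    };

    /* An x86-style implementation: each function is its own group, so
     * the requester ID can double as the group ID. */
    static int intel_like_device_group(struct fake_device *dev,
                                       unsigned int *groupid)
    {
        *groupid = dev->requester_id;
        return 0;
    }

    static const struct fake_iommu_ops intel_like_ops = {
        .device_group = intel_like_device_group,
    };

    int main(void)
    {
        struct fake_device nic = { .name = "nic", .requester_id = 0x0300 };
        unsigned int group;

        if (intel_like_ops.device_group &&
            intel_like_ops.device_group(&nic, &group) == 0)
            printf("%s is in group %#x\n", nic.name, group);
        else
            printf("%s: iommu driver did not opt in, no group exposed\n",
                   nic.name);
        return 0;
    }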
>
> Well, I think we must agree to disagree here; I think treating groups
> as identifiable objects is worthwhile. That said, I am looking for
> ways to whittle down the overhead when they're not in use.
>
> > > * Use of sysfs attributes to control group permission is probably a
> > > mistake. Although it seems a bit odd, registering a chardev for
> > > each group is probably better, because perms can be set from udev
> > > rules, just like everything else.
> >
> > I agree, this is a horrible mistake. Reinventing file permissions via
> > sysfs is bound to be broken and doesn't account for selinux permissions.
> > Again, I know you don't like aspects of the vfio group management, but
> > it gets this right imho.
>
> Yeah. I came up with this because I was trying to avoid registering a
> device whose only purpose was to act as a permissioned "handle" on the
> group. But it is a better approach, despite that. I just wanted to
> send out the new patches for comment without waiting to do that
> rework.
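For reference, the kind of udev rule a chardev-per-group approach enables might look like the following. The subsystem and node names here are assumptions for illustration only; the posted patches don't define them:

    # Assumption: group chardevs appear under a hypothetical "isolation"
    # subsystem as group<N> nodes, and a unix group "vfio" exists.
    SUBSYSTEM=="isolation", KERNEL=="group*", GROUP="vfio", MODE="0660"

The point is simply that ownership, mode and any SELinux labeling then fall out of the existing device-node machinery instead of being reinvented through sysfs attributes.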
So we end up with a chardev created by the core, whose only purpose is
setting the group access permissions for userspace usage, which only
becomes useful with something like vfio? It's an odd conflict that
isolation groups would get involved with userspace permissions to access
the group, but leave enforcement of isolation via iommu groups to the
"binder" driver (where it seems like vfio is still going to need some
kind of merge interface to share a domain between isolation groups).
Is this same chardev going to be a generic conduit for
read/write/mmap/ioctl to the "binder" driver or does vfio need to create
its own chardev for that? In the former case, are we ok with a chardev
that has an entirely modular API behind it, or maybe you're planning to
define some basic API infrastructure, in which case this starts smelling
like implementing vfio in the core. In the latter case (isolation
chardev + vfio chardev) coordinating permissions sounds like a mess.
Alex
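To make the "former case" above concrete, here is a heavily simplified sketch (plain C, stand-in types, every name hypothetical) of a core-owned group chardev that merely forwards its file operations to whichever "binder" driver has claimed the group; nothing here is taken from the posted patches or from vfio:

    #include <stdio.h>

    /* Stand-in file-operations table a "binder" driver (vfio-like)
     * would register when it claims a group. */
    struct binder_fops {
        long (*ioctl)(void *priv, unsigned int cmd, unsigned long arg);
    };

    /* The core-owned group chardev keeps only a pointer to whichever
     * binder currently owns the group and forwards everything to it. */
    struct group_chardev {
        const struct binder_fops *binder;
        void *binder_priv;
    };

    static long group_ioctl(struct group_chardev *cdev,
                            unsigned int cmd, unsigned long arg)
    {
        if (!cdev->binder || !cdev->binder->ioctl)
            return -1;          /* nothing bound behind the chardev */
        return cdev->binder->ioctl(cdev->binder_priv, cmd, arg);
    }

    /* A trivial vfio-like binder, purely for illustration. */
    static long vfio_like_ioctl(void *priv, unsigned int cmd, unsigned long arg)
    {
        (void)priv;
        (void)arg;
        printf("binder handling ioctl %#x\n", cmd);
        return 0;
    }

    static const struct binder_fops vfio_like_fops = { .ioctl = vfio_like_ioctl };

    int main(void)
    {
        struct group_chardev cdev = { .binder = &vfio_like_fops };

        group_ioctl(&cdev, 0x3b01, 0);      /* arbitrary command number */
        return 0;
    }

In this sketch the forwarding table is the only piece the core would need to define; everything behind it stays in the binder driver.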