Re: [Qemu-devel] [RFC v2 0/3] intel_iommu: support scalable mode


From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC v2 0/3] intel_iommu: support scalable mode
Date: Fri, 1 Mar 2019 07:30:49 +0000

> From: Yi Sun [mailto:address@hidden
> Sent: Friday, March 1, 2019 3:13 PM
> 
> On 19-03-01 15:07:34, Peter Xu wrote:
> > On Thu, Feb 28, 2019 at 09:47:54PM +0800, Yi Sun wrote:
> > > Intel vt-d rev3.0 [1] introduces a new translation mode called
> > > 'scalable mode', which enables PASID-granular translations for
> > > first level, second level, nested and pass-through modes. The
> > > vt-d scalable mode is the key ingredient to enable Scalable I/O
> > > Virtualization (Scalable IOV) [2] [3], which allows sharing a
> > > device in minimal possible granularity (ADI - Assignable Device
> > > Interface). As a result, the previous Extended Context (ECS) mode
> > > is deprecated (no production hardware ever implemented ECS).
> > >
> > > This patch set emulates a minimal capability set of VT-d scalable
> > > mode, equivalent to what is available in VT-d legacy mode today:
> > >     1. Scalable mode root entry, context entry and PASID table
> > >     2. Second level translation under scalable mode
> > >     3. Queued invalidation (with 256 bits descriptor)
> > >     4. Pass-through mode
> > >
> > > Corresponding intel-iommu driver support will be included in
> > > kernel 5.0:
> > >     https://www.spinics.net/lists/kernel/msg2985279.html
> > >
> > > We will add emulation of the full scalable mode capability later,
> > > following guest iommu driver progress, e.g.:
> > >     1. First level translation
> > >     2. Nested translation
> > >     3. Per-PASID invalidation descriptors
> > >     4. Page request services for handling recoverable faults
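
For illustration, here is a minimal sketch of how a guest could be launched against the emulated scalable mode IOMMU. This is an assumption-laden example, not a command line taken from the series: the x-scalable-mode property name is the spelling that later landed upstream (the RFC may name the option differently), and the machine and netdev details are placeholders.

    qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
        -device intel-iommu,intremap=on,x-scalable-mode=on \
        -device virtio-net-pci,netdev=net0,iommu_platform=on,ats=on \
        -netdev user,id=net0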
> > >
> > > To verify the patches, the cases below were tested, following Peter Xu's
> > > suggestions.
> > >     
> > >     +---------+----------------------------------------------------------------+----------------------------------------------------------------+
> > >     |         |                      w/ Device Passthr                         |                     w/o Device Passthr                         |
> > >     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
> > >     |         | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      | virtio-net-pci, vhost=on      | virtio-net-pci, vhost=off      |
> > >     |         +-------------------------------+--------------------------------+-------------------------------+--------------------------------+
> > >     |         | netperf | kernel bld | data cp| netperf | kernel bld | data cp | netperf | kernel bld | data cp| netperf | kernel bld | data cp |
> > >     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
> > >     | Legacy  | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
> > >     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
> > >     | Scalable| Pass    | Pass       | Pass   | Pass    | Pass       | Pass    | Pass    | Pass       | Pass   | Pass    | Pass       | Pass    |
> > >     +---------+-------------------------------+--------------------------------+-------------------------------+--------------------------------+
> >
> > Hi, Yi,
> >
> > Thanks very much for the thorough test matrix!
> >
> Thanks for the review and comments! :)
> 
> > The last thing I'd like to confirm is: have you tested device
> > assignment with v2?  And note that when you test with virtio devices
> 
> Yes, I tested an mdev assignment, which exercises the scalable mode
> code paths of these patches (both kernel and QEMU).

Not just mdev, though. You should also try a physical PCI endpoint device.

> 
> > you should not need caching-mode=on (though caching-mode=on should not
> > break anything either).
> >
> For virtio-net-pci without device assignment, I did not use
> "caching-mode=on".
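
As a hedged illustration of the distinction above (again assuming the upstream property spellings; the vfio-pci host address is a placeholder): a virtio device behind the vIOMMU works without caching-mode, while device assignment needs caching-mode=on so that guest mapping changes are propagated to the host IOMMU through VFIO.

    # virtio-net-pci only: caching-mode=on not required
    -device intel-iommu,intremap=on,x-scalable-mode=on \
    -device virtio-net-pci,netdev=net0,iommu_platform=on,ats=on

    # with an assigned device (vfio-pci or a vfio mdev): caching-mode=on required
    -device intel-iommu,intremap=on,x-scalable-mode=on,caching-mode=on \
    -device vfio-pci,host=0000:03:00.0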
> 
> > I've still got some comments here and there but it looks very good at
> > least to me overall.
> >
> > Thanks,
> >
> > --
> > Peter Xu
