From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH] qemu: Introduce VIRTIO_NET_F_STANDBY feature bit to virtio_net
Date: Tue, 26 Jun 2018 16:54:51 +0300

On Tue, Jun 26, 2018 at 01:55:09PM +0200, Cornelia Huck wrote:
> On Tue, 26 Jun 2018 04:46:03 +0300
> "Michael S. Tsirkin" <address@hidden> wrote:
> 
> > On Mon, Jun 25, 2018 at 11:55:12AM +0200, Cornelia Huck wrote:
> > > On Fri, 22 Jun 2018 22:05:50 +0300
> > > "Michael S. Tsirkin" <address@hidden> wrote:
> > >   
> > > > On Fri, Jun 22, 2018 at 05:09:55PM +0200, Cornelia Huck wrote:  
> > > > > On Thu, 21 Jun 2018 21:20:13 +0300
> > > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > >     
> > > > > > On Thu, Jun 21, 2018 at 04:59:13PM +0200, Cornelia Huck wrote:    
> > > > > > > OK, so what about the following:
> > > > > > > 
> > > > > > > - introduce a new feature bit, VIRTIO_NET_F_STANDBY_UUID that indicates
> > > > > > >   that we have a new uuid field in the virtio-net config space
> > > > > > > - in QEMU, add a property for virtio-net that allows specifying a uuid,
> > > > > > >   offer VIRTIO_NET_F_STANDBY_UUID if set
> > > > > > > - when configuring, set the property to the group UUID of the vfio-pci
> > > > > > >   device
> > > > > > > - in the guest, use the uuid from the virtio-net device's config space
> > > > > > >   if applicable; else, fall back to matching by MAC as done today
> > > > > > > 
> > > > > > > That should work for all virtio transports.      
> > > > > > 
> > > > > > True. I'm a bit unhappy that it's virtio net specific though
> > > > > > since down the road I expect we'll have a very similar feature
> > > > > > for scsi (and maybe others).
> > > > > > 
> > > > > > But we do not have a way to have fields that are portable
> > > > > > both across devices and transports, and I think it would
> > > > > > be a useful addition. How would this work though? Any idea?    
> > > > > 
> > > > > Can we introduce some kind of device-independent config space area?
> > > > > Pushing back the device-specific config space by a certain value if the
> > > > > appropriate feature is negotiated and use that for things like the uuid?
> > > > 
> > > > So config moves back and forth?
> > > > Reminds me of the msi vector mess we had with pci.  
> > > 
> > > Yes, that would be a bit unfortunate.
> > >   
> > > > I'd rather have every transport add a new config.  
> > > 
> > > You mean via different mechanisms?  
> > 
> > I guess so.
> 
> Is there an alternate mechanism for pci to use? (Not so familiar with
> it.)

We have a device and transport config capability.
We could add a generic config capability too.
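
For context: virtio-pci exposes each of its configuration structures through a
vendor-specific PCI capability tagged with a cfg_type, so a generic config area
would amount to defining one more capability type. A minimal sketch follows;
virtio_pci_cap and cfg_type values 1-5 are from the virtio 1.0 spec, while
VIRTIO_PCI_CAP_GENERIC_CFG and the uuid layout are purely illustrative
assumptions, not anything that has been specified.

    #include <stdint.h>

    /* virtio_pci_cap as defined by virtio 1.0 (multi-byte fields are
     * little-endian on the wire). */
    struct virtio_pci_cap {
            uint8_t  cap_vndr;    /* PCI_CAP_ID_VNDR */
            uint8_t  cap_next;    /* next capability pointer */
            uint8_t  cap_len;     /* capability length */
            uint8_t  cfg_type;    /* which structure this describes */
            uint8_t  bar;         /* BAR containing the structure */
            uint8_t  padding[3];
            uint32_t offset;      /* offset within the BAR */
            uint32_t length;      /* length of the structure, in bytes */
    };

    #define VIRTIO_PCI_CAP_COMMON_CFG 1   /* common configuration */
    #define VIRTIO_PCI_CAP_NOTIFY_CFG 2   /* notifications */
    #define VIRTIO_PCI_CAP_ISR_CFG    3   /* ISR status */
    #define VIRTIO_PCI_CAP_DEVICE_CFG 4   /* device-specific config */
    #define VIRTIO_PCI_CAP_PCI_CFG    5   /* PCI config access */

    /* Hypothetical: a device-independent config area, so fields like a
     * standby/group UUID need not be re-invented per device type.  The
     * value 6 is illustrative only, not assigned by the spec. */
    #define VIRTIO_PCI_CAP_GENERIC_CFG 6

    struct virtio_generic_cfg {
            uint8_t uuid[16];     /* group UUID for standby pairing */
    };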

> For ccw, this needs more thought. We already introduced two commands
> for reading/writing the config space (a concept that does not really
> exist on s390). There's the generic read configuration data command,
> but the data returned by it is not really generic enough. So we would
> need one new command (or two, if we need to write as well). I'm not
> sure about that yet.
> 
> > 
> > > >   
> > > > > But regardless of that, I'm not sure whether extending this approach to
> > > > > other device types is the way to go. Tying together two different
> > > > > devices is creating complicated situations at least in the hypervisor
> > > > > (even if it's fairly straightforward in the guest). [I have not yet
> > > > > gotten back to the "how to handle visibility in QEMU" questions due
> > > > > to lack of cycles, sorry about that.]
> > > > > 
> > > > > So, what's the goal of this approach? Only to allow migration with
> > > > > vfio-pci, or also to plug in a faster device and use it instead of an
> > > > > already attached paravirtualized device?    
> > > > 
> > > > These are two sides of the same coin; I think the second approach
> > > > is closer to what we are doing here.
> > > 
> > > Thinking about it, do we need any knob to keep the vfio device
> > > invisible if the virtio device is not present? IOW, how does the
> > > hypervisor know that the vfio device is supposed to be paired with a
> > > virtio device? It seems we need an explicit tie-in.  
> > 
> > If we are going the way of the bridge, both bridge and
> > virtio would have some kind of id.
> 
> So the presence of the id would indicate "this is one part of a pair"?

I guess so, yes.
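
As an illustration of what an id-based tie-in might look like on the QEMU
command line (the property names are assumptions for the sake of the sketch;
nothing had been designed at this point in the thread, though QEMU later
adopted a similar failover_pair_id property):

    qemu-system-x86_64 ... \
        -device virtio-net-pci,netdev=hostnet0,id=net0,failover=on \
        -device vfio-pci,host=0000:5e:00.2,id=hostdev0,failover_pair_id=net0

Here failover_pair_id on the vfio-pci device names its virtio-net partner,
which is how the hypervisor knows the two are supposed to be paired.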

> > 
> > When pairing using mac, I'm less sure. Pass the vfio device's mac to qemu
> > as a property?
> 
> That feels a bit odd. "This is the vfio device's mac, use this instead
> of your usual mac property"? As we have not designed the QEMU interface
> yet, just go with the id in any case? The guest can still match by mac.

OK
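
A minimal sketch of the matching rule being agreed on here, assuming a
hypothetical 16-byte uuid field in the virtio-net config space gated by a
VIRTIO_NET_F_STANDBY_UUID feature bit (the struct and helper below are
illustrative, not the actual guest failover code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct standby_ids {
            bool    has_uuid;   /* VIRTIO_NET_F_STANDBY_UUID negotiated? */
            uint8_t uuid[16];   /* group UUID from config space (hypothetical) */
            uint8_t mac[6];     /* permanent MAC address */
    };

    /* Prefer the UUID when both devices expose one; otherwise fall
     * back to matching by MAC, as the guest does today. */
    static bool standby_devices_match(const struct standby_ids *standby,
                                      const struct standby_ids *primary)
    {
            if (standby->has_uuid && primary->has_uuid)
                    return memcmp(standby->uuid, primary->uuid, 16) == 0;
            return memcmp(standby->mac, primary->mac, 6) == 0;
    }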

> > > > > What about migration of vfio devices that are not easily replaced by a
> > > > > paravirtualized device? I'm thinking of vfio-ccw, where our main (and
> > > > > currently only) supported device is dasd (disks) -- which can do a lot
> > > > > of specialized things that virtio-blk does not support (and should not
> > > > > or even cannot support).    
> > > > 
> > > > But maybe virtio-scsi can?  
> > > 
> > > I don't think so. Dasds have some channel commands that don't map
> > > easily to scsi commands.  
> > 
> > There's always a choice of adding these to the spec.
> > E.g. FC extensions were proposed; I don't remember why they
> > are still stuck.
> 
> FC extensions are a completely different kind of enhancement, though.
> For a start, they are not unique to a certain transport.
> 
> Also, we have a whole list of special dasd issues. Weird disk layout
> for eckd, low-level disk formatting, etc. (See the list of commands in
> drivers/s390/block/dasd_eckd.h for an idea. There's also no public
> documentation AFAICS; https://en.wikipedia.org/wiki/ECKD does not link
> to anything interesting.) I don't think we want to cram stuff like this
> into a completely different framework.


