Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 1/2] virtio: add vhost-user-fs base device
Date: Tue, 17 Sep 2019 10:21:41 +0100
User-agent: Mutt/1.12.1 (2019-06-15)

* Stefan Hajnoczi (address@hidden) wrote:
> On Wed, Aug 21, 2019 at 08:11:18PM +0100, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (address@hidden) wrote:
> > > On Fri, Aug 16, 2019 at 03:33:20PM +0100, Dr. David Alan Gilbert (git) 
> > > wrote:
> > > > +static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> > > > +{
> > > > +    /* Do nothing */
> > > 
> > > Why is this safe?  Is this because this never triggers?  assert(0) then?
> > > If it triggers then the backend won't be notified, which might
> > > cause it to get stuck.
> > 
> > We never process these queues in qemu - always in the guest; so am I
> > correct in thinking those shouldn't be used?
> 
> s/guest/vhost-user backend process/
> 
> vuf_handle_output() should never be called.

It turns out it does get called in one case during cleanup: when the daemon
has died before qemu, virtio_bus_cleanup_host_notifier walks the notifiers
and calls the handler for any that still have something left in the
eventfd.
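
So the handler needs to stay a no-op rather than an assert; something along
these lines (just illustrating the reasoning, not a change to the patch):

static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    /*
     * The queues are processed by the vhost-user backend, never by QEMU,
     * so there is normally nothing to do here.  Don't assert: this can
     * still be reached from virtio_bus_cleanup_host_notifier() when the
     * daemon exited before QEMU and the eventfd still has data in it.
     */
}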

Dave

> > > > +}
> > > > +
> > > > +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> > > > +                                            bool mask)
> > > > +{
> > > > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > > +
> > > > +    vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
> > > > +}
> > > > +
> > > > +static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
> > > > +{
> > > > +    VHostUserFS *fs = VHOST_USER_FS(vdev);
> > > > +
> > > > +    return vhost_virtqueue_pending(&fs->vhost_dev, idx);
> > > > +}
> > > > +
> > > > +static void vuf_device_realize(DeviceState *dev, Error **errp)
> > > > +{
> > > > +    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > +    VHostUserFS *fs = VHOST_USER_FS(dev);
> > > > +    unsigned int i;
> > > > +    size_t len;
> > > > +    int ret;
> > > > +
> > > > +    if (!fs->conf.chardev.chr) {
> > > > +        error_setg(errp, "missing chardev");
> > > > +        return;
> > > > +    }
> > > > +
> > > > +    if (!fs->conf.tag) {
> > > > +        error_setg(errp, "missing tag property");
> > > > +        return;
> > > > +    }
> > > > +    len = strlen(fs->conf.tag);
> > > > +    if (len == 0) {
> > > > +        error_setg(errp, "tag property cannot be empty");
> > > > +        return;
> > > > +    }
> > > > +    if (len > sizeof_field(struct virtio_fs_config, tag)) {
> > > > +        error_setg(errp, "tag property must be %zu bytes or less",
> > > > +                   sizeof_field(struct virtio_fs_config, tag));
> > > > +        return;
> > > > +    }
> > > > +
> > > > +    if (fs->conf.num_queues == 0) {
> > > > +        error_setg(errp, "num-queues property must be larger than 0");
> > > > +        return;
> > > > +    }
> > > 
> > > The strange thing is that the actual # of queues is this number + 2.
> > > And this affects the optimal number of vectors (see patch 2).
> > > Not sure what a good solution is - include the
> > > mandatory queues in the number?
> > > It needs to be documented in some way.
> > 
> > Should we be doing nvectors the same way virtio-scsi-pci does it,
> > with a magic 'unspecified' default where it sets nvectors based on
> > the number of queues?
> > 
> > I think my preference is not to show the users the mandatory queues.
> 
> I agree.  Users want to control multiqueue, not the absolute number
> of virtqueues including the mandatory queues.
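
For reference, the virtio-scsi-pci pattern looks roughly like this (a sketch
from memory; the fs proxy type/field names here are just assumed from patch 2,
and the exact "+N" for the mandatory queues and config vector would need
checking against that patch):

/* In the -pci proxy's property list: default "vectors" to unspecified. */
DEFINE_PROP_UINT32("vectors", VirtIOPCIProxy, nvectors,
                   DEV_NVECTORS_UNSPECIFIED),

static void vhost_user_fs_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
{
    VHostUserFSPCI *dev = VHOST_USER_FS_PCI(vpci_dev);

    if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
        /* request queues + the mandatory queues + 1 for the config vector */
        vpci_dev->nvectors = dev->vdev.conf.num_queues + 2 + 1;
    }

    /* ... rest of realize as in patch 2 ... */
}

That keeps the user-visible num-queues property to just the request queues
while the vector count still covers everything.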
> 
> > > > +
> > > > +    if (!is_power_of_2(fs->conf.queue_size)) {
> > > > +        error_setg(errp, "queue-size property must be a power of 2");
> > > > +        return;
> > > > +    }
> > > 
> > > Hmm packed ring allows non power of 2 ...
> > > We need to look into a generic helper to support VQ
> > > size checks.
> > 
> > Which would also have to take into account the negotiation of whether
> > it's doing packed ring?
> 
> It's impossible to perform this check at .realize() time since the
> packed virtqueue layout is negotiated via a VIRTIO feature bit.  This
> puts us in the awkward position of either failing when the guest has
> already booted or rounding up the queue size for split ring layouts
> (with a warning message?).
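
For the round-up option, the check in vuf_device_realize() would end up
looking something like this (just a sketch, using pow2ceil()/warn_report()):

    if (!is_power_of_2(fs->conf.queue_size)) {
        warn_report("queue-size %u is not a power of 2, rounding up to %u "
                    "for split ring compatibility",
                    fs->conf.queue_size,
                    (unsigned)pow2ceil(fs->conf.queue_size));
        fs->conf.queue_size = pow2ceil(fs->conf.queue_size);
    }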


--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


