Re: [RFC PATCH v9 20/23] vdpa: Buffer CVQ support on shadow virtqueue
From: Eugenio Perez Martin
Subject: Re: [RFC PATCH v9 20/23] vdpa: Buffer CVQ support on shadow virtqueue
Date: Thu, 14 Jul 2022 19:37:06 +0200

On Thu, Jul 14, 2022 at 9:04 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Jul 14, 2022 at 2:54 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > > > > +static void vhost_vdpa_net_handle_ctrl_used(VhostShadowVirtqueue *svq,
> > > > > +                                             void *vq_elem_opaque,
> > > > > +                                             uint32_t dev_written)
> > > > > +{
> > > > > +    g_autoptr(CVQElement) cvq_elem = vq_elem_opaque;
> > > > > +    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
> > > > > +    const struct iovec out = {
> > > > > +        .iov_base = cvq_elem->out_data,
> > > > > +        .iov_len = cvq_elem->out_len,
> > > > > +    };
> > > > > +    const DMAMap status_map_needle = {
> > > > > +        .translated_addr = (hwaddr)(uintptr_t)cvq_elem->in_buf,
> > > > > +        .size = sizeof(status),
> > > > > +    };
> > > > > +    const DMAMap *in_map;
> > > > > +    const struct iovec in = {
> > > > > +        .iov_base = &status,
> > > > > +        .iov_len = sizeof(status),
> > > > > +    };
> > > > > +    g_autofree VirtQueueElement *guest_elem = NULL;
> > > > > +
> > > > > +    if (unlikely(dev_written < sizeof(status))) {
> > > > > +        error_report("Insufficient written data (%llu)",
> > > > > +                     (long long unsigned)dev_written);
> > > > > +        goto out;
> > > > > +    }
> > > > > +
> > > > > +    in_map = vhost_iova_tree_find_iova(svq->iova_tree, &status_map_needle);
> > > > > +    if (unlikely(!in_map)) {
> > > > > +        error_report("Cannot locate out mapping");
> > > > > +        goto out;
> > > > > +    }
> > > > > +
> > > > > +    switch (cvq_elem->ctrl.class) {
> > > > > +    case VIRTIO_NET_CTRL_MAC_ADDR_SET:
> > > > > +        break;
> > > > > +    default:
> > > > > +        error_report("Unexpected ctrl class %u", cvq_elem->ctrl.class);
> > > > > +        goto out;
> > > > > +    };
> > > > > +
> > > > > +    memcpy(&status, cvq_elem->in_buf, sizeof(status));
> > > > > +    if (status != VIRTIO_NET_OK) {
> > > > > +        goto out;
> > > > > +    }
> > > > > +
> > > > > +    status = VIRTIO_NET_ERR;
> > > > > +    virtio_net_handle_ctrl_iov(svq->vdev, &in, 1, &out, 1);
> > > >
> > > >
> > > > I wonder if this is the best choice. It looks to me it might be better
> > > > to extend the virtio_net_handle_ctrl_iov() logic:
> > > >
> > > > virtio_net_handle_ctrl_iov() {
> > > >     if (svq enabled) {
> > > >         host_elem = iov_copy(guest_elem);
> > > >         vhost_svq_add(host_elem);
> > > >         vhost_svq_poll(host_elem);
> > > >     }
> > > >     // userspace ctrl vq logic
> > > > }
> > > >
> > > >
> > > > This can help to avoid coupling too much logic in cvq (like the
> > > > avail, used, and detach ops).
> > > >
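
For comparison while we evaluate it, here is a self-contained toy in
plain C that mimics that shape. Every type and helper below is a
stand-in invented for illustration; none of it is the actual QEMU API:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint8_t virtio_net_ctrl_ack;
enum { VIRTIO_NET_OK = 0, VIRTIO_NET_ERR = 1 };

/* Stand-in for a shadow virtqueue; the "device" just acks commands. */
typedef struct { int enabled; } ToySvq;

/* Mimics vhost_svq_add() + vhost_svq_poll(): forward the command to
 * the device and wait for the status it writes back. */
static virtio_net_ctrl_ack toy_svq_add_and_poll(ToySvq *svq,
                                                const void *cmd,
                                                size_t len)
{
    (void)svq; (void)cmd; (void)len;
    return VIRTIO_NET_OK; /* pretend the device accepted it */
}

/* Mimics the existing userspace ctrl vq logic: update the model. */
static virtio_net_ctrl_ack toy_userspace_ctrl(const void *cmd, size_t len)
{
    (void)cmd;
    printf("model updated with a %zu byte command\n", len);
    return VIRTIO_NET_OK;
}

/* The suggested shape: one entry point that first lets the device see
 * the command (when svq is enabled), then runs the userspace logic. */
static virtio_net_ctrl_ack toy_handle_ctrl(ToySvq *svq,
                                           const void *guest_cmd,
                                           size_t len)
{
    uint8_t host_copy[64]; /* mimics iov_copy() into svq-owned memory */

    if (len > sizeof(host_copy)) {
        return VIRTIO_NET_ERR;
    }
    memcpy(host_copy, guest_cmd, len);

    if (svq->enabled &&
        toy_svq_add_and_poll(svq, host_copy, len) != VIRTIO_NET_OK) {
        return VIRTIO_NET_ERR; /* device rejected: leave the model alone */
    }
    return toy_userspace_ctrl(host_copy, len);
}

int main(void)
{
    ToySvq svq = { .enabled = 1 };
    uint8_t mac_set_cmd[8] = {0};

    return toy_handle_ctrl(&svq, mac_set_cmd, sizeof(mac_set_cmd)) ==
           VIRTIO_NET_OK ? 0 : 1;
}

The property worth noting is the ordering: the device acks first and
the model only changes on success, while the avail/used/detach
handling stays out of the net-specific code, which I understand is
the decoupling you are after.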
> > >
> > > Let me try that way and I'll come back to you.
> > >
> >
> > The problem with that approach is that virtio_net_handle_ctrl_iov is
> > called from the SVQ used handler. How else could we call it? I find
> > it pretty hard to do unless we return SVQ to the model where we used
> > VirtQueue.handle_output, which was discarded long ago.
>
> I'm not sure I get this. Can we simply let the cvq be trapped, the
> same way the current userspace datapath does?
>
Sending a very early draft RFC with that method, so we can compare
whether it is worth the trouble.
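
Roughly, the trapped flow that draft will try looks like this (again
a toy with made-up names, only meant to frame the comparison):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint8_t virtio_net_ctrl_ack;
enum { VIRTIO_NET_OK = 0, VIRTIO_NET_ERR = 1 };

/* Stand-in for the real device behind the shadow virtqueue. */
static virtio_net_ctrl_ack toy_device_exec(const uint8_t *cmd, size_t len)
{
    (void)cmd; (void)len;
    return VIRTIO_NET_OK;
}

/* Trapped cvq, as in the userspace datapath: qemu sees every command,
 * forwards it to the device, and writes the ack back to the guest. */
static void toy_trapped_cvq_kick(const uint8_t *guest_out, size_t out_len,
                                 virtio_net_ctrl_ack *guest_in)
{
    uint8_t bounce[64]; /* the device only sees qemu-owned memory */
    virtio_net_ctrl_ack status = VIRTIO_NET_ERR;

    if (out_len <= sizeof(bounce)) {
        memcpy(bounce, guest_out, out_len);
        status = toy_device_exec(bounce, out_len);
        /* a real version would also update qemu's model here when
         * status == VIRTIO_NET_OK */
    }
    *guest_in = status; /* completes the guest's control command */
}

int main(void)
{
    uint8_t cmd[8] = {0};
    virtio_net_ctrl_ack ack = VIRTIO_NET_ERR;

    toy_trapped_cvq_kick(cmd, sizeof(cmd), &ack);
    printf("guest sees status %u\n", (unsigned)ack);
    return ack == VIRTIO_NET_OK ? 0 : 1;
}

Both toys keep the same ordering (device first, model second); the
draft should show whether the trapped version is actually simpler in
the real code.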
Thanks!