Re: [RFC 1/2] virtio: expose used buffers
From: Guo Zhi
Subject: Re: [RFC 1/2] virtio: expose used buffers
Date: Fri, 26 Aug 2022 11:05:41 +0800 (CST)
----- Original Message -----
> From: "jasowang" <jasowang@redhat.com>
> To: "Guo Zhi" <qtxuning1999@sjtu.edu.cn>
> Cc: "eperezma" <eperezma@redhat.com>, "sgarzare" <sgarzare@redhat.com>,
> "Michael Tsirkin" <mst@redhat.com>,
> "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>
> Sent: Thursday, August 25, 2022 2:06:11 PM
> Subject: Re: [RFC 1/2] virtio: expose used buffers
> On Thu, Aug 18, 2022 at 11:13 PM Guo Zhi <qtxuning1999@sjtu.edu.cn> wrote:
>>
>> Following the VIRTIO 1.1 spec, we can write out just a single used ring
>> entry for a batch of descriptors, and only notify the guest once the
>> whole batch of descriptors has been used.
>
> Yes, but I don't see anything that is related to the "exposing used
> buffers", it looks more like batching used buffers etc.
>
>>
>> We apply that batching to tx, because the driver doesn't need to know
>> the length of the tx buffers; for rx, we don't apply the batching strategy.
>>
>> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
>> ---
>> hw/net/virtio-net.c | 29 ++++++++++++++++++++++++++---
>> 1 file changed, 26 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index dd0d056f..c8e83921 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -2542,8 +2542,10 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>> VirtIONet *n = q->n;
>> VirtIODevice *vdev = VIRTIO_DEVICE(n);
>> VirtQueueElement *elem;
>> + VirtQueueElement *elems[VIRTQUEUE_MAX_SIZE];
>> int32_t num_packets = 0;
>> int queue_index = vq2q(virtio_get_queue_index(q->tx_vq));
>> + size_t j;
>> if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
>> return num_packets;
>> }
>> @@ -2621,14 +2623,35 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>> }
>>
>> drop:
>> - virtqueue_push(q->tx_vq, elem, 0);
>> - virtio_notify(vdev, q->tx_vq);
>> - g_free(elem);
>> + if (!virtio_vdev_has_feature(vdev, VIRTIO_F_IN_ORDER)) {
>> + virtqueue_push(q->tx_vq, elem, 0);
>> + virtio_notify(vdev, q->tx_vq);
>> + g_free(elem);
>> + } else {
>> + elems[num_packets] = elem;
>> + }
>>
>> if (++num_packets >= n->tx_burst) {
>> break;
>> }
>> }
>> +
>> + if (virtio_vdev_has_feature(vdev, VIRTIO_F_IN_ORDER) && num_packets) {
>> +    /**
>> +     * If in order feature negotiated, devices can notify the use of
>> +     * a batch of buffers to the driver by only writing out a single
>> +     * used ring entry with the id corresponding to the head entry of
>> +     * the descriptor chain describing the last buffer in the batch.
>> +     */
>> + virtqueue_fill(q->tx_vq, elems[num_packets - 1], 0, 0);
>
> So virtqueue_fill will call virtqueue_unmap_sg(); won't this cause
> mappings to be leaked?
>
virtqueue_push is already virtqueue_fill + virtqueue_flush, and I will call
virtqueue_detach_element for the exposed buffers.
> And I think placing the batching in device-specific code seems
> sub-optimal, e.g. if we want to implement in-order in the block device
> we need to duplicate the logic here.
>
Only the device knows whether batching is possible. We have the same problem
as in vhost-kernel in this regard, and it was proposed to do it per-device
there.
> Thanks
>
>> + for (j = 0; j < num_packets; j++) {
>> + g_free(elems[j]);
>> + }
>> +
>> + virtqueue_flush(q->tx_vq, num_packets);
>> + virtio_notify(vdev, q->tx_vq);
>> + }
>> +
>> return num_packets;
>> }
>>
>> --
>> 2.17.1
>>