qemu-devel
From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH v4 09/11] virtio-net: update the head descriptor in a chain lastly
Date: Tue, 19 Feb 2019 21:09:33 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0


On 2019/2/19 6:51 PM, Wei Xu wrote:
On Tue, Feb 19, 2019 at 03:23:01PM +0800, Jason Wang wrote:
On 2019/2/14 12:26 PM, address@hidden wrote:
From: Wei Xu <address@hidden>

This is a helper for the packed ring.

To support the packed ring, the head descriptor in a chain should be
updated last: there is no 'avail_idx' as in the split ring to explicitly
tell the driver side that the whole payload is ready, so the head
descriptor becomes visible to the driver as soon as it is written.

This patch therefore fills in the head descriptor only after all the
other descriptors in the chain have been filled.

Signed-off-by: Wei Xu <address@hidden>

It's really odd to work around an API issue in the implementation of a
device. Please introduce batched used updating helpers instead.
Can you elaborate a bit more? I don't quite get it.

The exact batching done by vhost-net or the dpdk pmd is not supported
by the userspace backend. The change here just keeps the head
descriptor updated last in the case of a descriptor chain, so such a
helper might not help much.

Wei


Of course we can add batching support, why not?

Your code assumes the device knows virtio-layout-specific details,
which breaks the layering. The device should not care about the actual
ring layout.

Thanks


Thanks


---
  hw/net/virtio-net.c | 11 ++++++++++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 3f319ef..330abea 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1251,6 +1251,8 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
      struct virtio_net_hdr_mrg_rxbuf mhdr;
      unsigned mhdr_cnt = 0;
      size_t offset, i, guest_offset;
+    VirtQueueElement head;
+    int head_len = 0;
      if (!virtio_net_can_receive(nc)) {
          return -1;
@@ -1328,7 +1330,13 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
          }
          /* signal other side */
-        virtqueue_fill(q->rx_vq, elem, total, i++);
+        if (i == 0) {
+            head_len = total;
+            head = *elem;
+        } else {
+            virtqueue_fill(q->rx_vq, elem, total, i);
+        }
+        i++;
          g_free(elem);
      }
@@ -1339,6 +1347,7 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
                       &mhdr.num_buffers, sizeof mhdr.num_buffers);
      }
+    virtqueue_fill(q->rx_vq, &head, head_len, 0);
      virtqueue_flush(q->rx_vq, i);
      virtio_notify(vdev, q->rx_vq);


