From: Scott Tsai
Subject: Re: [Qemu-devel] qemu-kvm-0.11 regression, crashes on older guests with virtio network
Date: Thu, 29 Oct 2009 20:00:58 +0800
User-agent: Sup/0.9

Excerpts from Mark McLoughlin's message of Thu Oct 29 17:16:43 +0800 2009:
> Assuming this is something like the virtio-net in 2.6.26, there was no
> receivable buffers support so (as Scott points out) it must be that
> we've read a packet from the tap device which is >1514 bytes (or >1524
> bytes with IFF_VNET_HDR) but the guest has not supplied buffers which
> are large enough to take it

> One thing to check is that the tap device is being initialized by
> qemu-kvm using TUNSETOFFLOAD with either zero or TUN_F_CSUM - i.e. GSO
> should not be enabled, because the guest cannot handle large GSO packets

> Another possibility is that the MTU on the bridge in the host is too
> large and that's what's causing the large packets to be sent

Using Dustin's image, I see:
        virtio_net_set_features(features: 0x00000930)
        tap_set_offload(csum: 1, tso4: 1, tso6: 1, ecn: 1)
being called, and the MTU on virbr0 is 1500 when using his bridge.sh script.
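
The tap_set_offload() trace above corresponds to the TUNSETOFFLOAD ioctl
Mark mentions, so GSO offloads (tso4/tso6) are indeed being enabled here.
For reference, a csum-only setup would come down to something like this
(a sketch only -- 'tap_fd' and the helper name are illustrative, not
qemu's actual code):

        #include <sys/ioctl.h>
        #include <linux/if_tun.h>

        /* allow checksum offload only -- no TSO/ECN -- so the tap device
         * never hands us packets larger than the guest can receive */
        static int tap_disable_gso(int tap_fd)
        {
            unsigned int offload = TUN_F_CSUM; /* or 0 to disable csum too */
            return ioctl(tap_fd, TUNSETOFFLOAD, offload);
        }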

virtio_net_receive2 was trying to transfer a 1534 byte packet (1524 'size'
+ 10 'virtio_net_hdr'), but the guest only had 1524 bytes of space in its
input descriptors.

BTW, I can also reproduce this running Dustin's image inside Fedora 11's 
qemu-0.10.6-9.fc11.x86_64.

The patch I posted earlier only applies to the 0.10 branch; here's a patch
that compiles for 0.11:

From 06aa7db0705cf747c35cbcbd09d0e37713f16fe4 Mon Sep 17 00:00:00 2001
From: Scott Tsai <address@hidden>
Date: Thu, 29 Oct 2009 10:56:12 +0800
Subject: [PATCH] virtio-net: drop large packets when no mergeable_rx_bufs

Currently virtio-net calls exit(1) when it receives a large packet and
the VIRTIO_NET_F_MRG_RXBUF feature isn't set.
Change it to drop the packet instead.

see: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/458521
---
 hw/virtio-net.c |    8 +++++++-
 hw/virtio.c     |   33 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 1 deletions(-)

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index ce8e6cb..2e6725b 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -502,6 +502,8 @@ static int receive_filter(VirtIONet *n, const uint8_t *buf, int size)
     return 0;
 }
 
+int buffer_fits_in_virtqueue_top(VirtQueue *vq, int size);
+
 static ssize_t virtio_net_receive2(VLANClientState *vc, const uint8_t *buf, size_t size, int raw)
 {
     VirtIONet *n = vc->opaque;
@@ -518,6 +520,10 @@ static ssize_t virtio_net_receive2(VLANClientState *vc, const uint8_t *buf, size_t size, int raw)
     hdr_len = n->mergeable_rx_bufs ?
         sizeof(struct virtio_net_hdr_mrg_rxbuf) : sizeof(struct virtio_net_hdr);
 
+    /* drop packet instead of truncating it */
+    if (!n->mergeable_rx_bufs && !buffer_fits_in_virtqueue_top(n->rx_vq, hdr_len + size))
+        return size;
+
     offset = i = 0;
 
     while (offset < size) {
@@ -531,7 +537,7 @@ static ssize_t virtio_net_receive2(VLANClientState *vc, const uint8_t *buf, size_t size, int raw)
             virtqueue_pop(n->rx_vq, &elem) == 0) {
             if (i == 0)
                 return -1;
-            fprintf(stderr, "virtio-net truncating packet\n");
+            fprintf(stderr, "virtio-net truncating packet: mergeable_rx_bufs: %d\n", n->mergeable_rx_bufs);
             exit(1);
         }
 
diff --git a/hw/virtio.c b/hw/virtio.c
index 41e7ca2..d9e0353 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -356,6 +356,39 @@ int virtqueue_avail_bytes(VirtQueue *vq, int in_bytes, int out_bytes)
     return 0;
 }
 
+/* buffer_fits_in_virtqueue_top: returns true if a 'size' byte buffer could fit in the
+ * input descriptors that virtqueue_pop() would have returned
+ */
+int buffer_fits_in_virtqueue_top(VirtQueue *vq, int size);
+
+int buffer_fits_in_virtqueue_top(VirtQueue *vq, int size)
+{
+    unsigned int i, max;
+    int input_iov_len_sum;
+    target_phys_addr_t desc_pa;
+
+    if (!virtqueue_num_heads(vq, vq->last_avail_idx))
+        return 0;
+
+    desc_pa = vq->vring.desc;
+    max = vq->vring.num;
+    i = virtqueue_get_head(vq, vq->last_avail_idx);
+
+    if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_INDIRECT) {
+        /* loop over the indirect descriptor table */
+        max = vring_desc_len(desc_pa, i) / sizeof(VRingDesc);
+        desc_pa = vring_desc_addr(desc_pa, i);
+        i = 0;
+    }
+
+    input_iov_len_sum = 0;
+    do {
+        if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_WRITE)
+            input_iov_len_sum += vring_desc_len(desc_pa, i);
+    } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
+    return input_iov_len_sum >= size;
+}
+
 int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
 {
     unsigned int i, head, max;
-- 
1.6.2.5
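
P.S. for anyone not familiar with the vring layout: here's a self-contained
model of the check buffer_fits_in_virtqueue_top() performs, minus the qemu
plumbing and the indirect descriptor case (the constants and struct follow
the virtio spec; the rest is illustrative, not code from the patch):

        #include <stdint.h>
        #include <stdio.h>

        #define VRING_DESC_F_NEXT  1  /* chain continues at 'next' */
        #define VRING_DESC_F_WRITE 2  /* device-writable, i.e. a receive buffer */

        struct vring_desc {
            uint64_t addr;
            uint32_t len;
            uint16_t flags;
            uint16_t next;
        };

        /* sum the writable space in one descriptor chain starting at 'head';
         * a packet of 'size' bytes fits iff that sum covers it */
        static int chain_fits(const struct vring_desc *table, unsigned head,
                              unsigned size)
        {
            unsigned i = head, sum = 0;

            for (;;) {
                if (table[i].flags & VRING_DESC_F_WRITE)
                    sum += table[i].len;
                if (!(table[i].flags & VRING_DESC_F_NEXT))
                    break;
                i = table[i].next;
            }
            return sum >= size;
        }

        int main(void)
        {
            /* one 1524 byte receive buffer, as a pre-MRG_RXBUF guest posts */
            struct vring_desc table[1] = {
                { .addr = 0, .len = 1524, .flags = VRING_DESC_F_WRITE, .next = 0 }
            };

            printf("1534 byte packet fits: %d\n", chain_fits(table, 0, 1534)); /* 0 */
            printf("1514 byte packet fits: %d\n", chain_fits(table, 0, 1514)); /* 1 */
            return 0;
        }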