From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCH] rtl8139: flush queued packets when RxBufPtr is written
Date: Mon, 27 May 2013 12:19:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130510 Thunderbird/17.0.6

On 27.05.2013 10:32, Stefan Hajnoczi wrote:
> On Mon, May 27, 2013 at 08:15:42AM +0200, Peter Lieven wrote:
>> I occasionally have seen a probably related problem in the past. It mainly
>> happened with rtl8139 under WinXP, where we most likely use rtl8139 due to
>> the lack of shipped e1000 drivers.
>>
>> My question is whether you see an increasing dropped-packet count on the tap
>> device when this problem occurs?

>> tap36     Link encap:Ethernet  HWaddr b2:84:23:c0:e2:c0
>>           inet6 addr: fe80::b084:23ff:fec0:e2c0/64 Scope:Link
>>           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>>           RX packets:5816096 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:3878744 errors:0 dropped:13775 overruns:0 carrier:0
>>           collisions:0 txqueuelen:500
>>           RX bytes:5161769434 (5.1 GB)  TX bytes:380415916 (380.4 MB)
> My reading of the tun code is that you will see TX dropped increase.  This
> is because tun keeps a finite-size queue of tx packets.  Since QEMU
> userspace is not monitoring the tap fd anymore, we'll never drain the
> queue and soon enough the TX dropped counter will begin incrementing.
Ok, so this would fit.
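
For reference, the fix named in the subject line works along these lines: when
the guest driver writes RxBufPtr (CAPR), i.e. frees space in the receive ring,
the device model asks the net layer to flush packets that were queued while
receive was blocked, which also gets QEMU reading the tap fd again. Below is a
minimal sketch, assuming QEMU's internal rtl8139 types (RTL8139State, MOD2) and
the net-layer helpers qemu_get_queue()/qemu_flush_queued_packets(); the exact
hunk in the merged patch may differ.

/* Sketch only: assumes QEMU's internal rtl8139 state type and net-layer
 * helpers; not the literal hunk from the merged patch. */
static void rtl8139_RxBufPtr_write(RTL8139State *s, uint32_t val)
{
    /* Guest driver advances CAPR: it has consumed received data and
     * freed space in the receive ring buffer. */
    s->RxBufPtr = MOD2(val, s->RxBufferSize);

    /* The register value is off by 16 in the hardware interface. */
    s->RxBufPtr = MOD2(s->RxBufPtr + 0x10, s->RxBufferSize);

    /* Receive space may be available again, so retry any packets the net
     * layer queued while can_receive() said no.  Without this call QEMU
     * never goes back to reading the tap fd, the kernel's tun TX queue
     * fills up, and the tap's "TX dropped" counter starts climbing. */
    qemu_flush_queued_packets(qemu_get_queue(s->nic));
}

The important part is the final flush call; the rest is meant to stand in for
whatever the existing CAPR write handling does.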


>> In my case as well, the only option to recover without shutting down the
>> whole vServer is live migration to another node.
>>
>> However, I also see this problem under qemu-kvm-1.2.0, while Oliver reported
>> that it does not happen there.
> Yes, the patch that exposes this problem was only merged in 1.2.1.
Can you say which patch exactly? I cherry-picked some patches by hand.

> Can you still reproduce the problem now that the patch has been merged
> into qemu.git/master?
Unfortunately, I have no reliable way of reproducing the issue. It only happens
from time to time.

Peter



