Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking on tap


From: Wangkai (Kevin,C)
Subject: Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking on tap
Date: Thu, 17 Jul 2014 03:43:33 +0000


> -----Original Message-----
> From: Stefan Hajnoczi [mailto:address@hidden]
> Sent: Tuesday, July 15, 2014 11:00 PM
> To: Wangkai (Kevin,C)
> Cc: Stefan Hajnoczi; Lee yang; address@hidden;
> address@hidden
> Subject: Re: [Qemu-devel] [PATCH] Tap: fix vcpu long time io blocking
> on tap
> 
> On Mon, Jul 14, 2014 at 10:44:58AM +0000, Wangkai (Kevin,C) wrote:
> > Here is the detailed network setup:
> >
> > +--------------------------------------------+
> > | The host add tap1 and eth10 to bridge 'br1'|                     +--------+
> > | +------------+                             |                     |  send  |
> > | |   VM  eth1-+-tap1 --- bridge --- eth10 --+---------------------+ packets|
> > | +------------+                             |                     |        |
> > +--------------------------------------------+                     +--------+
> >
> > QEMU starts the VM with virtio, using a tap interface; the options are:
> > -net nic,vlan=101,model=virtio -net
> > tap,vlan=101,ifname=tap1,script=no,downscript=no
> 
> Use the newer -netdev/-device syntax to get offload support and
> slightly better performance:
> 
> -netdev tap,id=tap0,ifname=tap1,script=no,downscript=no \
> -device virtio-net-pci,netdev=tap0
> 
> > And tap1 and eth10 are added to bridge br1 on the host:
> > brctl addif br1 tap1
> > brctl addif br1 eth10
> >
> > total recv 505387 time 2000925 us:
> > means a single call to tap_send() handled 505,387 packets; the packet
> > payload was 300 bytes, and the time spent in tap_send() was 2,000,925
> > microseconds, measured by recording a timestamp at the start and end of
> > tap_send().
> >
> > We were just testing the performance of the VM.
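
As a point of reference, here is a minimal sketch of this kind of measurement, assuming plain gettimeofday() timestamps around the receive loop; the function name tap_send_timed and the packet counter are illustrative, not the actual instrumentation used in the test:

    #include <stdio.h>
    #include <sys/time.h>

    /* Hypothetical instrumentation sketch, not the real tap_send():
     * take a timestamp before and after the receive loop and report
     * how many packets a single invocation handled. */
    static void tap_send_timed(void)
    {
        struct timeval start, end;
        long packets = 0;

        gettimeofday(&start, NULL);

        /* ... the tap_send() receive loop would run here, incrementing
         *     'packets' once per packet read from the tap fd ... */

        gettimeofday(&end, NULL);

        long us = (end.tv_sec - start.tv_sec) * 1000000L +
                  (end.tv_usec - start.tv_usec);
        printf("total recv %ld time %ld us\n", packets, us);
    }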
> 
> That is 150 MB of incoming packets in a single tap_send().  Network rx
> queues are maybe a few thousand packets, so I wonder what is going on here.
> 
> Maybe more packets are arriving while QEMU is reading them and we keep
> looping.  That's strange though because the virtio-net rx virtqueue
> should fill up (it only has 256 entries).
> 
> Can you investigate more and find out exactly what is going on?  It's
> not clear yet that adding a budget is the solution or just hiding a
> deeper problem.
> 
> Stefan
[Wangkai (Kevin,C)] 

Hi Stefan,

I think I have found the reason why the 256-entry virtqueue cannot stop
QEMU from continuing to receive packets.

I started an SMP guest with 2 cores: one core was pending on I/O while the
other core was receiving packets, so QEMU was filling the virtqueue while
the guest kernel was moving packets out of the queue and processing them.

They were racing: only when the guest had queued up enough packets and was
consuming them more slowly than QEMU was delivering them did the virtqueue
fill up and let a single receive pass finish.
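
To make the race concrete: a budget caps how many packets one tap_send() call may drain before returning to the event loop, so incoming traffic can no longer keep a single call looping. Below is a minimal, simplified sketch of that idea, assuming a hypothetical TAP_SEND_BUDGET constant and deliver() callback; it is not the actual patch or QEMU's tap code:

    #include <stdint.h>
    #include <unistd.h>

    #define TAP_SEND_BUDGET 256   /* assumed per-call cap, for illustration only */

    /* Simplified stand-in for a budgeted tap receive loop (not the actual
     * QEMU code): read at most TAP_SEND_BUDGET packets per invocation,
     * then return so the event loop and vcpu threads can make progress;
     * leftover packets are picked up on the next poll iteration. */
    static void tap_send_budgeted(int tap_fd,
                                  void (*deliver)(const uint8_t *buf, size_t len))
    {
        uint8_t buf[68 * 1024];   /* large enough for a GSO-sized frame */
        int budget = TAP_SEND_BUDGET;

        while (budget-- > 0) {
            ssize_t len = read(tap_fd, buf, sizeof(buf));
            if (len <= 0) {
                break;            /* no more packets queued on the tap */
            }
            deliver(buf, (size_t)len);  /* hand the frame to the guest NIC model */
        }
    }

With a cap like this, even if the sender outpaces the guest, one invocation returns after at most TAP_SEND_BUDGET reads instead of blocking the vcpu for a long stretch.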

I also tried starting the guest again with the -netdev/-device syntax, but
saw very little improvement.

Regards
Wangkai

