
Re: [Qemu-discuss] Network performance issues between VM and host


From: Mike Lovell
Subject: Re: [Qemu-discuss] Network performance issues between VM and host
Date: Thu, 09 Aug 2012 12:36:49 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0

On 08/09/2012 08:46 AM, Chris Pierzycki wrote:
Hi,

I'm trying to deploy a few KVM servers based on the following AMD 6xxx-based motherboard:


I'm using Fedora 17 (kernel 3.4.4-5.fc17.x86_64) for the KVM host and CentOS 6 (kernel 2.6.32-220.el6.x86_64) for the VM.  Everything seems to work fine until it comes to the VM's network performance.  I was able to narrow the problem down to traffic between the host and the VM by running tests with iperf.  Here are the results:

# iperf -c 10.10.11.18 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.10.11.18, TCP port 5001
TCP window size:  197 KByte (default)
------------------------------------------------------------
[  5] local 10.10.11.250 port 46737 connected with 10.10.11.18 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  2.17 GBytes  1.86 Gbits/sec
[  4] local 10.10.11.250 port 5001 connected with 10.10.11.18 port 35905
[  4]  0.0-10.0 sec  6.45 GBytes  5.54 Gbits/sec

The performance isn't the same in each direction, and I find even the higher 5.54 Gbits/sec figure to be slow.  Switching from the virtio-net driver to e1000 actually slows the connection down to ~280 Mbits/sec.  I've been trying to figure out the problem for a while and have tried nearly all of the tunable options to improve performance; while some do make a difference, it isn't a big one.
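
One thing I haven't been able to rule out is whether vhost-net is actually being used for the virtio interface.  So far I've only done some generic checks (the id and interface names below are placeholders, not necessarily what this box uses):

# lsmod | grep vhost_net
# ps -ef | grep vhost
(a vhost-<pid> kernel thread per running guest should show up if the tap device is using vhost-net)
# qemu-kvm ... -netdev tap,id=net0,ifname=tap0,vhost=on -device virtio-net-pci,netdev=net0
(roughly the kind of invocation I'd expect with vhost enabled; I'm not certain this matches how the guest is actually being started here)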

At home I have a budget server with a Core i3 processor and the same software setup, and there I am able to get over 16 Gbits/sec:

# iperf -c 192.168.7.60 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.7.60, TCP port 5001
TCP window size:  279 KByte (default)
------------------------------------------------------------
[  5] local 192.168.7.99 port 54182 connected with 192.168.7.60 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  19.4 GBytes  16.6 Gbits/sec
[  4] local 192.168.7.99 port 5001 connected with 192.168.7.60 port 43845
[  4]  0.0-10.0 sec  24.2 GBytes  20.8 Gbits/sec

So the little server is getting roughly 4x the network performance.  At this point I'm not sure what the issue is anymore.  I think it could relate to one of the following (some basic checks are sketched below the list):
  • Kernel
  • Qemu
  • Bridge
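
For the bridge item, the only checks I know to run are fairly basic ones (interface names below are examples; vnet0 is just the usual tap naming, not necessarily what this host uses):

# brctl show
# ethtool -k eth0
# ethtool -k vnet0
(looking at whether gso/gro/tso offloads are enabled on the physical NIC and on the guest's tap device; I've read that offload settings can matter for bridged traffic, but I can't say that's the problem here)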
On another interesting note, there is a 10G Myricom card in the server, and I am also getting inconsistent numbers when testing between two hosts, with a maximum of 6 Gbits/sec.  Any help would be great.  Thank you!
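
PS: the host-to-host numbers over the Myricom card are all from single-stream runs; I haven't yet tried parallel streams to see whether that changes anything, something like (the address is a placeholder for the other host):

# iperf -c <other host> -P 4
(-P 4 opens four parallel client streams; I'd need to actually rerun this before drawing any conclusions)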

I have seen somewhat similar behavior between similar types of hosts.  I have a server with multiple Opteron 6168 processors (12 cores, 1.9 GHz) and a desktop with an i7 (quad core, 3.4 GHz).  In some network testing I've done on both systems, the i7 gets several times the bandwidth between the VMs and the host that the Opteron does (I don't remember the exact numbers).  I haven't dug too deeply into it myself since it was still 'good enough', but I suspect it's due to the per-core speed of the processors.  As I understand it, iperf and the QEMU hypervisor (though not the guest vCPUs, depending on version) are single-threaded, so they will only pass traffic as fast as a single core can handle it.  The Opterons have lots of slower cores; the i7 has fewer but much faster cores, so the i7 can push the traffic through much faster.  That is just my theory on why, not by any means definitive, and I don't have any other backing for it.
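
If you want to sanity-check that theory, one rough way (just a sketch, using the address from your first test) would be to watch per-core load while a run is going and see whether a single core pegs at 100%, and then compare one stream against several parallel ones:

# iperf -c 10.10.11.18 -t 30 &
# mpstat -P ALL 1
(mpstat is in the sysstat package; plain 'top' and then pressing 1 to show individual cores works too)
# iperf -c 10.10.11.18 -P 4
(if the aggregate of the parallel streams scales well past the single-stream number, that would point at a per-core limit rather than the bridge or the kernel)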

I haven't had time to go back and tweak things for better performance or to investigate further, so I don't have anything concrete to suggest next.  I'm just sharing a similar observation and a theory about what it might be.

mike
