From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
Date: Sun, 12 Dec 2010 23:09:59 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Sun, Dec 12, 2010 at 10:56:34PM +0200, Michael S. Tsirkin wrote:
> On Sun, Dec 12, 2010 at 10:42:28PM +0200, Michael S. Tsirkin wrote:
> > On Sun, Dec 12, 2010 at 10:41:28PM +0200, Michael S. Tsirkin wrote:
> > > On Sun, Dec 12, 2010 at 03:02:04PM +0000, Stefan Hajnoczi wrote:
> > > > See below for the v5 changelog.
> > > > 
> > > > Due to lack of connectivity I am sending from GMail.  Git should
> > > > retain my address@hidden From address.
> > > > 
> > > > Virtqueue notify is currently handled synchronously in userspace
> > > > virtio.  This prevents the vcpu from executing guest code while
> > > > hardware emulation code handles the notify.
> > > > 
> > > > On systems that support KVM, the ioeventfd mechanism can be used to
> > > > make virtqueue notify a lightweight exit by deferring hardware
> > > > emulation to the iothread and allowing the VM to continue execution.
> > > > This model is similar to how vhost receives virtqueue notifies.
> > > > 
> > > > The result of this change is improved performance for userspace
> > > > virtio devices.  Virtio-blk throughput increases especially for
> > > > multithreaded scenarios and virtio-net transmit throughput increases
> > > > substantially.
> > > 
> > > Interestingly, I see decreased throughput for small-message
> > > host-to-guest netperf runs.
> > > 
> > > The command that I used was:
> > > netperf -H $vguest -- -m 200
> > > 
> > > And the results are:
> > > - with ioeventfd=off
> > > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.104 (11.0.0.104) port 0 AF_INET : demo
> > > Recv   Send    Send                          Utilization       Service Demand
> > > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> > > Size   Size    Size     Time     Throughput  local    remote   local   remote
> > > bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
> > > 
> > >  87380  16384    200    10.00      3035.48   15.50    99.30    6.695   2.680
> > > 
> > > - with ioeventfd=on
> > > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.104 (11.0.0.104) port 0 AF_INET : demo
> > > Recv   Send    Send                          Utilization       Service Demand
> > > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> > > Size   Size    Size     Time     Throughput  local    remote   local   remote
> > > bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
> > > 
> > >  87380  16384    200    10.00      1770.95   18.16    51.65    13.442  2.389
> > > 
> > > 
> > > Do you see this behaviour too?
> > 
> > Just a note: this is with the patchset ported to qemu-kvm.
> 
> And just another note: the trend is reversed for larger messages,
> e.g. with 1.5k messages ioeventfd=on outperforms ioeventfd=off.

Another datapoint where I see a regression is with 4000 byte messages
for guest to host traffic.

ioeventfd=off
set_up_server could not establish a listen endpoint for port 12865 with family AF_UNSPEC
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.4 (11.0.0.4) port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384   4000    10.00      7717.56   98.80    15.11    1.049   2.566  

ioeventfd=on
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.4 (11.0.0.4) port 0 AF_INET : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384   4000    10.00      3965.86   87.69    15.29    1.811   5.055  

-- 
MST


