qemu-discuss
Re: [Qemu-discuss] Puzzling performance comparison with KVM and Hyper-V


From: Stephan von Krawczynski
Subject: Re: [Qemu-discuss] Puzzling performance comparison with KVM and Hyper-V
Date: Tue, 21 Jul 2015 16:48:12 +0200

On Tue, 21 Jul 2015 16:16:22 +0200
Tim Bell <address@hidden> wrote:

> 
> On Tue, 21 Jul 2015, Carlos Torres wrote:
> 
> > 
> > 
> > On Jul 21, 2015 5:45 AM, Tim Bell <address@hidden> wrote:
> > >
> > > We are running a compute intensive application on a variety of virtual 
> > > machines at CERN (a subset of Spec 2006). We have found two puzzling 
> > > results during this benchmarking and can’t find the root cause after 
> > > significant effort.
> > >
> > > 1.      Large virtual machines on KVM (32 cores) show a much worse 
> > > performance than smaller ones
> > >
> > > 2.      Hyper-V overhead is significantly less compared to KVM
> > >
> > > We have tuned the KSM configuration, run with EPT off, and applied CPU 
> > > pinning, but the overheads remain significant.
> > >
> > > 4 VMs 8 cores:  2.5% overhead compared to bare metal
> > >
> > > 2 VMs 16 cores: 8.4% overhead compared to bare metal
> > >
> > > 1 VM 32 cores: 12.9% overhead compared to bare metal
> > >
> > > Running the same test using Hyper-V produced
> > >
> > > 4 VMs 8 cores: 0.8% overhead compared to bare metal
> > >
> > > 1 VM 32 cores: 3.3% overhead compared to bare metal
> > >
> > > Can anyone suggest how to tune KVM to get equivalent performance to 
> > > Hyper-V ?
> > >
> > > Configuration
> > >
> > > Hardware is Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, SMT enabled, 
> > > 2GB/core
> > >
> > > CentOS 7 KVM hypervisor with CentOS 6 guest
> > >
> > > Windows 2012 Hyper-V hypervisor with CentOS 6 guest
> > >
> > > Benchmark is HEPSpec, the c++ subset of Spec 2006
> > >
> > > The benchmarks are run in parallel according to the number of cores. 
> > > Thus, the 1x32 test runs 32 copies of the benchmark in a single VM on 
> > > the hypervisor. The 4x8 test runs 4 VMs on the same hypervisor, with 
> > > each VM running 8 copies of the benchmark simultaneously.
> > >
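For clarity, an overhead percentage of this kind is presumably the relative throughput loss versus bare metal; a minimal sketch, assuming overhead = (1 - VM score / bare-metal score) x 100, with made-up scores chosen only to mirror the shape of the numbers above (the thread does not give the raw HEPSpec results):

```python
def overhead_pct(bare_metal_score: float, vm_score: float) -> float:
    """Relative throughput loss of a VM setup versus bare metal, in percent."""
    return (1 - vm_score / bare_metal_score) * 100

# Made-up HEPSpec-style aggregate scores, NOT actual CERN measurements.
bare = 400.0
print(round(overhead_pct(bare, 390.0), 1))  # -> 2.5  (a 4x8-style case)
print(round(overhead_pct(bare, 348.4), 1))  # -> 12.9 (a 1x32-style case)
```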
> > 
> > Tim,
> > 
> > This is really interesting, it reminds me of an issue we found on IBM Power 
> > hypervisor, related to the allocation by the scheduler on NUMA hardware.
> > 
> > I'm not a KVM expert by any means, but I'll try to help.
> > 
> > I'm assuming power-saving features are disabled, that the kernel's scaling 
> > governor is set to performance, and that you have pinned the qemu/kvm 
> > processes on the host to different physical CPU cores.
> >
> 
> We've set the governor to performance (via tuned as the virtual guest 
> profile). The pinning has been done as we are not overcommitting.
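In libvirt terms, pinning of this kind is normally expressed in the guest's domain XML; a sketch for a hypothetical 8-vCPU guest follows (the host CPU numbers are placeholders, not the actual CERN configuration, and must match the host's real topology):

```xml
<!-- Hypothetical 8-vCPU guest: one vCPU per host logical CPU.
     cpuset values are placeholders; verify against lscpu so that
     two vCPUs do not land on SMT siblings of the same physical core. -->
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='5'/>
  <vcpupin vcpu='6' cpuset='6'/>
  <vcpupin vcpu='7' cpuset='7'/>
</cputune>
```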

Have you worked out the pinning? The CPU numbers are _not_ in line with the
core/SMT distribution over the physical dies.
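On Linux the logical-CPU-to-core mapping can be read from `lscpu -p=CPU,CORE,SOCKET` (or from /sys/devices/system/cpu/*/topology/). A small sketch of checking a pin set against such a map; the sample topology is invented for illustration and will differ from the dual-socket E5-2650 v2 hosts in the thread:

```python
def parse_topology(lscpu_p: str) -> dict:
    """Map logical CPU -> (socket, core) from `lscpu -p=CPU,CORE,SOCKET` output."""
    topo = {}
    for line in lscpu_p.strip().splitlines():
        if line.startswith("#"):  # lscpu -p prefixes comments with '#'
            continue
        cpu, core, socket = (int(field) for field in line.split(","))
        topo[cpu] = (socket, core)
    return topo

def shares_a_core(pins, topo) -> bool:
    """True if any two pinned logical CPUs are SMT siblings (same socket+core)."""
    seen = set()
    for cpu in pins:
        key = topo[cpu]
        if key in seen:
            return True
        seen.add(key)
    return False

# Invented 16-CPU layout: CPUs 0-7 are distinct cores (four per socket),
# CPUs 8-15 are their SMT siblings -- a common but not universal numbering.
sample = "# CPU,Core,Socket\n" + "\n".join(
    f"{cpu},{cpu % 8},{(cpu % 8) // 4}" for cpu in range(16)
)
topo = parse_topology(sample)
print(shares_a_core([0, 1, 2, 3], topo))  # -> False (four distinct cores)
print(shares_a_core([0, 8], topo))        # -> True  (0 and 8 are siblings here)
```

The point being: pinning vCPUs to "consecutive" CPU numbers can silently stack two vCPUs onto one physical core, which would hurt exactly the large-VM case.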


-- 
Regards,
Stephan


