qemu-devel

Re: About the performance of hyper-v


From: Liang Li
Subject: Re: About the performance of hyper-v
Date: Tue, 1 Jun 2021 21:29:56 +0800

==========================
> > Analyze events for all VMs, all VCPUs:
> >
> >              VM-EXIT    Samples  Samples%   Time%  Min Time  Max Time  Avg time
> >            MSR_WRITE     924045    89.96%  81.10%    0.42us   68.42us    1.26us ( +-  0.07% )
> >            DR_ACCESS      44669     4.35%   2.36%    0.32us   50.74us    0.76us ( +-  0.32% )
> >   EXTERNAL_INTERRUPT      29809     2.90%   6.42%    0.66us   70.75us    3.10us ( +-  0.54% )
> >               VMCALL      17819     1.73%   5.21%    0.75us   15.64us    4.20us ( +-  0.33% )
> >
> > Total Samples:1027227, Total events handled time:1436343.94us.
> > ===============================
> >
> > The result shows the overhead increased. Enabling APICv helps to
> > reduce the VM-exits caused by interrupt injection, but on the other
> > hand there are a lot of VM-exits caused by APIC_EOI.
> >
> > When turning off Hyper-V and using KVM's APICv, there is no such
> > overhead.
>
> I think I know what's happening. We've asked Windows to use synthetic
> MSRs to access APIC (HV_APIC_ACCESS_RECOMMENDED) and this can't be
> accelerated in hardware.
>
> Could you please try the following hack (KVM):
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index c8f2592ccc99..66ee85a83e9a 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -145,6 +145,13 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
>                                            vcpu->arch.ia32_misc_enable_msr &
>                                            MSR_IA32_MISC_ENABLE_MWAIT);
>         }
> +
> +       /* Dirty hack: force HV_DEPRECATING_AEOI_RECOMMENDED. Not to be merged! */
> +       best = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_ENLIGHTMENT_INFO, 0);
> +       if (best) {
> +               best->eax &= ~HV_X64_APIC_ACCESS_RECOMMENDED;
> +               best->eax |= HV_DEPRECATING_AEOI_RECOMMENDED;
> +       }
>  }
>  EXPORT_SYMBOL_GPL(kvm_update_cpuid_runtime);
>
> > It seems turning on Hyper-V related features is not always the best
> > choice for a Windows guest.
>
> Generally it is; we'll just need to make QEMU smarter when setting the
> 'recommendation' bits.
>

Hi Vitaly,

I have tried your patch and found that it helps to reduce the overhead.
It works as well as setting the option
"<feature policy='disable' name='hypervisor'/>" in the libvirt XML.
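(For reference, a roughly equivalent setup can be sketched directly on the QEMU command line: keep the Hyper-V enlightenments but leave out hv-vapic, so the guest is not told to use the synthetic-MSR APIC access path. Flag availability depends on the QEMU version — check `qemu-system-x86_64 -cpu help`; the disk image path below is hypothetical.)

```shell
# Sketch: Hyper-V enlightenments without the APIC-access recommendation,
# so hardware APICv can be used. hv-vapic is deliberately omitted.
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host,hv-relaxed,hv-vpindex,hv-time,hv-synic,hv-stimer \
    -drive file=win10.qcow2,if=virtio
```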

=======with your patch and stimer enabled=====
Analyze events for all VMs, all VCPUs:

             VM-EXIT    Samples  Samples%   Time%  Min Time  Max Time  Avg time
          APIC_WRITE     172232    78.36%  68.99%    0.70us   47.71us    1.48us ( +-  0.18% )
           DR_ACCESS      19136     8.71%   4.42%    0.55us    4.42us    0.85us ( +-  0.32% )
  EXTERNAL_INTERRUPT      15921     7.24%  13.84%    0.87us   55.28us    3.21us ( +-  0.55% )
              VMCALL       6971     3.17%  10.34%    1.16us   12.02us    5.48us ( +-  0.49% )

Total Samples:219802, Total events handled time:369310.30us.

===========with hypervisor disabled=========

Analyze events for all VMs, all VCPUs:

             VM-EXIT    Samples  Samples%   Time%  Min Time  Max Time  Avg time
          APIC_WRITE     200482    78.51%  68.62%    0.64us   49.51us    1.37us ( +-  0.16% )
           DR_ACCESS      24235     9.49%   4.92%    0.55us    3.65us    0.81us ( +-  0.26% )
  EXTERNAL_INTERRUPT      17084     6.69%  13.20%    0.89us   56.38us    3.09us ( +-  0.53% )
              VMCALL       7124     2.79%   9.87%    1.26us   12.39us    5.54us ( +-  0.49% )
         EOI_INDUCED       5066     1.98%   1.36%    0.66us    2.64us    1.07us ( +-  0.25% )
      IO_INSTRUCTION        591     0.23%   1.27%    3.37us  673.23us    8.59us ( +- 13.69% )

Total Samples:255363, Total events handled time:399954.27us.
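(A quick sanity check on the totals quoted above — a sketch, assuming the three `perf kvm stat` runs cover comparable workloads; numbers are copied from the "Total Samples / Total events handled time" lines:)

```python
# Compare aggregate exit-handling cost across the three runs quoted above.
runs = {
    "hv-vapic enabled (original)": (1027227, 1436343.94),
    "patched (AEOI forced)":       (219802,  369310.30),
    "hypervisor bit disabled":     (255363,  399954.27),
}

for name, (samples, total_us) in runs.items():
    # average time spent handling one VM-exit
    print(f"{name}: {samples} exits, {total_us / samples:.2f}us avg")

base = runs["hv-vapic enabled (original)"][1]
patched = runs["patched (AEOI forced)"][1]
print(f"exit-handling time reduced by {100 * (1 - patched / base):.1f}%")
```

Per-exit cost goes up slightly (APIC_WRITE exits are heavier than the MSR_WRITE ones they replace), but both the exit count and the total handling time drop by roughly three quarters.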


Thanks!
Liang


