qemu-devel

From: Vitaly Kuznetsov
Subject: Re: [PATCH V4] target/i386/kvm: Add Hyper-V direct tlb flush support
Date: Wed, 13 Nov 2019 11:19:13 +0100

Roman Kagan <address@hidden> writes:

> On Wed, Nov 13, 2019 at 10:29:00AM +0100, Vitaly Kuznetsov wrote:
>> Roman Kagan <address@hidden> writes:
>> > On Tue, Nov 12, 2019 at 11:34:27AM +0800, address@hidden wrote:
>> >> From: Tianyu Lan <address@hidden>
>> >> 
>> >> Hyper-V direct tlb flush targets KVM running as a guest on
>> >> Hyper-V. Enabling it means that TLB flush hypercalls from its
>> >> guests are handled by the Level 0 hypervisor (Hyper-V) directly,
>> >> bypassing KVM in Level 1. Due to the different hypercall parameter
>> >> ABIs of Hyper-V and KVM, the KVM capabilities should be hidden
>> >> when Hyper-V direct tlb flush is enabled, otherwise KVM hypercalls
>> >> may be intercepted by Hyper-V. Add the new parameter
>> >> "hv-direct-tlbflush". Check expose_kvm and the Hyper-V tlb flush
>> >> capability status before enabling the feature.
>> >> 
>> >> Signed-off-by: Tianyu Lan <address@hidden>
>> >> ---
>> >> Change since v3:
>> >>        - Fix logic of Hyper-V passthrough mode with direct
>> >>        tlb flush.
>> >> 
>> >> Change since v2:
>> >>        - Update new feature description and name.
>> >>        - Change failure print log.
>> >> 
>> >> Change since v1:
>> >>        - Add direct tlb flush's Hyper-V property and use
>> >>        hv_cpuid_check_and_set() to check the dependency of tlbflush
>> >>        feature.
>> >>        - Make new feature work with Hyper-V passthrough mode.
>> >> ---
>> >>  docs/hyperv.txt   | 10 ++++++++++
>> >>  target/i386/cpu.c |  2 ++
>> >>  target/i386/cpu.h |  1 +
>> >>  target/i386/kvm.c | 24 ++++++++++++++++++++++++
>> >>  4 files changed, 37 insertions(+)
>> >> 
>> >> diff --git a/docs/hyperv.txt b/docs/hyperv.txt
>> >> index 8fdf25c829..140a5c7e44 100644
>> >> --- a/docs/hyperv.txt
>> >> +++ b/docs/hyperv.txt
>> >> @@ -184,6 +184,16 @@ enabled.
>> >>  
>> >>  Requires: hv-vpindex, hv-synic, hv-time, hv-stimer
>> >>  
>> >> +3.18. hv-direct-tlbflush
>> >> +========================
>> >> +Enable direct TLB flush for KVM when it is running as a nested
>> >> +hypervisor on top of Hyper-V. When enabled, TLB flush hypercalls from
>> >> +L2 guests are passed through to L0 (Hyper-V) for handling. Due to ABI
>> >> +differences between Hyper-V and KVM hypercalls, L2 guests will not be
>> >> +able to issue KVM hypercalls (as those could be mishandled by L0
>> >> +Hyper-V); this requires the KVM hypervisor signature to be hidden.
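As the new documentation implies, the property only makes sense with the
KVM signature hidden. An illustrative invocation (the accompanying hv-*
flags here are an assumption, not taken from the patch) might look like:

  qemu-system-x86_64 -enable-kvm \
      -cpu host,hv-vpindex,hv-tlbflush,hv-direct-tlbflush,kvm=off ...

where kvm=off clears expose_kvm, so the guest never tries to use the KVM
hypercall ABI in the first place.
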
>> >
>> > On a second thought, I wonder if this is the only conflict we have.
>> >
>> > In KVM, kvm_emulate_hypercall, when it sees Hyper-V hypercalls
>> > enabled, just calls kvm_hv_hypercall and returns.  I.e. once the
>> > userspace enables Hyper-V hypercalls (which QEMU does when any of the
>> > hv_* flags is given), KVM treats *all* hypercalls as Hyper-V ones and
>> > handles *no* KVM hypercalls.
>> 
>> Yes, but only after the guest enables Hyper-V hypercalls by writing to
>> HV_X64_MSR_HYPERCALL. E.g. if you run a Linux guest and add a couple of
>> hv_* flags on the QEMU command line, the guest will still be able to use
>> KVM hypercalls normally because Linux won't enable the Hyper-V hypercall
>> page.
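
To make this concrete, below is a minimal standalone model of the routing
rule being described. It is not kernel code; only the HV_X64_MSR_HYPERCALL
enable bit and the "all or nothing" routing behaviour are taken from the
discussion above, everything else is simplified.

  /* Toy model of the dispatch rule: once the guest sets the enable bit
   * in HV_X64_MSR_HYPERCALL, every hypercall is routed to the Hyper-V
   * handler and none to KVM's own hypercall ABI. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define HV_HYPERCALL_ENABLE_BIT 0x1u  /* bit 0 of HV_X64_MSR_HYPERCALL */

  struct guest {
      uint64_t hv_hypercall_msr;  /* last value the guest wrote to the MSR */
  };

  static bool hv_hypercall_enabled(const struct guest *g)
  {
      return g->hv_hypercall_msr & HV_HYPERCALL_ENABLE_BIT;
  }

  static const char *route_hypercall(const struct guest *g)
  {
      return hv_hypercall_enabled(g) ? "Hyper-V path" : "KVM path";
  }

  int main(void)
  {
      /* A Linux guest typically never enables the hypercall page ... */
      struct guest linux_guest   = { .hv_hypercall_msr = 0 };
      /* ... while a Windows guest writes the page GPA plus the enable bit. */
      struct guest windows_guest = { .hv_hypercall_msr = 0x40001 };

      printf("Linux guest hypercalls go to the %s\n", route_hypercall(&linux_guest));
      printf("Windows guest hypercalls go to the %s\n", route_hypercall(&windows_guest));
      return 0;
  }

This is why a guest started with a few hv-* flags but which never writes
the enable bit still reaches the KVM hypercall path.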
>
> Ah, you're right.  There's no conflict indeed; the guest makes a
> deliberate choice of which hypercall ABI to use.
>
> Then QEMU (or KVM on its own?) should only activate this flag in evmcs
> if it sees that the guest has enabled Hyper-V hypercalls.

That was my suggestion as well when the KVM patches were submitted, but
if I remember correctly Tianyu said that if we don't enable the 'direct
tlb flush' flag in the eVMCS on the first VMLAUNCH, the underlying
Hyper-V won't give us a second chance, so we can't enable it after the
guest writes to HV_X64_MSR_HYPERCALL. This is a very unfortunate
design/implementation.

-- 
Vitaly



