From: Igor Mammedov
Subject: Re: [PATCH v4 16/21] i386: track explicit 'hv-*' features enablement/disablement
Date: Fri, 12 Feb 2021 17:01:13 +0100

On Fri, 12 Feb 2021 16:19:24 +0100
Vitaly Kuznetsov <vkuznets@redhat.com> wrote:

> Igor Mammedov <imammedo@redhat.com> writes:
> 
> > On Fri, 12 Feb 2021 09:45:52 +0100
> > Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >  
> >> Igor Mammedov <imammedo@redhat.com> writes:
> >>   
> >> > On Wed, 10 Feb 2021 17:40:28 +0100
> >> > Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
> >> >    
> >> >> Sometimes we'd like to know which features were explicitly enabled
> >> >> and which were explicitly disabled on the command line. E.g. it
> >> >> seems logical to handle 'hv_passthrough,hv_feature=off' as "enable
> >> >> everything supported by the host except for hv_feature" but this
> >> >> doesn't seem to be possible with the current 'hyperv_features' bit
> >> >> array. Introduce 'hv_features_on'/'hv_features_off' add-ons and
> >> >> track explicit enablement/disablement there.
> >> >> 
> >> >> Note, it doesn't seem to be possible to fill the 'hyperv_features'
> >> >> array during CPU creation time when 'hv-passthrough' is specified
> >> >> and we're running on an older kernel without
> >> >> KVM_CAP_SYS_HYPERV_CPUID support. To get the list of the supported
> >> >> Hyper-V features we need to actually create a KVM vCPU, and this
> >> >> happens much later.
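For reference, the tracking being proposed boils down to something like
the sketch below; the field and helper names are my guesses for
illustration, not necessarily what the series uses:

  #include <stdbool.h>
  #include <stdint.h>

  /* Guessed-for-illustration sketch of explicit on/off tracking;
   * names do not necessarily match the series. */
  struct hv_feature_state {
      uint64_t hyperv_features;   /* effective feature bits */
      uint64_t hv_features_on;    /* bits the user set with hv-foo=on */
      uint64_t hv_features_off;   /* bits the user set with hv-foo=off */
  };

  /* Property setter path: record both the effective value and the fact
   * that the user made an explicit choice, so that a later
   * 'hv-passthrough' expansion can honor hv-foo=off overrides. */
  static void hv_feature_set(struct hv_feature_state *s,
                             uint64_t bit, bool value)
  {
      if (value) {
          s->hyperv_features |= bit;
          s->hv_features_on  |= bit;
          s->hv_features_off &= ~bit;
      } else {
          s->hyperv_features &= ~bit;
          s->hv_features_off |= bit;
          s->hv_features_on  &= ~bit;
      }
  }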
> >> >
> >> > seems to me that we are going back to +/-feat parsing, this time
> >> > only for hyperv.
> >> > I'm not sure I like having it back, especially considering we are
> >> > going to drop "-feat" priority for x86.
> >> >
> >> > Now, about "impossible": see arm/kvm/virt, which creates a 'sample'
> >> > VCPU at KVM init time to probe for some CPU features in advance. You
> >> > can use a similar approach to prepare the value for hyperv_features.
> >> >    
> >> 
> >> KVM_CAP_SYS_HYPERV_CPUID is supported since 5.11 and eventually it'll
> >> make it to all kernels we care about so I'd really like to avoid any
> >> 'sample' CPUs for the time being. On/off parsing looks like a much
> >> lesser evil.  
> > When the minimum kernel version supported by QEMU gets there, you can
> > remove the scratch CPU from QEMU (if hyperv remains its sole user).
> >
> > Writing your own property parser like in this series is possible too,
> > but it adds extra fields to track state and hard-to-follow logic. On
> > top of that it adds a lot of churn by switching 'hv-*' features to
> > dynamic properties, which is not necessary if the scratch CPU approach
> > is used.
> >
> > Please try reusing the scratch CPU approach, see
> >   kvm_arm_get_host_cpu_features()
> > for an example. You will very likely end up with a simpler series,
> > compared to reinventing the wheel.
> 
> Even if I do that (and I seriously doubt it's going to be easier than
> just adding two 'u64's, kvm_arm_get_host_cpu_features() alone is 200

It does a lot more than what you need; kvm_arm_create_scratch_host_vcpu(),
which it uses, will do the job, and even that could be made smaller for
the hv use case.
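Roughly, a minimal probe could look like this (untested sketch, error
handling mostly omitted; on kernels with KVM_CAP_SYS_HYPERV_CPUID the
scratch vCPU wouldn't be needed at all):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Untested sketch: create a throwaway VM + vCPU just to ask KVM which
   * Hyper-V CPUID leaves it supports.  The caller allocates 'cpuid' with
   * enough entries and sets cpuid->nent before calling. */
  static int probe_hv_cpuid(struct kvm_cpuid2 *cpuid)
  {
      int kvm, vm, vcpu, ret;

      kvm = open("/dev/kvm", O_RDWR);
      if (kvm < 0) {
          return -1;
      }
      vm = ioctl(kvm, KVM_CREATE_VM, 0);
      vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
      ret = ioctl(vcpu, KVM_GET_SUPPORTED_HV_CPUID, cpuid);

      close(vcpu);
      close(vm);
      close(kvm);
      return ret;
  }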

> lines long) this is not going to give us what we need to distinguish
> between
> 
> 'hv-passthrough,hv-evmcs'
> 
> and 
> 
> 'hv-passthrough'
> 
> when 'hv-evmcs' *is* supported by the host. When the guest CPU lacks
> VMX we don't want to enable it unless it was requested explicitly (the
> former but not the latter).
Could you elaborate more on that? I.e. why do we need to distinguish, and
why do we need evmcs without VMX even if the user asked for it (will it
be usable)?
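To make sure we're talking about the same thing, my reading of the
proposed semantics is roughly this (self-contained sketch, all names
made up for illustration, not taken from the series):

  #include <stdbool.h>
  #include <stdint.h>

  #define HV_EVMCS_BIT (1ULL << 0)  /* stand-in for the evmcs feature bit */

  /* My reading of the proposal; not code from the series. */
  static bool should_enable_evmcs(uint64_t hv_features_on, /* explicit =on */
                                  bool hv_passthrough,
                                  bool host_has_evmcs,
                                  bool guest_has_vmx)
  {
      if (hv_features_on & HV_EVMCS_BIT) {
          return true;   /* 'hv-passthrough,hv-evmcs': user asked explicitly */
      }
      if (hv_passthrough && host_has_evmcs && guest_has_vmx) {
          return true;   /* plain 'hv-passthrough': only when VMX is present */
      }
      return false;
  }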

> Moreover, instead of just adding two 'u64's we're now doing an ioctl
> which can fail, be subject to limits, ... Creating and destroying a CPU
> is also slow. Sorry, I hardly see how this is better, maybe just from a
> 'code purity' point of view.
Readable and easy-to-maintain code is not a thing to neglect.