qemu-devel

[Qemu-devel] Re: [PATCH] kvm/x86: enlarge number of possible CPUID leaves


From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH] kvm/x86: enlarge number of possible CPUID leaves
Date: Wed, 08 Dec 2010 14:14:02 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101103 Fedora/1.0-0.33.b2pre.fc14 Lightning/1.0b3pre Thunderbird/3.1.6

On 12/01/2010 02:55 PM, Andre Przywara wrote:
Avi Kivity wrote:
On 12/01/2010 01:17 PM, Andre Przywara wrote:
Currently the number of CPUID leaves KVM handles is limited to 40.
My desktop machine (AthlonII) already has 35 and future CPUs will
expand this well beyond the limit. Extend the limit to 80 to make
room for future processors.

Signed-off-by: Andre Przywara <address@hidden>
---
  arch/x86/include/asm/kvm_host.h |    2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)

Hi,
I found that either KVM or QEMU (possibly both) is broken with respect
to handling more CPUID entries than the limit dictates. KVM will
return -E2BIG, which is the same error as if the user hadn't provided
enough space to hold all entries. QEMU will then keep enlarging the
allocated memory until it runs into an out-of-memory condition.
I have tried to fix this by teaching KVM how to deal with a capped
number of entries (there are some bugs in the current code), but this
would limit the number of CPUID entries KVM handles, which would surely
cut off the most recently appended PV leaves.
A proper fix would be to make this allocation dynamic. Is this a
feasible approach, or would it lead to issues or side-effects?


Well, there has to be some limit, or userspace can allocate unbounded kernel memory.
But this limit should not be a compile-time constant, but a runtime one. The needed size depends on the host CPU (plus the KVM PV leaves) and thus could be determined once for all VMs and vCPUs at module load-time. But then we cannot use the static array allocation we currently have in struct kvm_vcpu_arch:
    struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];
So we would use a kind of dynamic allocation bounded by the host CPU's needs. But for the code it does not make much difference compared to a "real" dynamic allocation.

Also, we could implement kvm_dev_ioctl_get_supported_cpuid without the vmalloc if we don't mind a few dozen copy_to_user() calls in this function. Then we would not need the limit in GET_SUPPORTED_CPUID at all, but it would strike us again at KVM_SET_CPUID[2], where we might not be able to fulfill the promises we made earlier.
Having said this, what about that:
kvm_dev_ioctl_get_supported_cpuid is independent of any VM or vCPU (it is a system ioctl), so it could be run once at initialization, which would reduce the ioctl implementation to a plain bounded copy. Would you want such a patch (removing the vmalloc and maybe even the limit)?

Making GET_SUPPORTED_CPUID data static would be an improvement, yes.

--
error compiling committee.c: too many arguments to function



