qemu-devel

From: Chao Peng
Subject: [Qemu-devel] [PATCH] target-i386: kvm: cache KVM_GET_SUPPORTED_CPUID data
Date: Mon, 13 Jun 2016 10:21:27 +0800

The KVM_GET_SUPPORTED_CPUID ioctl is called frequently during CPU
initialization. Depending on the CPU features and CPU count, the number
of calls can be extremely high, which slows down QEMU boot
significantly. In our testing, we saw 5922 calls with the switches:

    -cpu SandyBridge -smp 6,sockets=6,cores=1,threads=1

In total, these ioctl calls take more than 100ms, which is almost half
of the overall QEMU startup time.

In most cases the data returned by two different invocations is
unchanged, which means we can cache the data and avoid trapping into
the kernel a second time. For the cache to be safe, one assumption is
desirable: that the ioctl is stateless. This is not entirely true,
however, at least for some CPUID leaves.

The good news is that even though the ioctl is not fully stateless, we
can still cache the return value as long as the data is unchanged for
the leaves we are interested in. This should be true for most
invocations, and it appears to hold for all call sites in the current
code.

A non-cached version can be introduced later if a refresh is ever
required.

Signed-off-by: Chao Peng <address@hidden>
---
 target-i386/kvm.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index abf50e6..1a4d751 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -107,6 +107,8 @@ static int has_xsave;
 static int has_xcrs;
 static int has_pit_state2;
 
+static struct kvm_cpuid2 *cpuid_cache;
+
 int kvm_has_pit_state2(void)
 {
     return has_pit_state2;
@@ -200,9 +202,14 @@ static struct kvm_cpuid2 *get_supported_cpuid(KVMState *s)
 {
     struct kvm_cpuid2 *cpuid;
     int max = 1;
+
+    if (cpuid_cache != NULL) {
+        return cpuid_cache;
+    }
     while ((cpuid = try_get_cpuid(s, max)) == NULL) {
         max *= 2;
     }
+    cpuid_cache = cpuid;
     return cpuid;
 }
 
@@ -320,8 +327,6 @@ uint32_t kvm_arch_get_supported_cpuid(KVMState *s, uint32_t function,
         ret |= cpuid_1_edx & CPUID_EXT2_AMD_ALIASES;
     }
 
-    g_free(cpuid);
-
     /* fallback for older kernels */
     if ((function == KVM_CPUID_FEATURES) && !found) {
         ret = get_para_features(s);
@@ -1090,6 +1095,12 @@ static void register_smram_listener(Notifier *n, void *unused)
                                  &smram_address_space, 1);
 }
 
+static Notifier kvm_exit_notifier;
+static void kvm_arch_destroy(Notifier *n, void *unused)
+{
+    g_free(cpuid_cache);
+}
+
 int kvm_arch_init(MachineState *ms, KVMState *s)
 {
     uint64_t identity_base = 0xfffbc000;
@@ -1165,6 +1176,9 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
         smram_machine_done.notify = register_smram_listener;
         qemu_add_machine_init_done_notifier(&smram_machine_done);
     }
+
+    kvm_exit_notifier.notify = kvm_arch_destroy;
+    qemu_add_exit_notifier(&kvm_exit_notifier);
     return 0;
 }
 
-- 
1.8.3.1
