From: Alexander Graf
Subject: Re: [Qemu-devel] [Qemu-ppc] [RFC PATCH] smp: autodetect numbers of threads per core
Date: Mon, 14 Apr 2014 13:15:04 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.4.0


On 15.11.13 17:58, Alexey Kardashevskiy wrote:
> On 16.11.2013 0:15, Alexander Graf wrote:
>
>> On 15.11.2013 at 00:12, Alexey Kardashevskiy <address@hidden> wrote:

>>> At the moment only a whole CPU core can be assigned to a KVM guest.
>>> Since POWER7/8 support several threads per core, we want all threads
>>> of a core to go to the same KVM guest, so every time we run QEMU with
>>> -enable-kvm on POWER we have to add -smp X,threads=(4|8) (4 for
>>> POWER7 and 8 for POWER8).
>>>
>>> This patch tries to read the smp_threads number from the accelerator
>>> and falls back to the default value (1) if the accelerator did not
>>> care to change it.

>>> Signed-off-by: Alexey Kardashevskiy <address@hidden>
>>> ---
>>>
>>> (!!!)
>>>
>>> The usual question - what would be the normal way of doing this?
>>> What does this patch break? I cannot think of anything right now.
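
For concreteness, the fallback the patch description above talks about (take a threads-per-core hint from the accelerator, keep 1 otherwise) might look roughly like the sketch below. Every name in it is an assumption for illustration, not the actual patch or a real QEMU interface:

    /* Sketch only: all names below are made up for illustration and
     * are not the actual patch or a real QEMU API. */

    static int smp_threads = 1;   /* QEMU's current global default */

    /* Assumed accelerator hook: would return 4 on POWER7, 8 on
     * POWER8, and 1 for accelerators that do not care. */
    static int accel_default_threads_per_core(void)
    {
        return 1;
    }

    static void smp_autodetect_threads(void)
    {
        int n = accel_default_threads_per_core();

        /* Only override the built-in default; an explicit
         * -smp ...,threads=N from the user would already have
         * changed smp_threads. */
        if (smp_threads == 1 && n > 1) {
            smp_threads = n;
        }
    }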
>> Is this really what the user wants? On p7 you can run in no-smt, smt2
>> and smt4 mode. Today we simply default to no-smt. Changing defaults
>> is usually a bad thing.
>
> Defaulting to 1 thread on P7 is a bad thing (the other threads stay
> unused - what is good about that?), and the only reason I know of why
> it is still threads=1 is that it is hard to get a patch upstream to
> change this default.

threads=1 improves single-thread performance significantly. The thread itself is faster when it runs in SMT1 mode. Also we don't have to kick other threads out of the guest context, making every guest/host transition faster.

Overall, it's really just a random default. I'm not sure it makes a lot of sense to change it.

However, could we be really smart here? How does ppc64_cpu --smt=off work? It only turns off the unused vcpus, right? Is there any way we could actually not even enter unused vcpus at all? Then we could indeed always expose the maximum available number of threads to the guest and let that one decide what to do.
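
As far as I know, ppc64_cpu --smt=off works through the standard sysfs CPU hotplug interface: it leaves thread 0 of each core online and writes 0 to the online file of every sibling thread. A rough sketch of that mechanism (the thread and CPU counts here are assumptions, not something the tool hardcodes):

    /* Rough sketch of what ppc64_cpu --smt=off is believed to do:
     * offline every secondary SMT thread via sysfs CPU hotplug. */
    #include <stdio.h>

    static int set_cpu_online(int cpu, int online)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f) {
            return -1;
        }
        fprintf(f, "%d\n", online);
        return fclose(f);
    }

    int main(void)
    {
        int threads_per_core = 4;  /* assumed: POWER7 in SMT4 mode */
        int ncpus = 16;            /* assumed: 4 cores x 4 threads */
        int cpu;

        /* Keep thread 0 of each core, offline the siblings. */
        for (cpu = 0; cpu < ncpus; cpu++) {
            if (cpu % threads_per_core != 0) {
                set_cpu_online(cpu, 0);
            }
        }
        return 0;
    }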


Alex



