From: Rik van Riel
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
Date: Wed, 01 Dec 2010 14:24:20 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100806 Fedora/3.1.2-1.fc13 Lightning/1.0b2pre Thunderbird/3.1.2

On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
>> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>>
>> The pause loop exiting & directed yield patches I am working on
>> preserve inter-vcpu fairness by round robining among the vcpus
>> inside one KVM guest.
>
> I don't necessarily think that's enough.
>
> Suppose you've got 4 vcpus, one is holding a lock and 3 are spinning.
> They'll end up all three donating some time to the 4th.
>
> The only way to make that fair again is if due to future contention the
> 4th cpu donates an equal amount of time back to the resp. cpus it got
> time from. Guest lock patterns and host scheduling don't provide this
> guarantee.
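
To put rough numbers on that example (a toy model, not a scheduler;
the slice and spin times below are made up for illustration): with one
lock holder and three spinners that each donate the rest of their
slice on a pause-loop exit, the lock holder's runtime balloons:

/*
 * Toy model of the 4-vcpu example above (made-up numbers):
 * vcpu 3 holds a lock; vcpus 0-2 hit pause-loop exits and each
 * donate the remainder of their timeslice to vcpu 3.
 */
#include <stdio.h>

#define NR_VCPUS   4
#define SLICE_US   3000   /* nominal timeslice per round */
#define SPIN_US    500    /* time burned spinning before PLE fires */

int main(void)
{
    long runtime[NR_VCPUS] = { 0 };
    int lock_holder = 3;

    for (int v = 0; v < NR_VCPUS; v++) {
        if (v == lock_holder) {
            runtime[v] += SLICE_US;          /* runs its full slice */
        } else {
            runtime[v] += SPIN_US;           /* spins, then yields... */
            runtime[lock_holder] += SLICE_US - SPIN_US; /* ...donating the rest */
        }
    }

    for (int v = 0; v < NR_VCPUS; v++)
        printf("vcpu%d: %ld us\n", v, runtime[v]);
    /* vcpu3 ends up with 3000 + 3*2500 = 10500 us versus 500 us for
     * each spinner -- the imbalance described above. */
    return 0;
}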

You have no guarantees when running virtualized: guest
CPU time can be taken away by another guest just as
easily as by another VCPU.

Even if we equalized the amount of CPU time each VCPU
ends up getting across some time interval, that is no
guarantee they get useful work done, or that the time
gets fairly divided among _user processes_ running inside
the guest.

The VCPU could be running something lock-happy when
it temporarily gives up the CPU, and get the extra CPU
time back while running something userspace-intensive.

In between, it may well have scheduled another task
(allowing that task to get the extra CPU time).

I'm not convinced the kind of fairness you suggest is
possible or useful.
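
For reference, here is a rough sketch of the round-robin directed
yield mentioned at the top of the thread. The structures and names
below are hypothetical, invented for illustration; in KVM the actual
candidate selection happens in kvm_vcpu_on_spin():

#include <stdio.h>

#define MAX_VCPUS 64

struct vcpu {
    int id;
    int runnable;            /* schedulable and not currently running */
};

struct guest {
    struct vcpu vcpus[MAX_VCPUS];
    int nr_vcpus;
    int last_boosted;        /* rotates so boosts spread across vcpus */
};

/* On a pause-loop exit, scan the guest's vcpus starting just past
 * the last one we boosted and yield to the first runnable candidate.
 * Starting at last_boosted + 1 is what round-robins the boost (and
 * hence the donated time) around the guest instead of always
 * favoring vcpu 0. */
static struct vcpu *pick_yield_target(struct guest *g)
{
    for (int i = 1; i <= g->nr_vcpus; i++) {
        int idx = (g->last_boosted + i) % g->nr_vcpus;

        if (g->vcpus[idx].runnable) {
            g->last_boosted = idx;
            return &g->vcpus[idx];
        }
    }
    return NULL;             /* nobody to donate to */
}

int main(void)
{
    struct guest g = { .nr_vcpus = 4, .last_boosted = 0 };

    for (int i = 0; i < g.nr_vcpus; i++) {
        g.vcpus[i].id = i;
        g.vcpus[i].runnable = 1;
    }

    /* Three consecutive pause-loop exits boost vcpus 1, 2, 3 in turn. */
    for (int n = 0; n < 3; n++)
        printf("boost vcpu%d\n", pick_yield_target(&g)->id);
    return 0;
}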

--
All rights reversed


