From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH v5 2/3] cpus: Fix throttling during vm_stop
Date: Thu, 5 Sep 2019 14:56:56 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

On 8/26/19 5:37 AM, Yury Kotov wrote:
> The throttling thread sleeps inside the VCPU thread. For a high throttle
> percentage this sleep exceeds 10ms: e.g. 15ms at 60%, 990ms at 99%.
> vm_stop() kicks all VCPUs and waits for them. It is called at the end of
> migration, and because of the long sleep the migration downtime can
> exceed 100ms even with a downtime-limit of 1ms.
> Use qemu_cond_timedwait for high percentages so the sleep wakes up
> during vm_stop().
> 
> Signed-off-by: Yury Kotov <address@hidden>
> ---
>  cpus.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
> 

> @@ -790,11 +792,20 @@ static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
>  
>      pct = (double)cpu_throttle_get_percentage()/100;
>      throttle_ratio = pct / (1 - pct);
> -    sleeptime_ns = (long)(throttle_ratio * CPU_THROTTLE_TIMESLICE_NS);
> -
> -    qemu_mutex_unlock_iothread();
> -    g_usleep(sleeptime_ns / 1000); /* Convert ns to us for usleep call */
> -    qemu_mutex_lock_iothread();
> +    /* Add 1ns to fix double's rounding error (like 0.9999999...) */
> +    sleeptime_ns = (int64_t)(throttle_ratio * CPU_THROTTLE_TIMESLICE_NS + 1);

The cast to int64_t is not strictly necessary here, but doesn't hurt
(since it shows you DO know you are going from double to 64-bit int).
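
To make the rounding concern concrete: with pct = 0.60 the double
quotient 0.6/0.4 lands just below 1.5, so the product is
14999999.999... and a bare cast would truncate to 14999999 ns. A tiny
standalone sketch (not QEMU code; CPU_THROTTLE_TIMESLICE_NS is the
10ms timeslice from cpus.c):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CPU_THROTTLE_TIMESLICE_NS 10000000LL  /* 10ms, as in cpus.c */

    int main(void)
    {
        double pct = 0.60;               /* 60% throttle */
        double ratio = pct / (1 - pct);  /* exactly 1.5 on paper... */
        /* ...but in doubles 0.6/0.4 lands just below 1.5, so the
         * product is 14999999.999... and a bare cast truncates. */
        int64_t bare  = (int64_t)(ratio * CPU_THROTTLE_TIMESLICE_NS);
        int64_t fixed = (int64_t)(ratio * CPU_THROTTLE_TIMESLICE_NS + 1);
        printf("bare=%" PRId64 " fixed=%" PRId64 "\n", bare, fixed);
        return 0;
    }

This prints bare=14999999 fixed=15000000, which is exactly the
off-by-a-nanosecond undershoot the "+ 1" guards against.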

> +    endtime_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + sleeptime_ns;
> +    while (sleeptime_ns > 0 && !cpu->stop) {
> +        if (sleeptime_ns > SCALE_MS) {
> +            qemu_cond_timedwait(cpu->halt_cond, &qemu_global_mutex,
> +                                sleeptime_ns / SCALE_MS);
> +        } else {
> +            qemu_mutex_unlock_iothread();
> +            g_usleep(sleeptime_ns / SCALE_US);
> +            qemu_mutex_lock_iothread();
> +        }
> +        sleeptime_ns = endtime_ns - qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
> +    }

Looks reasonable.
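
One note on the units for readers following along: SCALE_MS is 1000000
and SCALE_US is 1000 (nanoseconds per unit, per include/qemu/timer.h),
so the two divisions convert sleeptime_ns into what each primitive
expects. A trivial standalone check:

    /* Unit-conversion sketch; the SCALE_* values mirror
     * include/qemu/timer.h (all expressed in nanoseconds). */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCALE_MS 1000000  /* ns per millisecond */
    #define SCALE_US 1000     /* ns per microsecond */

    int main(void)
    {
        int64_t sleeptime_ns = 990 * (int64_t)SCALE_MS; /* 99%: 990ms */
        /* > 1ms: qemu_cond_timedwait takes milliseconds */
        printf("timedwait %" PRIi64 " ms\n", sleeptime_ns / SCALE_MS);
        sleeptime_ns = 500 * (int64_t)SCALE_US;         /* residual 0.5ms */
        /* <= 1ms: g_usleep takes microseconds */
        printf("usleep %" PRIi64 " us\n", sleeptime_ns / SCALE_US);
        return 0;
    }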

(I wonder whether an alternative approach would be any easier or more
efficient: do a poll() or similar instead of g_usleep, with a
pipe-to-self that we write to in the same scenarios where
cpu->halt_cond would be broadcast, so the sleeping poll wakes up
responsively. But don't rewrite the patch just because of my question.)
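
For reference, that pipe-to-self idea is the classic self-pipe trick.
A minimal illustration with made-up names (a hedged sketch, not a
patch proposal) could look like:

    /* Self-pipe trick: a poll()-based sleep that another thread (or, as
     * here, an earlier write) can cut short, analogous to broadcasting
     * cpu->halt_cond. Names are illustrative, error handling trimmed. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    static int wake_pipe[2];  /* [0] = read end, [1] = write end */

    /* Sleep up to timeout_ms, returning early if someone wakes us. */
    static void throttled_sleep(int timeout_ms)
    {
        struct pollfd pfd = { .fd = wake_pipe[0], .events = POLLIN };
        if (poll(&pfd, 1, timeout_ms) > 0 && (pfd.revents & POLLIN)) {
            char buf[16];
            (void)read(wake_pipe[0], buf, sizeof(buf)); /* drain wakeups */
        }
    }

    /* Called wherever cpu->halt_cond would be broadcast (e.g. vm_stop). */
    static void kick_sleeper(void)
    {
        (void)write(wake_pipe[1], "x", 1);
    }

    int main(void)
    {
        if (pipe(wake_pipe) != 0) {
            return 1;
        }
        kick_sleeper();        /* pre-kick: the sleep returns immediately */
        throttled_sleep(990);  /* would otherwise block for ~990ms */
        puts("woken early");
        return 0;
    }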

Reviewed-by: Eric Blake <address@hidden>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


