From: Alex Bligh
Subject: Re: [Qemu-devel] [PATCH] main-loop: Don't lock starve io-threads when main_loop_tlg has pending events
Date: Tue, 8 Oct 2013 20:48:26 +0100

On 8 Oct 2013, at 20:10, Hans de Goede wrote:

> I noticed today that current qemu master would hang as soon as Xorg starts in
> the guest when using qxl + a Linux guest. This message would be printed:
> main-loop: WARNING: I/O thread spun for 1000 iterations
> 
> And from then on the guest hangs and qemu consumes 100% cpu, bisecting pointed
> out commit 7b595f35d89d73bc69c35bf3980a89c420e8a44b:
> "aio / timers: Convert mainloop to use timeout"
> 
> After looking at that commit I had a hunch the problem might be blocking
> main_loop_wait calls being turned into non-blocking ones (and thus never
> releasing the io-lock); a debug printf confirmed this was happening at
> the moment of the hang, so I wrote this patch, which fixes the hang for me
> and seems like a good idea in general.
> 
> Signed-off-by: Hans de Goede <address@hidden>
> ---
> main-loop.c | 5 +++++
> 1 file changed, 5 insertions(+)
> 
> diff --git a/main-loop.c b/main-loop.c
> index c3c9c28..921c939 100644
> --- a/main-loop.c
> +++ b/main-loop.c
> @@ -480,6 +480,11 @@ int main_loop_wait(int nonblocking)
>                                       timerlistgroup_deadline_ns(
>                                           &main_loop_tlg));
> 
> +    /* When not non-blocking always allow io-threads to acquire the lock */
> +    if (timeout != 0 && timeout_ns == 0) {
> +        timeout_ns = 1;
> +    }
> +
>     ret = os_host_main_loop_wait(timeout_ns);
>     qemu_iohandler_poll(gpollfds, ret);
> #ifdef CONFIG_SLIRP
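
For readers without the tree at hand, this is roughly the code path being
described, reconstructed from the quoted hunk and QEMU's timer API
(qemu_soonest_timeout() and timerlistgroup_deadline_ns() appear in the hunk
itself; the exact shape of the surrounding code is an approximation):

    /* Approximate sketch of main_loop_wait() around the patched hunk. */
    uint32_t timeout = UINT32_MAX;
    int64_t timeout_ns;

    if (nonblocking) {
        timeout = 0;                    /* caller asked for a pure poll */
    }
    /* ... fd fill-in code runs here and may also lower 'timeout' ... */

    if (timeout == UINT32_MAX) {
        timeout_ns = -1;                /* block until an fd becomes ready */
    } else {
        timeout_ns = (uint64_t)timeout * (int64_t)SCALE_MS;
    }

    /* Clamp to the earliest main-loop timer deadline.  When a timer has
     * already expired, timerlistgroup_deadline_ns() returns 0, so a call
     * that was meant to block becomes a zero-timeout poll and
     * os_host_main_loop_wait(0) returns without ever giving up the
     * iothread lock -- the spinning/starvation described above. */
    timeout_ns = qemu_soonest_timeout(timeout_ns,
                                      timerlistgroup_deadline_ns(
                                          &main_loop_tlg));

    ret = os_host_main_loop_wait(timeout_ns);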

I /think/ you might mean "if (!nonblocking && timeout_ns == 0)",
as timeout can be zero on a blocking call at this stage (i.e.
when there is a timer which has already expired).
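
Spelled out against the same hunk, that variant would look roughly like this
(assuming the check is meant to key off main_loop_wait()'s 'nonblocking'
parameter, visible in the hunk header above):

    /* Sketch of the suggested check: force a minimal timeout only when the
     * caller asked for a blocking wait, regardless of the intermediate
     * 'timeout' value, which can already be 0 at this point. */
    if (!nonblocking && timeout_ns == 0) {
        timeout_ns = 1;
    }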

I'm not entirely sure I understand the problem from your
description - I'll answer this in reply to your subsequent message.

-- 
Alex Bligh