From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [PATCH] fix smp with tcg mode and --enable-io-thread
Date: Wed, 23 Jun 2010 11:19:47 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100423 Lightning/1.0b1 Thunderbird/3.0.4
On 06/23/2010 02:42 AM, Jan Kiszka wrote:
> Jan Kiszka wrote:
>> Marcelo Tosatti wrote:
>>> On Mon, Jun 21, 2010 at 10:58:32PM +0200, Jan Kiszka wrote:
>>>> Jan Kiszka wrote:
>>>>> Marcelo Tosatti wrote:
>>>>>> Clear exit_request when iothread grabs the global lock.
>>>>>>
>>>>>> Signed-off-by: Marcelo Tosatti <address@hidden>
>>>>>>
>>>>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>>>>> index 026980a..74cb973 100644
>>>>>> --- a/cpu-exec.c
>>>>>> +++ b/cpu-exec.c
>>>>>> @@ -236,10 +236,8 @@ int cpu_exec(CPUState *env1)
>>>>>>      asm("");
>>>>>>      env = env1;
>>>>>>
>>>>>> -    if (exit_request) {
>>>>>> +    if (exit_request)
>>>>>>          env->exit_request = 1;
>>>>>> -        exit_request = 0;
>>>>>> -    }
>>>>>
>>>>> Coding style...
>>>>>
>>>>>>
>>>>>>  #if defined(TARGET_I386)
>>>>>>      if (!kvm_enabled()) {
>>>>>>
>>>>>> diff --git a/cpus.c b/cpus.c
>>>>>> index fcd0f09..ef1ab22 100644
>>>>>> --- a/cpus.c
>>>>>> +++ b/cpus.c
>>>>>> @@ -598,6 +598,7 @@ void qemu_mutex_lock_iothread(void)
>>>>>>          }
>>>>>>          qemu_mutex_unlock(&qemu_fair_mutex);
>>>>>>      }
>>>>>> +    exit_request = 0;
>>>>>>  }
>>>>>>
>>>>>>  void qemu_mutex_unlock_iothread(void)
>>>>
>>>> I looked into this a bit as well, and that's what I also have in my
>>>> queue. But things are still widely broken: pause_all_vcpus and
>>>> run_on_cpu, as there is no guarantee that all VCPUs regularly call
>>>> into qemu_wait_io_event. Also, breakpoints don't work, and not only
>>>> in SMP mode.
>>>
>>> This fixes pause for me:
>>
>> Partially. It caused regressions on the SMP scheduling without the
>> early loop exit in my patch. I will break up my changes later today
>> and post them as a series.
>
> After fixing the APIC/IOAPIC fallouts, the series is almost done.
> Unfortunately, host & guest debugging is totally broken for
> CONFIG_IOTHREAD (I also noticed that [1] is still not merged).
Did it not get applied to uq/master or has there just not been a merge request yet?
Regards,

Anthony Liguori
> I will try to fix this first as it may require some more refactorings.
>
> Jan
>
> [1] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/52718
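For readers following the patch quoted above, here is a minimal, self-contained pthreads sketch (not QEMU code) of the behaviour it changes. The names (vcpu_thread, iothread_lock, cpu_exit_request) and the demo structure are invented for illustration; only the placement of the exit_request clearing mirrors the patch: cpu_exec() no longer consumes the global flag itself, and the iothread clears it once it holds the global lock again, so every VCPU, not just the first one to notice the kick, drops out of its execution loop.

/*
 * Sketch under assumed names; build with: cc -std=c11 -pthread demo.c
 * Models: VCPU threads hold a global lock while "executing guest code";
 * an iothread raises a global exit_request to make them drop the lock.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NR_VCPUS        2
#define IOTHREAD_ROUNDS 20

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int exit_request;            /* global "leave the exec loop" flag */
static int cpu_exit_request[NR_VCPUS];     /* per-VCPU flag, owned by that VCPU  */
static int done;                           /* written only under global_lock     */

/* Analogue of the cpus.c hunk: raise the request, take the lock, and clear
 * the request only once the lock is held again, after the running VCPUs had
 * a chance to observe it. */
static void iothread_lock(void)
{
    atomic_store(&exit_request, 1);
    pthread_mutex_lock(&global_lock);
    atomic_store(&exit_request, 0);
}

static void iothread_unlock(void)
{
    pthread_mutex_unlock(&global_lock);
}

static void *vcpu_thread(void *arg)
{
    int cpu = *(int *)arg;
    int kicks_seen = 0;

    pthread_mutex_lock(&global_lock);
    while (!done) {
        /* cpu_exec() analogue: "run guest code" while holding the lock. */
        while (!cpu_exit_request[cpu] && !done) {
            /* Analogue of the cpu-exec.c hunk: propagate the global request
             * to this VCPU but do NOT clear it here; with the old code the
             * first VCPU to get here cleared it, so on SMP the others never
             * left their loop and the iothread could wait forever. */
            if (atomic_load(&exit_request)) {
                cpu_exit_request[cpu] = 1;
            }
            /* a translation block's worth of "guest work" would go here */
        }
        if (cpu_exit_request[cpu]) {
            cpu_exit_request[cpu] = 0;
            kicks_seen++;
        }

        /* qemu_wait_io_event() analogue: release the lock so the iothread
         * can run, then take it back and resume execution. */
        pthread_mutex_unlock(&global_lock);
        usleep(1000);
        pthread_mutex_lock(&global_lock);
    }
    pthread_mutex_unlock(&global_lock);

    printf("vcpu %d observed %d kicks\n", cpu, kicks_seen);
    return NULL;
}

int main(void)
{
    pthread_t threads[NR_VCPUS];
    int ids[NR_VCPUS];

    for (int i = 0; i < NR_VCPUS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, vcpu_thread, &ids[i]);
    }

    for (int i = 0; i < IOTHREAD_ROUNDS; i++) {
        iothread_lock();
        /* device emulation / main-loop work would happen here */
        iothread_unlock();
        usleep(2000);
    }

    iothread_lock();          /* kick the VCPUs one last time to shut down */
    done = 1;
    iothread_unlock();

    for (int i = 0; i < NR_VCPUS; i++) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}

The unlock/sleep/relock step above also only approximates Jan's remaining concern: pause_all_vcpus and run_on_cpu can make progress only if every VCPU regularly leaves its execution loop and reaches qemu_wait_io_event, which the quoted discussion says is not yet guaranteed.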