
Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized


From: David Engraf
Subject: Re: [Qemu-devel] [PATCH v2] qemu_mutex_iothread_locked not correctly synchronized
Date: Wed, 25 Nov 2015 13:31:27 +0100

Hi Paolo,

please check the new version. I removed the change to the
iothread_locked variable itself. But I still need to set the correct
value of iothread_locked when using qemu_cond_wait. This fixes my race
condition on Windows, where prepare_mmio_access is called and checks
whether the lock is already held by the current thread.

David


Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the function
qemu_mutex_iothread_locked to avoid recursive locking of the iothread
lock. But the function qemu_cond_wait directly acquires the lock without
modifying iothread_locked. This leads to a condition where
qemu_tcg_wait_io_event acquires the mutex by using qemu_cond_wait and
later calls tcg_exec_all -> tcg_cpu_exec -> cpu_exec ->
prepare_mmio_access, which checks whether the current thread already
holds the lock. It does so by checking the variable iothread_locked,
which is still 0, so the function tries to acquire the mutex again.

The patch adds a function called qemu_cond_wait_iothread to keep
iothread_locked in sync.

Signed-off-by: David Engraf <address@hidden>
---
 cpus.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/cpus.c b/cpus.c
index 877bd70..e4d2d4b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -70,6 +70,8 @@ static CPUState *next_cpu;
 int64_t max_delay;
 int64_t max_advance;

+static void qemu_cond_wait_iothread(QemuCond *cond);
+
 /* vcpu throttling controls */
 static QEMUTimer *throttle_timer;
 static unsigned int throttle_percentage;
@@ -920,7 +922,7 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
     while (!atomic_mb_read(&wi.done)) {
         CPUState *self_cpu = current_cpu;

-        qemu_cond_wait(&qemu_work_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_work_cond);
         current_cpu = self_cpu;
     }
 }
@@ -998,11 +1000,11 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
        /* Start accounting real time to the virtual clock if the CPUs
           are idle.  */
         qemu_clock_warp(QEMU_CLOCK_VIRTUAL);
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(cpu->halt_cond);
     }

     while (iothread_requesting_mutex) {
-        qemu_cond_wait(&qemu_io_proceeded_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_io_proceeded_cond);
     }

     CPU_FOREACH(cpu) {
@@ -1013,7 +1015,7 @@ static void qemu_tcg_wait_io_event(CPUState *cpu)
 static void qemu_kvm_wait_io_event(CPUState *cpu)
 {
     while (cpu_thread_is_idle(cpu)) {
-        qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(cpu->halt_cond);
     }

     qemu_kvm_eat_signals(cpu);
@@ -1123,7 +1125,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)

     /* wait for initial kick-off after machine start */
     while (first_cpu->stopped) {
-        qemu_cond_wait(first_cpu->halt_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(first_cpu->halt_cond);

         /* process any pending work */
         CPU_FOREACH(cpu) {
@@ -1215,6 +1217,13 @@ bool qemu_mutex_iothread_locked(void)
     return iothread_locked;
 }

+static void qemu_cond_wait_iothread(QemuCond *cond)
+{
+    iothread_locked = false;
+    qemu_cond_wait(cond, &qemu_global_mutex);
+    iothread_locked = true;
+}
+
 void qemu_mutex_lock_iothread(void)
 {
     atomic_inc(&iothread_requesting_mutex);
@@ -1277,7 +1286,7 @@ void pause_all_vcpus(void)
     }

     while (!all_vcpus_paused()) {
-        qemu_cond_wait(&qemu_pause_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_pause_cond);
         CPU_FOREACH(cpu) {
             qemu_cpu_kick(cpu);
         }
@@ -1326,7 +1335,7 @@ static void qemu_tcg_init_vcpu(CPUState *cpu)
         cpu->hThread = qemu_thread_get_handle(cpu->thread);
 #endif
         while (!cpu->created) {
-            qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+            qemu_cond_wait_iothread(&qemu_cpu_cond);
         }
         tcg_cpu_thread = cpu->thread;
     } else {
@@ -1347,7 +1356,7 @@ static void qemu_kvm_start_vcpu(CPUState *cpu)
     qemu_thread_create(cpu->thread, thread_name, qemu_kvm_cpu_thread_fn,
                        cpu, QEMU_THREAD_JOINABLE);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_cpu_cond);
     }
 }

@@ -1363,7 +1372,7 @@ static void qemu_dummy_start_vcpu(CPUState *cpu)
     qemu_thread_create(cpu->thread, thread_name, qemu_dummy_cpu_thread_fn, cpu,
                        QEMU_THREAD_JOINABLE);
     while (!cpu->created) {
-        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+        qemu_cond_wait_iothread(&qemu_cpu_cond);
     }
 }

--
1.9.1
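
As a side note on why toggling the flag around the wait is safe: a
condition-variable wait atomically releases the mutex while the thread
blocks and reacquires it before returning, so the per-thread flag in
qemu_cond_wait_iothread mirrors the real lock state at every point where
the thread actually runs. A minimal sketch of the same pattern in plain
pthreads (illustrative only, not QEMU code; the names locked_by_me and
ready are made up for the example):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static __thread bool locked_by_me;  /* plays the role of iothread_locked */
static bool ready;                  /* example wait predicate */

static void wait_until_ready(void)
{
    /* Caller holds the mutex here, so locked_by_me is true. */
    while (!ready) {
        locked_by_me = false;            /* the wait releases the mutex... */
        pthread_cond_wait(&cond, &lock);
        locked_by_me = true;             /* ...and reacquires it on return */
    }
}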



On 25.11.2015 at 10:08, David Engraf wrote:
Hi Paolo,

On 24.11.2015 at 16:54, Paolo Bonzini wrote:
On 24/11/2015 16:43, David Engraf wrote:
Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da introduced the function
qemu_mutex_iothread_locked to avoid recursive locking of the iothread
lock. The iothread_locked variable uses the __thread attribute which
results in a per thread storage location whereas the qemu_global_mutex
is not thread specific. This leads to race conditions because the
variable is not in sync between threads.

Frankly, this makes no sense.  You're modifying the
qemu_mutex_iothread_locked function to return whether _some_ thread has
taken the mutex, but the function tells you whether _the calling_ thread
has taken the mutex (that's the point about recursive locking).  This
breaks the callers of qemu_mutex_iothread_locked completely.

The variable need not be in sync between the different threads; each
thread only needs to know about itself.  The current code works because:
1) the iothread mutex is not locked before qemu_mutex_lock_iothread

2) the iothread mutex is not locked after qemu_mutex_unlock_iothread

3) qemu_cond_wait doesn't matter because the thread does not run during
a qemu_cond_wait.
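
For context on points 1) and 2), the lock wrappers that maintain the
per-thread flag look roughly like this (simplified from cpus.c of this
era; the slow-path signalling via iothread_requesting_mutex is elided):

static __thread bool iothread_locked;

bool qemu_mutex_iothread_locked(void)
{
    return iothread_locked;  /* true only if *this* thread took the lock */
}

void qemu_mutex_lock_iothread(void)
{
    /* signalling of the TCG thread via iothread_requesting_mutex elided */
    qemu_mutex_lock(&qemu_global_mutex);
    iothread_locked = true;
}

void qemu_mutex_unlock_iothread(void)
{
    iothread_locked = false;
    qemu_mutex_unlock(&qemu_global_mutex);
}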

But this is my issue on Windows:

qemu_tcg_cpu_thread_fn
-> qemu_tcg_wait_io_event
    -> qemu_cond_wait acquires the mutex

qemu_tcg_cpu_thread_fn
-> tcg_exec_all -> tcg_cpu_exec -> cpu_exec
    -> cpu_exec ends up calling prepare_mmio_access
       prepare_mmio_access uses qemu_mutex_iothread_locked to check
       whether the lock is already held by this thread, but it returns 0
       because qemu_cond_wait doesn't set iothread_locked even though the
       mutex is locked. Thus the function calls qemu_mutex_lock_iothread,
       which tries to acquire the mutex again. On Windows this results in
       an assertion:

Assertion failed: mutex->owner == 0, file util/qemu-thread-win32.c, line 60
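
For reference, the check in question (prepare_mmio_access in exec.c) is
roughly the following, abridged; the real function also handles flushing
of coalesced MMIO:

static bool prepare_mmio_access(MemoryRegion *mr)
{
    bool release_lock = false;

    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
        /* After qemu_cond_wait, iothread_locked is still false even
         * though this thread holds the mutex, so the lock is taken a
         * second time here and the owner assertion fires on Windows. */
        qemu_mutex_lock_iothread();
        release_lock = true;
    }
    return release_lock;
}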

I'm using a self compiled, cross gcc-5.2.0 mingw compiler.

David

--
Mit freundlichen Grüßen/Best regards,

David Engraf
Product Engineer

SYSGO AG
Office Mainz
Am Pfaffenstein 14 / D-55270 Klein-Winternheim / Germany
Phone: +49-6136-9948-0 / Fax: +49-6136-9948-10
VoIP: SIP:address@hidden
E-mail: address@hidden / Web: http://www.sysgo.com

Handelsregister/Commercial Registry: HRB Mainz 90 HRB 8066
Vorstand/Executive Board: Knut Degen (CEO), Kai Sablotny (COO)
Aufsichtsratsvorsitzender/Supervisory Board Chairman: Pascal Bouchiat
USt-Id-Nr./VAT-Id-No.: DE 149062328


