
Re: [Qemu-devel] kvm crashes with spice while loading qxl


From: Gerd Hoffmann
Subject: Re: [Qemu-devel] kvm crashes with spice while loading qxl
Date: Tue, 26 Apr 2011 10:53:04 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110318 Red Hat/3.1.9-3.el6_0 Thunderbird/3.1.9

  Hi,

[ ... back online now ... ]

>> /var/tmp/portage/app-emulation/qemu-kvm-0.14.0/work/qemu-kvm-0.14.0/qemu-kvm.c:1724:
>> kvm_mutex_unlock: Assertion `!cpu_single_env' failed.

> That's a spice bug. In fact, there are a lot of
> qemu_mutex_lock/unlock_iothread calls in that subsystem. I bet at least
> a few of them can cause even more subtle problems.

> Two general issues with dropping the global mutex like this:
>   - The caller of mutex_unlock is responsible for maintaining
>     cpu_single_env across the unlocked phase (that's related to the
>     abort above).

This is true for qemu-kvm only, right?

The qemu-kvm-specific patches that add the cpu_single_env tracking (not
polished yet) are here:

http://cgit.freedesktop.org/spice/qemu/log/?h=spice.kvm.v28
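
For illustration only (a minimal sketch of the idea, not the code from
that branch; example_run_unlocked is a made-up name, and it assumes the
qemu-kvm 0.14-era internals CPUState *cpu_single_env and
qemu_mutex_lock/unlock_iothread): the tracking amounts to parking
cpu_single_env before dropping the global mutex and restoring it after
re-acquiring it, so kvm_mutex_unlock()'s assert(!cpu_single_env) holds:

  static void example_run_unlocked(void (*fn)(void *), void *opaque)
  {
      CPUState *saved_env = cpu_single_env;   /* NULL outside vcpu context */

      cpu_single_env = NULL;                  /* satisfy kvm_mutex_unlock()'s assert */
      qemu_mutex_unlock_iothread();

      fn(opaque);                             /* work done without the global mutex */

      qemu_mutex_lock_iothread();
      cpu_single_env = saved_env;             /* restore for the running vcpu */
  }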

>   - Dropping the lock in the middle of a callback is risky. That may
>     enable re-entry into code sections that weren't designed for it.

Hmm, indeed.
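
To make that hazard concrete, here is a made-up sketch (ExampleState and
example_ioport_read are hypothetical, not qemu code): once the handler
drops the global mutex, another vcpu can take it, dispatch I/O to the
same device and find it half-updated:

  /* purely illustrative device state */
  typedef struct ExampleState {
      bool busy;
      uint32_t reg;
  } ExampleState;

  static uint32_t example_ioport_read(void *opaque, uint32_t addr)
  {
      ExampleState *s = opaque;

      s->busy = true;                  /* device state partially updated ...   */
      qemu_mutex_unlock_iothread();    /* ... then the global mutex is dropped */

      /* another vcpu (or the iothread) can now re-enter this handler and
       * observe or modify s while this invocation is still mid-update */

      qemu_mutex_lock_iothread();
      s->busy = false;

      return s->reg;
  }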

> Spice requires a careful review regarding such issues. Or it should
> pioneer introducing its own lock so that we can handle at least the
> related I/O activity of the VCPUs without holding the global mutex
> (but I bet it's not the simplest candidate for such a new scheme).

spice/qxl used to have its own locking scheme, but that didn't work out:
the spice server is threaded and calls back into qxl from spice thread
context, and some of those callbacks need access to qemu data structures
(the display surface) and thus lock protection that covers more than just
the spice subsystem.
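
A hedged sketch of why a spice-only lock would not be enough (the
callback shown is illustrative, not the real qxl interface): the callback
runs in a spice server thread, and the qemu display state it touches is
still protected only by the global mutex:

  /* illustration only: a spice worker-thread callback that needs qemu's
   * display state has to take the global mutex around the access */
  static void example_spice_display_cb(void *opaque)
  {
      DisplayState *ds = opaque;

      qemu_mutex_lock_iothread();
      dpy_update(ds, 0, 0, ds_get_width(ds), ds_get_height(ds));
      qemu_mutex_unlock_iothread();
  }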

I'll take another hard look at whether I can find a way out of this
(preferably by dropping the need for the global lock somehow). For now
I'm pretty busy with the email backlog though ...

cheers,
  Gerd


