From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
Date: Mon, 05 Mar 2012 10:44:31 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110928 Fedora/3.1.15-1.fc14 Lightning/1.0b3pre Thunderbird/3.1.15

On 03/04/2012 05:59 PM, Michael S. Tsirkin wrote:
On Fri, Mar 02, 2012 at 07:02:21AM -0500, Stefan Berger wrote:
Instrumenting the code with qemu_mutex_trylock() and a counter for failed
attempts (followed by a blocking qemu_mutex_lock()) shows practically no
lock contention at all, whether the Linux driver uses polling or IRQ mode:

IRQ mode: 4 failed lock attempts out of 1726208 locks ->  0.00%
polling mode: 2 failed lock attempts out of 1545216 locks ->  0.00%

I used the libtpms-based backend and ran IMA and a test suite in the VM.
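
The counting was along these lines (a minimal sketch only; the helper and
counter names, and the include path, are illustrative rather than the actual
instrumentation):

#include "qemu-thread.h"   /* QemuMutex, qemu_mutex_trylock()/qemu_mutex_lock() */

/* Illustrative counters: qemu_mutex_trylock() returns non-zero when the
 * lock is already held, in which case the contention is counted and we
 * fall back to a blocking qemu_mutex_lock(). */
static unsigned long lock_attempts;
static unsigned long lock_contended;

static void instrumented_lock(QemuMutex *lock)
{
    lock_attempts++;
    if (qemu_mutex_trylock(lock)) {
        lock_contended++;
        qemu_mutex_lock(lock);
    }
}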


     Stefan
so maybe you can just do qemu_mutex_lock_iothread similar
to what kvm does.


I have tried that now. I ended up having to remove the locking from the TIS except for one place where the result is delivered from the backend to the frontend via the thread. I had to remove the locks because the pthread library detects a deadlock, most likely due to an attempted double-lock of the global lock, which must not be locked more than once. That by itself is not the scary part, but here is the stack trace from while I still had the lock in there:

#0  0x00007ffff37e2215 in __GI_raise (sig=6)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x00007ffff37e3b2b in __GI_abort () at abort.c:93
#2  0x00005555555c1069 in error_exit (err=<optimized out>, msg=
    0x55555583f350 "qemu_mutex_lock") at qemu-thread-posix.c:25
#3  0x00005555556daf10 in qemu_mutex_lock (mutex=<optimized out>)
    at qemu-thread-posix.c:56
#4  0x00005555557a6d43 in tpm_tis_mmio_read (opaque=0x555556e06a40,
    addr=3840, size=<optimized out>)
    at /home/stefanb/qemu/qemu-git.pt/hw/tpm_tis.c:448
#5  0x000055555575e157 in memory_region_read_accessor (opaque=<optimized out>,
    addr=<optimized out>, value=0x7fffef596c60, size=<optimized out>, shift=0,
    mask=4294967295) at /home/stefanb/qemu/qemu-git.pt/memory.c:259
#6  0x000055555575e270 in access_with_adjusted_size (addr=3840, value=
    0x7fffef596c60, size=4, access_size_min=<optimized out>,
    access_size_max=<optimized out>, access=
    0x55555575e120 <memory_region_read_accessor>, opaque=0x555556e06af0)
    at /home/stefanb/qemu/qemu-git.pt/memory.c:304
#7 0x0000555555762b80 in memory_region_dispatch_read1 (size=4, addr=3840, mr=
    0x555556e06af0) at /home/stefanb/qemu/qemu-git.pt/memory.c:928
#8  memory_region_dispatch_read (size=4, addr=3840, mr=0x555556e06af0)
    at /home/stefanb/qemu/qemu-git.pt/memory.c:960
#9  io_mem_read (io_index=<optimized out>, addr=3840, size=4)
    at /home/stefanb/qemu/qemu-git.pt/memory.c:1558
#10 0x000055555573a876 in cpu_physical_memory_rw (addr=4275310336, buf=
    0x7ffff7ff4028 "", len=4, is_write=0)
    at /home/stefanb/qemu/qemu-git.pt/exec.c:3620
#11 0x0000555555755225 in kvm_cpu_exec (env=0x555556561100)
    at /home/stefanb/qemu/qemu-git.pt/kvm-all.c:1173
#12 0x0000555555731201 in qemu_kvm_cpu_thread_fn (arg=0x555556561100)
    at /home/stefanb/qemu/qemu-git.pt/cpus.c:732
#13 0x00007ffff64a0b41 in start_thread (arg=0x7fffef597700)
    at pthread_create.c:305
#14 0x00007ffff388c49d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

The frame may say qemu_mutex_lock, but it is really qemu_mutex_lock_iothread that is being called. The initial lock happens in kvm_cpu_exec (#11); the subsequent lock then occurs in tpm_tis_mmio_read (#4), far away from the first one. If something changes in the code in between, we may end up with a race in the TPM TIS driver once the local lock has been factored out. So personally I would prefer local locking, which can withstand changes in other parts of the code, unless one can be sure that the global lock is never going to go away and that every code path reaching the TIS driver functions that need the locking holds that global lock.
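
For what it's worth, getting an abort rather than a silent hang is consistent with the mutex being an error-checking one; the failure mode can be reproduced outside of QEMU with plain pthreads (standalone sketch, not QEMU code):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;
    int ret;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);           /* first acquisition succeeds */
    ret = pthread_mutex_lock(&lock);     /* second one from the same thread is refused */
    printf("second lock: %s\n", ret == EDEADLK ? "EDEADLK" : "unexpected");

    pthread_mutex_unlock(&lock);
    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

With a default mutex the second lock would simply hang; with the error-checking variant pthread_mutex_lock() returns EDEADLK, which qemu_mutex_lock() turns into the abort seen in frames #0-#3 above.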

    Stefan



