From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC v2 10/11] tcg: drop global lock during TCG code execution
Date: Wed, 25 May 2016 12:33:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.0


On 24/05/2016 23:28, Sergey Fedorov wrote:
> On 05/04/16 18:32, Alex Bennée wrote:
>> diff --git a/cpu-exec.c b/cpu-exec.c
>> index bd50fef..f558508 100644
>> --- a/cpu-exec.c
>> +++ b/cpu-exec.c
>> @@ -28,6 +28,7 @@
>>  #include "qemu/rcu.h"
>>  #include "exec/tb-hash.h"
>>  #include "exec/log.h"
>> +#include "qemu/main-loop.h"
>>  #if defined(TARGET_I386) && !defined(CONFIG_USER_ONLY)
>>  #include "hw/i386/apic.h"
>>  #endif
>> @@ -452,6 +453,8 @@ int cpu_exec(CPUState *cpu)
>>              for(;;) {
>>                  interrupt_request = cpu->interrupt_request;
>>                  if (unlikely(interrupt_request)) {
>> +                    qemu_mutex_lock_iothread();
>> +
>>                      if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
>>                          /* Mask out external interrupts for this step. */
>>                          interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
>> @@ -504,6 +507,11 @@ int cpu_exec(CPUState *cpu)
>>                             the program flow was changed */
>>                          next_tb = 0;
>>                      }
>> +
>> +                    if (qemu_mutex_iothread_locked()) {
>> +                        qemu_mutex_unlock_iothread();
>> +                    }
>> +
> 
> Why do we need to check "qemu_mutex_iothread_locked()" here?  The
> iothread lock is always held at this point, isn't it?

Correct.
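
Since the lock is taken unconditionally at the top of the
interrupt_request block, the unlock can be unconditional as well.  A
minimal sketch of the simplified exit path (illustrative only, not the
final patch):

    if (unlikely(interrupt_request)) {
        qemu_mutex_lock_iothread();
        /* ... service the interrupt, possibly reset next_tb ... */
        qemu_mutex_unlock_iothread();  /* always held here, no test needed */
    }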

>>                  }
>>                  if (unlikely(cpu->exit_request
>>                               || replay_has_interrupt())) {
> (snip)
>> diff --git a/cputlb.c b/cputlb.c
>> index 466663b..1412049 100644
>> --- a/cputlb.c
>> +++ b/cputlb.c
>> @@ -18,6 +18,7 @@
>>   */
>>  
>>  #include "qemu/osdep.h"
>> +#include "qemu/main-loop.h"
> 
> No need in this include.
> 
>>  #include "cpu.h"
>>  #include "exec/exec-all.h"
>>  #include "exec/memory.h"
>> diff --git a/exec.c b/exec.c
>> index c46c123..9e83673 100644
>> --- a/exec.c
>> +++ b/exec.c
> (snip)
>> @@ -2347,8 +2353,14 @@ static void io_mem_init(void)
>>      memory_region_init_io(&io_mem_rom, NULL, &unassigned_mem_ops, NULL,
>>                            NULL, UINT64_MAX);
>>      memory_region_init_io(&io_mem_unassigned, NULL, &unassigned_mem_ops, NULL,
>>                            NULL, UINT64_MAX);
>> +
>> +    /* io_mem_notdirty calls tb_invalidate_phys_page_fast,
>> +     * which must be called without the iothread mutex.
>> +     */
> 
> notdirty_mem_write() operates on the page dirty flags.  Is it safe to
> do so outside the iothread lock?

Yes, see commit 5f2cb94 ("memory: make
cpu_physical_memory_sync_dirty_bitmap() fully atomic", 2014-12-02) and
those before.
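
The dirty bitmap is updated with atomic read-modify-write operations,
so the flag updates need no mutex.  A self-contained C11 sketch of the
idea (not the actual QEMU code, which uses its own atomics wrappers):

    #include <stdatomic.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Set a page's dirty bit with an atomic OR; concurrent setters and
     * the sync/clear side all use atomic accesses, so no lock is
     * required around the update. */
    static void set_dirty_bit(_Atomic unsigned long *bitmap,
                              unsigned long page)
    {
        atomic_fetch_or(&bitmap[page / BITS_PER_LONG],
                        1UL << (page % BITS_PER_LONG));
    }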

>>      memory_region_init_io(&io_mem_notdirty, NULL, &notdirty_mem_ops, NULL,
>>                            NULL, UINT64_MAX);
>> +    memory_region_clear_global_locking(&io_mem_notdirty);
>> +
>>      memory_region_init_io(&io_mem_watch, NULL, &watch_mem_ops, NULL,
>>                            NULL, UINT64_MAX);
>>  }
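
Clearing the region's global-locking flag tells the memory core not to
take the BQL around this region's accessors.  A simplified sketch of
what the dispatch side does (modeled on prepare_mmio_access() in
exec.c; the real code tracks the lock with a release flag like this):

    bool locked = false;

    if (mr->global_locking) {
        qemu_mutex_lock_iothread();
        locked = true;
    }
    memory_region_dispatch_write(mr, addr, val, size, attrs);
    if (locked) {
        qemu_mutex_unlock_iothread();
    }
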
> (snip)
>> diff --git a/translate-all.c b/translate-all.c
>> index 935d24c..0c377bb 100644
>> --- a/translate-all.c
>> +++ b/translate-all.c
>> @@ -1318,6 +1318,7 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
>>   * this TB.
>>   *
>>   * Called with mmap_lock held for user-mode emulation
>> + * If called from generated code, iothread mutex must not be held.
> 
> What does that mean?  If the iothread mutex is not required by the
> function, why mention it here at all?

If this function could take the iothread mutex itself, that would cause
a deadlock.  I'm not sure whether it can, though.

Another possibility is that this function can longjmp out into
cpu_exec, in which case the iothread mutex would never be released.  I
think this is the more likely reason.
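
A sketch of that hazard (hypothetical, but this is the pattern the
landing site in cpu_exec() would have to handle): if code reached from
a TB takes the iothread mutex and then siglongjmp()s back to the
sigsetjmp point in cpu_exec(), the lock stays held unless the landing
site releases it:

    if (sigsetjmp(cpu->jmp_env, 0) != 0) {
        /* We got here via siglongjmp() from translated code or a
         * helper; drop the lock if the interrupted code held it. */
        if (qemu_mutex_iothread_locked()) {
            qemu_mutex_unlock_iothread();
        }
    }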

Thanks,

Paolo


