Re: [Qemu-devel] [RFC v2 05/11] tcg: protect TBContext with tb_lock.
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC v2 05/11] tcg: protect TBContext with tb_lock.
Date: Wed, 11 May 2016 14:52:52 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.1
Just a couple answers/remarks.
On 11/05/2016 14:45, Sergey Fedorov wrote:
>> diff --git a/exec.c b/exec.c
>> index 17f390e..c46c123 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -2111,6 +2111,9 @@ static void check_watchpoint(int offset, int len,
>> MemTxAttrs attrs, int flags)
>> continue;
>> }
>> cpu->watchpoint_hit = wp;
>> +
>> + /* Unlocked by cpu_loop_exit or cpu_resume_from_signal. */
>
> In fact, neither cpu_resume_from_signal() nor cpu_loop_exit() unlocks
> the lock by itself, it gets unlocked after sigsetjmp() returns via
> siglongjmp() back to cpu_exec(). So maybe it would be more clear to say
> something like "'tb_lock' gets unlocked after siglongjmp()"?
Yes, or "cpu_exec() unlocks tb_lock after cpu_loop_exit or
cpu_resume_from_signal". Something like that, anyway.
>> void tb_flush(CPUState *cpu)
>> {
>> #if defined(DEBUG_FLUSH)
>> @@ -1350,6 +1352,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t
>> start, tb_page_addr_t end,
>> /* we remove all the TBs in the range [start, end[ */
>> /* XXX: see if in some cases it could be faster to invalidate all
>> the code */
>> + tb_lock();
>
> Don't we need also protect a call to page_find() above? page_find()
> calls page_find_alloc() which is noted to be called with 'tb_lock' held.
Only if alloc=1; page_find calls it with alloc=0.
> However, it might depend on the way we treat 'mmap_lock' in system mode
> emulation.
It's just not there; generally speaking it's replaced with tb_lock.
>> @@ -1627,6 +1636,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
>> target_ulong pc, cs_base;
>> uint64_t flags;
>>
>> + tb_lock();
>
> We don't have to take 'tb_lock' for either tb_find_pc() or
> cpu_restore_state_from_tb(), because the lock does not protect from
> tb_flush() anyway. I think the lock should be taken just before the
> first call to tb_phys_invalidate() in this function.
Indeed, this dates back to when cpu_restore_state_from_tb did recompilation.
In general, I don't have a big problem with slightly bigger critical
sections than necessary, if they aren't in a hot path or they avoid
repeated lock-unlock.
Thanks,
Paolo