From: Alex Bennée
Subject: Re: [Qemu-devel] [PATCH] mttcg: Set jmp_env to handle exit from tb_gen_code
Date: Tue, 21 Feb 2017 15:04:06 +0000
User-agent: mu4e 0.9.19; emacs 25.2.4
Pranith Kumar <address@hidden> writes:
> Alex Bennée writes:
>
>> Pranith Kumar <address@hidden> writes:
>>
>>> tb_gen_code() can exit execution using cpu_loop_exit() when it cannot
>>> allocate new TBs. To handle this, we need to properly set the jmp_env
>>> pointer ahead of calling tb_gen_code().
>>>
>>> CC: Alex Bennée <address@hidden>
>>> CC: Richard Henderson <address@hidden>
>>> Signed-off-by: Pranith Kumar <address@hidden>
>>> ---
>>> cpu-exec.c | 23 +++++++++++------------
>>> 1 file changed, 11 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>> index 97d79612d9..4b70988b24 100644
>>> --- a/cpu-exec.c
>>> +++ b/cpu-exec.c
>>> @@ -236,23 +236,22 @@ static void cpu_exec_step(CPUState *cpu)
>>>
>>> cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>>> tb_lock();
>>> - tb = tb_gen_code(cpu, pc, cs_base, flags,
>>> - 1 | CF_NOCACHE | CF_IGNORE_ICOUNT);
>>> - tb->orig_tb = NULL;
>>> - tb_unlock();
>>> -
>>> - cc->cpu_exec_enter(cpu);
>>> -
>>
>> It occurs to me we are also diverging in our locking pattern from
>> tb_find which takes mmap_lock first. This is a NOP for system emulation
>> but needed for user-emulation (for which we can do cpu_exec_step but not
>> cpu_exec_nocache).
>
> Right. So we have to take mmap_lock() before calling
> tb_gen_code(). However, that lock is already released on the error path,
> before cpu_loop_exit() is called, when allocation of a new TB fails. The
> following is what I have after merging with the previous EXCP_ATOMIC
> handling patch.
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index a8e04bffbf..2bb3ba3672 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -228,6 +228,7 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
>
> static void cpu_exec_step(CPUState *cpu)
> {
> + CPUClass *cc = CPU_GET_CLASS(cpu);
> CPUArchState *env = (CPUArchState *)cpu->env_ptr;
> TranslationBlock *tb;
> target_ulong cs_base, pc;
> @@ -235,16 +236,24 @@ static void cpu_exec_step(CPUState *cpu)
>
> cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
> tb_lock();
> - tb = tb_gen_code(cpu, pc, cs_base, flags,
> - 1 | CF_NOCACHE | CF_IGNORE_ICOUNT);
> - tb->orig_tb = NULL;
> - tb_unlock();
> - /* execute the generated code */
> - trace_exec_tb_nocache(tb, pc);
> - cpu_tb_exec(cpu, tb);
> - tb_lock();
> - tb_phys_invalidate(tb, -1);
> - tb_free(tb);
> + if (sigsetjmp(cpu->jmp_env, 0) == 0) {
> + mmap_lock();
That gets the locking order the wrong way around - I'm wary of that.
> + tb = tb_gen_code(cpu, pc, cs_base, flags,
> + 1 | CF_NOCACHE | CF_IGNORE_ICOUNT);
> + tb->orig_tb = NULL;
> + mmap_unlock();
> + tb_unlock();
> +
> + cc->cpu_exec_enter(cpu);
> + /* execute the generated code */
> + trace_exec_tb_nocache(tb, pc);
> + cpu_tb_exec(cpu, tb);
> + cc->cpu_exec_exit(cpu);
> +
> + tb_lock();
> + tb_phys_invalidate(tb, -1);
> + tb_free(tb);
> + }
> tb_unlock();
> }
>
> diff --git a/cpus.c b/cpus.c
> index 77bba08f9a..b39408b4b1 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1347,6 +1347,11 @@ static void *qemu_tcg_rr_cpu_thread_fn(void *arg)
> if (r == EXCP_DEBUG) {
> cpu_handle_guest_debug(cpu);
> break;
> + } else if (r == EXCP_ATOMIC) {
> + qemu_mutex_unlock_iothread();
> + cpu_exec_step_atomic(cpu);
> + qemu_mutex_lock_iothread();
> + break;
> }
> } else if (cpu->stop) {
> if (cpu->unplug) {
> @@ -1457,6 +1462,10 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
> */
> g_assert(cpu->halted);
> break;
> + case EXCP_ATOMIC:
> + qemu_mutex_unlock_iothread();
> + cpu_exec_step_atomic(cpu);
> + qemu_mutex_lock_iothread();
> default:
> /* Ignore everything else? */
> break;
--
Alex Bennée