From: Sergey Fedorov
Subject: Re: [Qemu-devel] [RFC v3 19/19] cpu-exec: remove tb_lock from the hot-path
Date: Thu, 30 Jun 2016 12:24:24 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 29/06/16 19:08, Alex Bennée wrote:
> Sergey Fedorov <address@hidden> writes:
>
>> On 29/06/16 17:47, Alex Bennée wrote:
>>> Sergey Fedorov <address@hidden> writes:
>>>
>>>> On 03/06/16 23:40, Alex Bennée wrote:
(snip)
>>>>>
>>>>>  static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>> -                                             TranslationBlock **last_tb,
>>>>> +                                             TranslationBlock **ltbp,
>>>> I'm not sure if it is more readable...
>>> I'll revert. I was trying to keep line lengths short :-/
>>>
>>>>>                                               int tb_exit)
>>>>>  {
>>>>>      CPUArchState *env = (CPUArchState *)cpu->env_ptr;
>>>>> -    TranslationBlock *tb;
>>>>> +    TranslationBlock *tb, *last_tb;
>>>>>      target_ulong cs_base, pc;
>>>>>      uint32_t flags;
>>>>>
>>>>> @@ -340,7 +337,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>>         always be the same before a given translated block
>>>>>         is executed. */
>>>>>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>>>>> -    tb_lock();
>>>>>      tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
>>>>>      if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
>>>>>                   tb->flags != flags)) {
>>>>> @@ -350,7 +346,7 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>>          /* Ensure that no TB jump will be modified as the
>>>>>           * translation buffer has been flushed.
>>>>>           */
>>>>> -        *last_tb = NULL;
>>>>> +        *ltbp = NULL;
>>>>>          cpu->tb_flushed = false;
>>>>>      }
>>>>>  #ifndef CONFIG_USER_ONLY
>>>>> @@ -359,14 +355,19 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>>       * spanning two pages because the mapping for the second page can change.
>>>>>       */
>>>>>      if (tb->page_addr[1] != -1) {
>>>>> -        *last_tb = NULL;
>>>>> +        *ltbp = NULL;
>>>>>      }
>>>>>  #endif
>>>>> +
>>>>>      /* See if we can patch the calling TB. */
>>>>> -    if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>>>> -        tb_add_jump(*last_tb, tb_exit, tb);
>>>>> +    last_tb = *ltbp;
>>>>> +    if (!qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN) &&
>>>>> +        last_tb &&
>>>>> +        !last_tb->jmp_list_next[tb_exit]) {
>>>> If we're going to check this outside of tb_lock we have to do it with
>>>> atomic_{read,set}(). However, I think racing on tb_add_jump() is a rare
>>>> case, so it is probably okay to do the check under tb_lock.
>>> It's checking for NULL, and it gets re-checked in tb_add_jump() while
>>> under the lock, so I think the setting race should be OK.
>> I think we could just skip this check and be fine. What do you think
>> regarding this?
> I think that means we'll take the lock more frequently than we want to.
> I'll have to check.

If the two blocks have already been chained then we don't get here;
otherwise the chances that some other thread has just done the chaining
of the blocks are slim, I think.

Regards,
Sergey

>>>>> +        tb_lock();
>>>>> +        tb_add_jump(last_tb, tb_exit, tb);
>>>>> +        tb_unlock();
>>>>>      }
>>>>> -    tb_unlock();
>>>>>      return tb;
>>>>>  }
>>>>>
>>>>
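For reference, here is a minimal standalone sketch of the check-then-lock
pattern under discussion, using C11 atomics and a pthread mutex in place of
QEMU's atomic_read()/atomic_set() and tb_lock()/tb_unlock(). The names Block,
jmp_next, patch_jump and maybe_chain are invented for illustration and are not
QEMU APIs:

/*
 * Standalone model of the "check-then-lock" pattern: a racy (but safe)
 * NULL pre-check done with an atomic load, and the authoritative
 * re-check done under the lock before patching the jump.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct Block {
    struct Block *_Atomic jmp_next;  /* NULL until the block is chained */
} Block;

static pthread_mutex_t chain_lock = PTHREAD_MUTEX_INITIALIZER;

static void patch_jump(Block *from, Block *to)
{
    /* Re-check under the lock: another thread may have chained first. */
    if (atomic_load_explicit(&from->jmp_next, memory_order_relaxed) == NULL) {
        atomic_store_explicit(&from->jmp_next, to, memory_order_release);
    }
}

static void maybe_chain(Block *last, Block *next)
{
    /*
     * Lock-free pre-check: if the block is already chained we skip the
     * lock entirely (the common case once execution has warmed up).
     * Losing a race here only means we take the lock and find the work
     * already done, which is harmless.
     */
    if (last && atomic_load_explicit(&last->jmp_next,
                                     memory_order_acquire) == NULL) {
        pthread_mutex_lock(&chain_lock);
        patch_jump(last, next);
        pthread_mutex_unlock(&chain_lock);
    }
}

int main(void)
{
    Block a = { .jmp_next = NULL }, b = { .jmp_next = NULL };

    maybe_chain(&a, &b);   /* takes the lock and patches a -> b        */
    maybe_chain(&a, &b);   /* pre-check hits, the lock is skipped      */
    printf("a.jmp_next == &b: %d\n", atomic_load(&a.jmp_next) == &b);
    return 0;
}

The point of the pre-check is that losing the race is harmless: at worst a
thread takes the lock and finds the jump already patched, which matches the
"rare case" argument above.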



