Re: [Qemu-devel] [RFC 02/10] softmmu_llsc_template.h: Move to multi-threading


From: alvise rigo
Subject: Re: [Qemu-devel] [RFC 02/10] softmmu_llsc_template.h: Move to multi-threading
Date: Fri, 10 Jun 2016 18:04:42 +0200

That would require the whole history to be filled again, which I find
very unlikely. In any case, this has to be documented.

Thank you,
alvise

On Fri, Jun 10, 2016 at 6:00 PM, Sergey Fedorov <address@hidden> wrote:
> On 10/06/16 18:53, alvise rigo wrote:
>> On Fri, Jun 10, 2016 at 5:21 PM, Sergey Fedorov <address@hidden> wrote:
>>> On 26/05/16 19:35, Alvise Rigo wrote:
>>>> Using tcg_exclusive_{lock,unlock}(), make the emulation of
>>>> LoadLink/StoreConditional thread safe.
>>>>
>>>> During an LL access, this lock protects the load access itself, the
>>>> update of the exclusive history and the update of the VCPU's protected
>>>> range.  In an SC access, the lock protects the store access itself, the
>>>> possible reset of the other VCPUs' protected ranges and the reset of the
>>>> exclusive context of the calling VCPU.
>>>>
>>>> The lock is also taken when a normal store happens to access an
>>>> exclusive page, in order to reset the other VCPUs' protected ranges in
>>>> case of collision.
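>>>
>>> To make the description above concrete, the same protocol can be modelled
>>> outside QEMU like this (a minimal, self-contained pthread sketch with
>>> placeholder names, not the patch code; the single-threaded driver in
>>> main() just shows the effect of a colliding store):
>>>
>>>     #include <pthread.h>
>>>     #include <stdbool.h>
>>>     #include <stdint.h>
>>>     #include <stdio.h>
>>>
>>>     #define NR_CPUS 4
>>>     #define NO_EXCL ((uint64_t)-1)
>>>
>>>     static pthread_mutex_t excl_lock = PTHREAD_MUTEX_INITIALIZER;
>>>     static uint64_t excl_start[NR_CPUS] = { NO_EXCL, NO_EXCL, NO_EXCL, NO_EXCL };
>>>     static uint32_t mem[16];
>>>
>>>     /* LL: the load and the update of this vCPU's protected range (and, in
>>>      * the patch, of the exclusive history) happen under the lock. */
>>>     static uint32_t ll(int cpu, uint64_t addr)
>>>     {
>>>         pthread_mutex_lock(&excl_lock);
>>>         uint32_t val = mem[addr];
>>>         excl_start[cpu] = addr;
>>>         pthread_mutex_unlock(&excl_lock);
>>>         return val;
>>>     }
>>>
>>>     /* SC: the store, the reset of the other vCPUs' colliding ranges and
>>>      * the reset of our own exclusive context are covered by the same lock. */
>>>     static bool sc(int cpu, uint64_t addr, uint32_t val)
>>>     {
>>>         pthread_mutex_lock(&excl_lock);
>>>         bool ok = (excl_start[cpu] == addr);
>>>         if (ok) {
>>>             mem[addr] = val;
>>>             for (int i = 0; i < NR_CPUS; i++) {
>>>                 if (i != cpu && excl_start[i] == addr) {
>>>                     excl_start[i] = NO_EXCL;
>>>                 }
>>>             }
>>>         }
>>>         excl_start[cpu] = NO_EXCL;
>>>         pthread_mutex_unlock(&excl_lock);
>>>         return ok;
>>>     }
>>>
>>>     /* Normal store hitting an exclusive page: reset colliding ranges. */
>>>     static void store(int cpu, uint64_t addr, uint32_t val)
>>>     {
>>>         pthread_mutex_lock(&excl_lock);
>>>         mem[addr] = val;
>>>         for (int i = 0; i < NR_CPUS; i++) {
>>>             if (i != cpu && excl_start[i] == addr) {
>>>                 excl_start[i] = NO_EXCL;
>>>             }
>>>         }
>>>         pthread_mutex_unlock(&excl_lock);
>>>     }
>>>
>>>     int main(void)
>>>     {
>>>         uint32_t old = ll(0, 3);   /* vCPU 0 starts an LL/SC pair       */
>>>         store(1, 3, 7);            /* vCPU 1 collides: range gets reset */
>>>         printf("SC %s\n", sc(0, 3, old + 1) ? "succeeded" : "failed");
>>>         return 0;
>>>     }
>>>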
>>> I think the key problem here is that the load in the LL helper can race with
>>> a concurrent regular fast-path store. It's probably easier to annotate
>>> the source here:
>>>
>>>      1  WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
>>>      2                                  TCGMemOpIdx oi, uintptr_t retaddr)
>>>      3  {
>>>      4      WORD_TYPE ret;
>>>      5      int index;
>>>      6      CPUState *this_cpu = ENV_GET_CPU(env);
>>>      7      CPUClass *cc = CPU_GET_CLASS(this_cpu);
>>>      8      hwaddr hw_addr;
>>>      9      unsigned mmu_idx = get_mmuidx(oi);
>>>
>>>     10      index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
>>>
>>>     11      tcg_exclusive_lock();
>>>
>>>     12      /* Use the proper load helper from cpu_ldst.h */
>>>     13      ret = helper_ld(env, addr, oi, retaddr);
>>>
>>>     14      /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
>>>     15       * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
>>>     16      hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
>>>     17      if (likely(!(env->tlb_table[mmu_idx][index].addr_read & TLB_MMIO))) {
>>>     18          /* If all the vCPUs have the EXCL bit set for this page there is no need
>>>     19           * to request any flush. */
>>>     20          if (cpu_physical_memory_set_excl(hw_addr)) {
>>>     21              CPUState *cpu;
>>>
>>>     22              excl_history_put_addr(hw_addr);
>>>     23              CPU_FOREACH(cpu) {
>>>     24                  if (this_cpu != cpu) {
>>>     25                      tlb_flush_other(this_cpu, cpu, 1);
>>>     26                  }
>>>     27              }
>>>     28          }
>>>     29          /* For this vCPU, just update the TLB entry, no need to flush. */
>>>     30          env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
>>>     31      } else {
>>>     32          /* Set a pending exclusive access in the MemoryRegion */
>>>     33          MemoryRegion *mr = iotlb_to_region(this_cpu,
>>>     34                                             env->iotlb[mmu_idx][index].addr,
>>>     35                                             env->iotlb[mmu_idx][index].attrs);
>>>     36          mr->pending_excl_access = true;
>>>     37      }
>>>
>>>     38      cc->cpu_set_excl_protected_range(this_cpu, hw_addr, DATA_SIZE);
>>>
>>>     39      tcg_exclusive_unlock();
>>>
>>>     40      /* From now on we are in LL/SC context */
>>>     41      this_cpu->ll_sc_context = true;
>>>
>>>     42      return ret;
>>>     43  }
>>>
>>>
>>> The exclusive lock at line 11 doesn't help if a concurrent fast-path
>>> store to this address occurs after we have finished the load at line 13
>>> but before the TLB is flushed as a result of line 25. If we reorder the
>>> load to happen after the TLB flush request, we still must be sure that
>>> the flush is complete before we can do the load safely.
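>>>
>>> Roughly, with that reordering the helper body would have to become the
>>> sketch below (reusing the locals from the listing above;
>>> wait_for_flushes_to_complete() is a made-up placeholder for whatever
>>> mechanism guarantees that the other vCPUs have drained the flush, and
>>> whether we may wait for that while holding the lock is a separate
>>> question):
>>>
>>>         tcg_exclusive_lock();
>>>         hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
>>>         if (cpu_physical_memory_set_excl(hw_addr)) {
>>>             excl_history_put_addr(hw_addr);
>>>             CPU_FOREACH(cpu) {
>>>                 if (this_cpu != cpu) {
>>>                     tlb_flush_other(this_cpu, cpu, 1);
>>>                 }
>>>             }
>>>             wait_for_flushes_to_complete();          /* hypothetical */
>>>         }
>>>         env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
>>>         ret = helper_ld(env, addr, oi, retaddr);     /* load only now */
>>>         cc->cpu_set_excl_protected_range(this_cpu, hw_addr, DATA_SIZE);
>>>         tcg_exclusive_unlock();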
>> You are right, the risk actually exists. One solution to the problem
>> could be to ignore the data acquired by the load and redo the LL after
>> the flushes have been completed (basically, disas_ctx->pc is left
>> pointing at the LL instruction). This time the LL will happen without
>> flush requests and the access will actually be protected by the lock.
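>>
>> A rough sketch of that control flow, reusing the names from the listing
>> above (not the actual patch: cpu_loop_exit_restore(), which restores the
>> guest state for the current instruction and leaves the TB so that it gets
>> re-executed, stands in here for the disas_ctx->pc trick just to make the
>> idea concrete):
>>
>>         if (cpu_physical_memory_set_excl(hw_addr)) {
>>             CPUState *cpu;
>>
>>             excl_history_put_addr(hw_addr);
>>             CPU_FOREACH(cpu) {
>>                 if (this_cpu != cpu) {
>>                     tlb_flush_other(this_cpu, cpu, 1);
>>                 }
>>             }
>>             /* Discard 'ret' and restart the guest LL instruction: on the
>>              * second pass the page is already marked exclusive, so no
>>              * flush is requested and the load really happens under the
>>              * lock. */
>>             tcg_exclusive_unlock();
>>             cpu_loop_exit_restore(this_cpu, retaddr);
>>         }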
>
> Yes, as long as some other CPU doesn't evict an entry with the same
> address from the exclusive history...
>
> Kind regards,
> Sergey


