From: Auger Eric
Subject: Re: [Qemu-devel] [PATCH 17/27] exec.c: Handle IOMMUs in address_space_translate_for_iotlb()
Date: Fri, 25 May 2018 11:50:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Peter,

On 05/25/2018 10:52 AM, Peter Maydell wrote:
> On 24 May 2018 at 20:54, Auger Eric <address@hidden> wrote:
>> Hi Peter,
>>
>> On 05/23/2018 11:51 AM, Alex Bennée wrote:
>>>
>>> Peter Maydell <address@hidden> writes:
>>>
>>>> Currently we don't support board configurations that put an IOMMU
>>>> in the path of the CPU's memory transactions, and instead just
>>>> assert() if the memory region fonud in address_space_translate_for_iotlb()
>> found
>>>> is an IOMMUMemoryRegion.
>>>>
>>>> Remove this limitation by having the function handle IOMMUs.
>>>> This is mostly straightforward, but we must make sure we have
>>>> a notifier registered for every IOMMU that a transaction has
>>>> passed through, so that we can flush the TLB appropriately
>> Can you elaborate on what (TCG) TLB we are talking about?
> 
> The TCG TLB, as implemented in accel/tcg/cputlb.c. Basically
> the thing that caches the results it gets back from the memory
> system so it can fast path device and memory accesses.
> 
>> The concept of IOMMUs downstream of a CPU is not obvious to me. Maybe an
>> example could be added to the commit message?
> 
> The MPC implemented in this patchset is an example.
> 
> 
> 
>>>> +static void tcg_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
>>>> +{
>>>> +    TCGIOMMUNotifier *notifier = container_of(n, TCGIOMMUNotifier, n);
>>>> +
>>>> +    if (!notifier->active) {
>>>> +        return;
>>>> +    }
>>>> +    tlb_flush(notifier->cpu);
>>>> +    notifier->active = false;
>>>> +    /* We leave the notifier struct on the list to avoid reallocating it later.
>>>> +     * Generally the number of IOMMUs a CPU deals with will be small.
>>>> +     * In any case we can't unregister the iommu notifier from a notify
>>>> +     * callback.
>>>> +     */
>> I don't get the life cycle of the notifier, or why it becomes inactive
>> after the invalidate. Could you explain the specifics of this one?
> 
> Once we've flushed the TLB it is empty and will have no cached
> information from the IOMMU. So there's no point in flushing the
> TLB again (which is expensive) until the next time a transaction
> goes through the IOMMU and we're caching something from it.
Ah OK. Is there no finer granularity for the TLB flush?

> 
> So the cycle goes:
>  * CPU makes transaction that goes through an IOMMU
>  * in tcg_register_iommu_notifier() we register the notifier
>    if we haven't already, and make sure it's got active = true
>  * in the unmap notify, we flush the whole TLB for the CPU, and
>    set active = false
>  * repeat...
OK, thank you for the explanation.
> 
> 
>>>> +static void tcg_iommu_notifier_destroy(gpointer data)
>>>> +{
>>>> +    TCGIOMMUNotifier *notifier = data;
>>>> +
>>>> +    if (notifier->active) {
>>>> +        memory_region_unregister_iommu_notifier(notifier->mr, &notifier->n);
>>>> +    }
>> Is it safe to leave the notifier registered to an IOMMU while the
>> notifier struct gets freed?
> 
> Oh, this is a bug, left over from my first idea (which was to
> unregister the IOMMU notifier in the notifier unmap callback,
> in which case active == true would be the only case when we
> had a registered notifier).
> 
> We should unconditionally unregister the notifier here.
> 
> 
>>>>  /* Called from RCU critical section */
>>>>  MemoryRegionSection *
>>>>  address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
>>>> -                                  hwaddr *xlat, hwaddr *plen)
>>>> +                                  hwaddr *xlat, hwaddr *plen,
>>>> +                                  MemTxAttrs attrs, int *prot)
>>>>  {
>>>>      MemoryRegionSection *section;
>>>> +    IOMMUMemoryRegion *iommu_mr;
>>>> +    IOMMUMemoryRegionClass *imrc;
>>>> +    IOMMUTLBEntry iotlb;
>>>> +    int iommu_idx;
>>>>      AddressSpaceDispatch *d = atomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
>>>>
>>>> -    section = address_space_translate_internal(d, addr, xlat, plen, false);
>>>> +    for (;;) {
>>>> +        section = address_space_translate_internal(d, addr, &addr, plen, false);
>>>> +
>>>> +        iommu_mr = memory_region_get_iommu(section->mr);
>>>> +        if (!iommu_mr) {
>>>> +            break;
>>>> +        }
>>>> +
>>>> +        imrc = memory_region_get_iommu_class_nocheck(iommu_mr);
>>>> +
>>>> +        iommu_idx = imrc->attrs_to_index(iommu_mr, attrs);
>>>> +        tcg_register_iommu_notifier(cpu, iommu_mr, iommu_idx);
>>>> +        /* We need all the permissions, so pass IOMMU_NONE so the IOMMU
>>>> +         * doesn't short-cut its translation table walk.
>> It is not clear to me why you don't use the access flag, since you seem
>> to handle the permission fault after the translate() call.
> 
> We need to know all the permissions (because we'll cache the result
> in the TCG TLB and later use them for future read and write accesses),
> so we pass IOMMU_NONE.
> 
> My understanding from previous discussion is that the only
> reason to pass in some other access flag value is if you
> only care about one of read or write and want to allow the
> IOMMU to stop walking the page table early as soon as it decides
> it doesn't have permissions.

Agreed. So you need to fetch the whole set of page-table permissions to
update the TLB. By the way, where is the TLB updated?

Thanks

Eric
> 
> thanks
> -- PMM
> 


