From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC 4/5] exec: allow to get a pointer for some mmio memory region
Date: Sat, 4 Feb 2017 04:41:36 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1


On 03/02/2017 13:09, Frederic Konrad wrote:
> On 02/03/2017 06:26 PM, Paolo Bonzini wrote:
>>
>>
>> On 03/02/2017 09:06, address@hidden wrote:
>>> +    host = mr->ops->request_ptr(mr->opaque, addr - mr->addr, &size, &offset);
>>> +
>>> +    if (!host || !size) {
>>> +        memory_region_transaction_commit();
>>> +        return false;
>>> +    }
>>> +
>>> +    sub = g_new(MemoryRegion, 1);
>>> +    memory_region_init_ram_ptr(sub, OBJECT(mr), "mmio-map", size, host);
>>> +    memory_region_add_subregion(mr, offset, sub);
>>> +    memory_region_transaction_commit();
>>> +    return true;
>>> +}
>>> +
>>> +void memory_region_invalidate_mmio_ptr(MemoryRegion *mr, hwaddr offset,
>>> +                                       unsigned size)
>>> +{
>>> +    MemoryRegionSection section = memory_region_find(mr, offset, size);
>>> +
>>> +    if (section.mr != mr) {
>>> +        memory_region_del_subregion(mr, section.mr);
>>> +        /* memory_region_find adds a ref on section.mr */
>>> +        memory_region_unref(section.mr);
>>> +        object_unparent(OBJECT(section.mr));
>>
>> I think this would cause a use-after-free when using MTTCG.  In general,
>> creating and dropping MemoryRegions dynamically can cause bugs that are
>> nondeterministic and hard to fix without rewriting everything.
> 
> Hi Paolo,
> 
> Thanks for your comment.
> Yes, I read in the docs that dynamically dropping MemoryRegions is badly
> broken when we use NULL as an owner, because the machine then owns them.
> But nothing seems to say this is also the case when there is an owner.
> 
> But I think I see your point here:
>   * memory_region_unref will unref the owner.
>   * object_unparent will unref the memory region (whose refcount should be 1).
>   => the region will be dropped immediately.
> 
> Doesn't hotplug use dynamic MemoryRegions? In that case we had better
> make that work with MTTCG. I wonder if we can't simply handle this
> case with a safe_work item?

Hot-unplug works because the backing memory is only freed when the
device's instance_finalize runs, so at that point the region cannot have
any references left.

What can go wrong is the following (from the docs):

- the memory region's owner had a reference taken via memory_region_ref
  (for example by address_space_map)

- the region is unparented, and has no owner anymore

- when address_space_unmap is called, the reference to the memory
  region's owner is leaked.

In your case you have phys_section_add/phys_section_destroy instead of
address_space_map/unmap, but the problem is the same.
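
To make that sequence concrete, here is a minimal sketch of the map/unmap
variant, using only existing API calls (the AddressSpace, region and length
are placeholders, error handling is omitted, and it merely illustrates the
ordering, not real device code):

#include "qemu/osdep.h"
#include "exec/memory.h"

static void leak_sketch(AddressSpace *as, MemoryRegion *mr, hwaddr addr)
{
    hwaddr len = 4096;
    void *buf;

    /* 1. Mapping takes a reference on the owner of the region backing
     *    [addr, addr + len) via memory_region_ref().
     */
    buf = address_space_map(as, addr, &len, false);

    /* 2. The region is unparented and no longer has an owner. */
    object_unparent(OBJECT(mr));

    /* 3. Per the docs above, the reference taken on the owner in step 1
     *    can no longer be dropped here, so it is leaked.
     */
    address_space_unmap(as, buf, len, false, len);
}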

I am a bit unsure about using the same lqspi_buf for caching different
addresses, and about using the same buffer for MMIO execution and read.
What if you allocate a different buffer every time
lqspi_request_mmio_ptr is called (adding a char* argument to
lqspi_load_cache) and free the old one from the safe_work item, after a
global TLB flush?  Then you don't need the Notifiers which were the
complicated and handwavy part of my proposal.
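
Roughly something like the following; it is only a sketch, the LQSPICache
and lqspi_* names are made up, and only g_malloc/g_free,
async_safe_run_on_cpu and tlb_flush are real APIs:

#include "qemu/osdep.h"
#include "qom/cpu.h"          /* CPUState, async_safe_run_on_cpu, CPU_FOREACH */
#include "exec/exec-all.h"    /* tlb_flush */

typedef struct LQSPICache {
    uint8_t *buf;             /* buffer currently mapped for MMIO execution */
} LQSPICache;

static void lqspi_free_old_buf(CPUState *cpu, run_on_cpu_data data)
{
    CPUState *cs;

    /* Safe work runs while every vCPU is outside its execution loop, so no
     * translated code is running out of the old buffer right now.  Flush
     * the TLBs so the old mapping cannot be reached again; note that with
     * MTTCG the flushes of the other vCPUs are deferred, so a synchronized
     * flush (or extra care about ordering with the free) is needed in real
     * code.
     */
    CPU_FOREACH(cs) {
        tlb_flush(cs);
    }
    g_free(data.host_ptr);
}

static uint8_t *lqspi_swap_buf(LQSPICache *c, size_t size)
{
    uint8_t *old = c->buf;

    c->buf = g_malloc(size);  /* fresh buffer for this request */
    if (old) {
        /* Defer freeing the previous buffer to a safe-work item. */
        async_safe_run_on_cpu(first_cpu, lqspi_free_old_buf,
                              RUN_ON_CPU_HOST_PTR(old));
    }
    return c->buf;
}

lqspi_request_mmio_ptr would then fill the fresh buffer (via the extra
char * argument to lqspi_load_cache) and hand it to
memory_region_init_ram_ptr as in the patch.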

Paolo

> 
> BTW the tests I have seem to pass without issues.
> 
>>
>> An alternative design could be:
>>
>> - memory_region_request_mmio_ptr returns a MemoryRegionCache instead of
>> a pointer, so that the device can map a subset of the device (e.g. a
>> single page)
> 
> I'm not aware of this MemoryRegionCache yet; it seems pretty new.
> I'll take a look.
> 
> Thanks,
> Fred
> 
>>
>> - memory_region_request_mmio_ptr and MemoryRegionOps.request_ptr accept
>> a Notifier
>>
>> - the device adds the Notifier to a NotifierList.  Before invalidating,
>> it invokes the Notifier and empties the NotifierList.
>>
>> - for the TLB case, the Notifier calls tlb_flush_page.
>>
>> I like the general idea though!
>>
>> Paolo
>>
>>> +    }
>>> +}
> 
> 
> 
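
For what it's worth, a minimal sketch of the Notifier-based variant quoted
above, using the generic Notifier/NotifierList helpers; the MMIOPtrState
type and the mmio_ptr_* functions are made up for illustration:

#include "qemu/osdep.h"
#include "qemu/notify.h"

typedef struct MMIOPtrState {
    NotifierList users;       /* everyone who obtained the pointer */
} MMIOPtrState;

static void mmio_ptr_init(MMIOPtrState *s)
{
    notifier_list_init(&s->users);
}

/* request_ptr() path: remember whom to tell when the pointer goes away. */
static void mmio_ptr_add_user(MMIOPtrState *s, Notifier *n)
{
    notifier_list_add(&s->users, n);
}

/* Device side, before invalidating: invoke every registered Notifier
 * (for the TLB case the callback would call tlb_flush_page()), then
 * empty the list.
 */
static void mmio_ptr_invalidate(MMIOPtrState *s)
{
    notifier_list_notify(&s->users, NULL);
    notifier_list_init(&s->users);
}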


