From: Peter Maydell
Subject: Re: [Qemu-arm] [PATCH 03/16] tlb_set_page_with_attrs: Take argument specifying AddressSpace to use
Date: Fri, 6 Nov 2015 13:41:54 +0000

On 6 November 2015 at 13:27, Edgar E. Iglesias <address@hidden> wrote:
> On Thu, Nov 05, 2015 at 06:15:45PM +0000, Peter Maydell wrote:
>> Add an argument to tlb_set_page_with_attrs which allows the target CPU code
>> to tell the core code which AddressSpace to use.
>>
>> The AddressSpace is specified by the index into the array of ASes which
>> were registered with cpu_address_space_init().
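
(A rough sketch of the indexing convention described above; the index names, the helper function, and the exact argument order of cpu_address_space_init() are illustrative assumptions, not code taken from this series:)

    /* Hypothetical target code, assuming QEMU's usual CPUState and
     * AddressSpace types from the exec/ headers.  The asidx later handed
     * to the core code is simply the position at which each AddressSpace
     * was registered.
     */
    #define MYCPU_ASIDX_NS 0   /* illustrative index names */
    #define MYCPU_ASIDX_S  1

    static void mycpu_register_ases(CPUState *cs, AddressSpace *as_ns,
                                    AddressSpace *as_s)
    {
        /* Argument order of cpu_address_space_init() assumed here. */
        cpu_address_space_init(cs, as_ns, MYCPU_ASIDX_NS);
        cpu_address_space_init(cs, as_s, MYCPU_ASIDX_S);
    }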

>> --- a/exec.c
>> +++ b/exec.c
>> @@ -445,12 +445,13 @@ MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
>>
>>  /* Called from RCU critical section */
>>  MemoryRegionSection *
>> -address_space_translate_for_iotlb(CPUState *cpu, hwaddr addr,
>> +address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
>>                                    hwaddr *xlat, hwaddr *plen)
>
> Does it make sense to replace the CPUState argument with an AddressSpace *
> and have the callers do the cpu->cpu_ases[asidx]?
> It would be more consistent and eventually maybe eliminate the need for
> address_space_translate_for_iotlb in favor of calling address_space_translate
> directly?

We can't accept an arbitrary AddressSpace, it has to be one which is
embedded in a CPUAddressSpace and which we can thus find the
memory_dispatch for. So you could pass a CPUAddressSpace*, but not
an AddressSpace*. But to pass a CPUAddressSpace we would have to
expose the currently-private-to-exec.c layout of the CPUAddressSpace
struct. I chose not to do that (and you can see the results elsewhere
in the patch series, like the function that's basically just "do
the cpu_ases array lookup for me"); there's an argument for making
the structure more widely available to avoid some of that.
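
(For reference, a rough sketch of the arrangement in question; the field layout and the lookup body below are illustrative, not the exact code in the series:)

    /* Private to exec.c: this wrapper is why a bare AddressSpace* is not
     * enough; the core code needs the cached memory_dispatch that only
     * the CPUAddressSpace carries.  Field layout shown is illustrative.
     */
    typedef struct CPUAddressSpace {
        CPUState *cpu;
        AddressSpace *as;
        struct AddressSpaceDispatch *memory_dispatch;
        MemoryListener tcg_as_listener;
    } CPUAddressSpace;

    /* Illustrative lookup: an asidx is cheap for callers to pass and lets
     * exec.c find both the AS and its dispatch table.  Real code runs in
     * an RCU critical section and would read memory_dispatch accordingly
     * (e.g. via atomic_rcu_read()).
     */
    MemoryRegionSection *
    address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
                                      hwaddr *xlat, hwaddr *plen)
    {
        CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];

        return address_space_translate_internal(cpuas->memory_dispatch,
                                                 addr, xlat, plen, false);
    }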

thanks
-- PMM


