qemu-devel

Re: [Qemu-devel] [PATCH 05/11 v10] Add API to get memory mapping


From: HATAYAMA Daisuke
Subject: Re: [Qemu-devel] [PATCH 05/11 v10] Add API to get memory mapping
Date: Tue, 27 Mar 2012 10:01:07 +0900

From: Wen Congyang <address@hidden>
Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
Date: Mon, 26 Mar 2012 10:44:40 +0800

> At 03/26/2012 10:31 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang <address@hidden>
>> Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
>> Date: Mon, 26 Mar 2012 09:10:52 +0800
>> 
>>> At 03/23/2012 08:02 PM, HATAYAMA Daisuke Wrote:
>>>> From: Wen Congyang <address@hidden>
>>>> Subject: [PATCH 05/11 v10] Add API to get memory mapping
>>>> Date: Tue, 20 Mar 2012 11:51:18 +0800
>>>>
>>>>> Add API to get all virtual address and physical address mappings.
>>>>> If the guest doesn't use paging, the virtual address is equal to the
>>>>> physical address. The virtual-to-physical mapping is for gdb's user,
>>>>> and it does not include memory that is not referenced by the page
>>>>> table. So if you want to use crash to analyze the vmcore, please do
>>>>> not specify the -p option. The reason the -p option is not the
>>>>> default: a guest machine in a catastrophic state can have corrupted
>>>>> memory, which we cannot trust.
>>>>>
>>>>> Signed-off-by: Wen Congyang <address@hidden>
>>>>> ---
>>>>>  memory_mapping.c |   34 ++++++++++++++++++++++++++++++++++
>>>>>  memory_mapping.h |   15 +++++++++++++++
>>>>>  2 files changed, 49 insertions(+), 0 deletions(-)
>>>>>
>>>>> diff --git a/memory_mapping.c b/memory_mapping.c
>>>>> index 718f271..b92e2f6 100644
>>>>> --- a/memory_mapping.c
>>>>> +++ b/memory_mapping.c
>>>>> @@ -164,3 +164,37 @@ void memory_mapping_list_init(MemoryMappingList *list)
>>>>>      list->last_mapping = NULL;
>>>>>      QTAILQ_INIT(&list->head);
>>>>>  }
>>>>> +
>>>>> +#if defined(CONFIG_HAVE_GET_MEMORY_MAPPING)
>>>>> +int qemu_get_guest_memory_mapping(MemoryMappingList *list)
>>>>> +{
>>>>> +    CPUArchState *env;
>>>>> +    RAMBlock *block;
>>>>> +    ram_addr_t offset, length;
>>>>> +    int ret;
>>>>> +    bool paging_mode;
>>>>> +
>>>>> +    paging_mode = cpu_paging_enabled(first_cpu);
>>>>> +    if (paging_mode) {
>>>>
>>>> On SMP with n CPUs, we can do this check up to n times.
>>>>
>>>> On Linux, user-mode tasks have different page tables. If referring to
>>>> one page table only, we can get the memory of one user-mode task
>>>> only. To cover as much memory as possible, it's best to reference all
>>>> CPUs with paging enabled and walk all their page tables.
>>>>
>>>> A problem is that linear addresses of user-mode tasks can inherently
>>>> conflict: different user-mode tasks can have the same linear address.
>>>> So tools need to distinguish each PT_LOAD entry based on the pair of
>>>> linear address and physical address, not the linear address alone. I
>>>> don't know whether gdb does this.
>>>
>>> gdb can only process kernel space. Jan's gdb-python script may be able
>>> to process user-mode tasks, but we would need to get the user-mode
>>> task's registers from the kernel or from the note, and convert the
>>> virtual/linear address to a physical address.
>>>
>> 
>> After I sent this, I came up with the problem of page table coherency:
>> some page tables have not been updated yet, so we see older ones. So if we use
> 
> The page table is older? Do you mean the newest page table is in the
> TLB and has not been flushed to memory?
> 

I mean vmalloc() for the most part (to be honest, I don't know of other
possibilities now). In the stable state of the kernel, page tables are
allocated when user processes are created (around dup_mm()?, IIRC),
where the part for kernel space is copied from init_mm.pgd. They are
updated coherently from init_mm.pgd at runtime, when a page fault
happens. I described a page table that has not been updated yet as
old. For this reason, paging can lead to different results on
different CPUs.

>> all the page tables referenced by all CPUs, we face inconsistency of
>> some of the page tables. Essentially, we cannot avoid the issue that
>> we see a page table older than the actual state even if we use only
>> one page table, but by restricting ourselves to just one page table,
>> we can at least avoid the inconsistency between multiple page tables.
>> In other words, we can do paging processing normally even though the
>> table might be old.
>> 
>> So, I think
>> - using page tables for all the CPUs at the same time is problematic.
>> - using only one page table of the existing CPUs is still safe.
>> 
>> How about the code like this?
>> 
>>   cpu = find_cpu_paging_enabled(env);
> 
> If paging is enabled on two or more CPUs, which CPU should be chosen?
> We cannot say that one is better than another.
> 

I think so too. But the current code looks at only one CPU. Checking
all CPUs in order increases the chance of being able to do paging,
which must be better if users want to do paging.

Thanks.
HATAYAMA, Daisuke



