
Re: [Qemu-devel] [RFC][PATCH 0/5 v2] dump memory when host pci device is used by guest

From: Wen Congyang
Subject: Re: [Qemu-devel] [RFC][PATCH 0/5 v2] dump memory when host pci device is used by guest
Date: Tue, 13 Dec 2011 17:20:24 +0800
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100413 Fedora/3.0.4-2.fc13 Thunderbird/3.0.4)

At 12/13/2011 02:01 PM, HATAYAMA Daisuke wrote:
> From: Wen Congyang <address@hidden>
> Subject: Re: [Qemu-devel] [RFC][PATCH 0/5 v2] dump memory when host pci
> device is used by guest
> Date: Tue, 13 Dec 2011 11:35:53 +0800
>> Hi, Hatayama-san
>> At 12/13/2011 11:12 AM, HATAYAMA Daisuke wrote:
>>> Hello Wen,
>>> From: Wen Congyang <address@hidden>
>>> Subject: [Qemu-devel] [RFC][PATCH 0/5 v2] dump memory when host pci device
>>> is used by guest
>>> Date: Fri, 09 Dec 2011 15:57:26 +0800
>>>> Hi, all
>>>> 'virsh dump' cannot work when a host pci device is used by the guest. We
>>>> discussed this issue here:
>>>> http://lists.nongnu.org/archive/html/qemu-devel/2011-10/msg00736.html
>>>> We have decided to introduce a new dump command to dump guest memory. The
>>>> core file's format can be ELF.
>>>> Note:
>>>> 1. The guest must be x86 or x86_64; other architectures are not supported.
>>>> 2. If you use an old gdb, gdb may crash. I use gdb-7.3.1, and it does not
>>>>    crash.
>>>> 3. If the OS is in the second kernel, gdb may not work well, but crash can
>>>>    work by specifying '--machdep phys_addr=xxx' on the command line. The
>>>>    reason is that the second kernel updates the page table, so we cannot
>>>>    get the page table of the first kernel.
>>> I guess the current implementation still breaks the vmalloc'ed areas whose
>>> page tables were originally located in the first 640kB, right? If you want
>>> to do this correctly, you need to identify the position of the backup
>>> region and read the 1st kernel's page tables from it.
>> I do not know anything about the vmalloc'ed area. Can you explain it in
>> more detail?
> It's a memory area that is not straight-mapped. To read the area, it's
> necessary to look up the guest machine's page tables. If I understand
> correctly, your current implementation translates the vmalloc'ed area so
> that the generated vmcore is linearly mapped w.r.t. virtual addresses, for
> gdb to work.

Do you mean that the page table for the vmalloc'ed area is stored in the first
640KB, and may be overwritten by the second kernel (after this region has been
backed up)?
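For reference, the translation being discussed can be sketched as a minimal
x86_64 4-level page-table walk, which is what a dump tool must do to read a
non-straight-mapped (vmalloc'ed) address. Everything here is invented for
illustration: guest_mem is a fake physical-memory array, and the walk ignores
huge pages (the PS bit), NX, and canonicality checks.

```c
/* Hypothetical sketch: translate a guest virtual address to a physical
 * address by walking x86_64 4-level page tables, as a dump tool must do
 * for vmalloc'ed (non-straight-mapped) areas.  guest_mem and the memory
 * layout are invented; a real tool would read guest physical memory. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096ULL
#define ENTRY_MASK  0x000ffffffffff000ULL   /* bits 51:12 of an entry */
#define PRESENT     0x1ULL

static uint8_t guest_mem[16 * 4096];        /* fake guest physical memory */

/* read one 64-bit page-table entry from "guest physical memory" */
static uint64_t guest_phys_read64(uint64_t paddr)
{
    uint64_t v;
    memcpy(&v, guest_mem + paddr, sizeof(v));
    return v;
}

/* walk PML4 -> PDPT -> PD -> PT; returns 0 on a non-present entry.
 * Huge pages (PS bit at the PDPT/PD levels) are ignored in this sketch. */
static uint64_t virt_to_phys(uint64_t cr3, uint64_t vaddr)
{
    static const int shifts[] = { 39, 30, 21, 12 };  /* index bits per level */
    uint64_t table = cr3 & ENTRY_MASK;
    for (int level = 0; level < 4; level++) {
        uint64_t idx = (vaddr >> shifts[level]) & 0x1ff;
        uint64_t entry = guest_phys_read64(table + idx * 8);
        if (!(entry & PRESENT))
            return 0;                        /* unmapped: walk fails here */
        table = entry & ENTRY_MASK;
    }
    return table | (vaddr & (PAGE_SIZE - 1));
}
```

This is also why the 640kB issue matters: if any of the four table pages the
walk touches sat in memory the second kernel has since reused, the walk reads
garbage unless the read is redirected into the backup copy.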

> kdump saves the first 640kB of physical memory into the backup region. I
> guess that, for some vmcores created by the current implementation, gdb and
> crash cannot correctly see the vmalloc'ed memory areas that need page tables

Hmm, IIRC, crash does not use the CPU's page tables, and gdb uses the
information in PT_LOAD to read memory areas.
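The PT_LOAD lookup a consumer like gdb performs can be sketched roughly as
follows: find the loadable segment whose virtual-address range covers the
requested address, then map it to an offset in the core file. The struct
mirrors the relevant ELF program-header fields, but the helper and any segment
values are invented for illustration.

```c
/* Sketch of resolving a virtual address through ELF PT_LOAD headers,
 * the way a vmcore consumer reads memory from the file.  Only the
 * fields needed for the lookup are modeled; this is not a real ELF
 * parser. */
#include <stdint.h>

struct load_seg {
    uint64_t p_vaddr;    /* virtual address where the segment lives */
    uint64_t p_paddr;    /* physical address (crash tends to use this) */
    uint64_t p_offset;   /* offset of the segment's data in the file */
    uint64_t p_filesz;   /* bytes of the segment present in the file */
};

/* return the file offset holding vaddr, or -1 if no PT_LOAD covers it */
static int64_t vaddr_to_file_offset(const struct load_seg *segs, int n,
                                    uint64_t vaddr)
{
    for (int i = 0; i < n; i++)
        if (vaddr >= segs[i].p_vaddr &&
            vaddr < segs[i].p_vaddr + segs[i].p_filesz)
            return (int64_t)(segs[i].p_offset + (vaddr - segs[i].p_vaddr));
    return -1;
}
```

This is why a vmcore whose vmalloc'ed pages were pre-translated into
linearly-mapped PT_LOAD segments works for gdb without a page-table walk: the
lookup above is all gdb needs.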

> placed in the 640kB region. For example, try the mod sub-command: kernel
> modules are allocated in the vmalloc'ed area.
> I have developed a very similar logic for sadump; look at sadump.c in
> crash. The logic itself is very simple, but debugging information is
> necessary. Documentation/kdump/kdump.txt and the following paper
> explain the backup region mechanism very well, and the implementation
> around there remains the same today.

Hmm, we cannot use debugging information on the qemu side.
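For reference, the backup-region redirection described in
Documentation/kdump/kdump.txt amounts to the following sketch: reads of
first-kernel physical memory below 640kB must be redirected into the saved
copy, because the second kernel reuses that range. BACKUP_SRC_END matches the
640kB figure from the thread; backup_base is a hypothetical value a real tool
would have to locate in the dump.

```c
/* Sketch of the kdump backup-region redirection: the capture kernel
 * copies the first 640kB of physical memory aside before reusing it,
 * so a tool reading first-kernel data must divert low reads into that
 * backup copy.  backup_base is illustrative, not a real location. */
#include <stdint.h>

#define BACKUP_SRC_END (640 * 1024ULL)   /* first 640kB gets backed up */

/* redirect a first-kernel physical address into the backup region */
static uint64_t redirect_paddr(uint64_t paddr, uint64_t backup_base)
{
    if (paddr < BACKUP_SRC_END)
        return backup_base + paddr;      /* read from the saved copy */
    return paddr;                        /* untouched memory: read in place */
}
```

Combined with a page-table walk, this is what lets a tool see vmalloc'ed areas
whose page tables originally sat below 640kB.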

>   http://lse.sourceforge.net/kdump/documentation/ols2oo5-kdump-paper.pdf
> On the other hand, have you written a patch for crash to read this
> vmcore? I expect it's possible with a small fix to the kcore code.

crash can read this vmcore without any change.

Wen Congyang.

>> Do you mean dumping the guest's memory while it is running (without
>> stopping the guest)? If so, this command cannot be used for creating a
>> live dump.
> I mean a dump that keeps the machine running, as you say.
> Do you have a plan for live dump?
> Thanks.
> HATAYAMA, Daisuke
