From: Luiz Capitulino
Subject: Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores
Date: Tue, 30 Jul 2013 14:51:27 -0400

On Mon, 29 Jul 2013 16:37:12 +0200
Laszlo Ersek <address@hidden> wrote:

> (Apologies for the long To: list; I'm including everyone who
> participated in
> <https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02607.html>).
> 
> Conceptually, the dump-guest-memory command works as follows (a rough
> code sketch follows the list):
> (a) pause the guest,
> (b) get a snapshot of the guest's physical memory map, as provided by
>     qemu,
> (c) retrieve the guest's virtual mappings, as seen by the guest (this is
>     where paging=true vs. paging=false makes a difference),
> (d) filter (c) as requested by the QMP caller,
> (e) write ELF headers, keying off (b) -- the guest's physmap -- and (d)
>     -- the filtered guest mappings,
> (f) dump RAM contents, keying off the same (b) and (d),
> (g) unpause the guest (if necessary).
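> 
> In code, the flow looks roughly like the sketch below; every name in it
> is invented for this cover letter (it is not the actual qemu
> implementation, just the shape of the data flow between the steps):
> 
>   /* Illustrative skeleton only -- invented names, no real qemu code. */
>   #include <stdbool.h>
>   #include <stdio.h>
> 
>   typedef struct { int dummy; } PhysMap;      /* stands in for (b) */
>   typedef struct { int dummy; } MappingList;  /* stands in for (c)/(d) */
> 
>   static void pause_guest(void)   { puts("(a) pause"); }
>   static void resume_guest(void)  { puts("(g) unpause"); }
> 
>   static PhysMap snapshot_physmap(void)
>   {
>       puts("(b) snapshot the guest-physical memory map");
>       return (PhysMap){ 0 };
>   }
> 
>   static MappingList collect_mappings(bool paging)
>   {
>       printf("(c) collect guest mappings, paging=%d\n", paging);
>       return (MappingList){ 0 };
>   }
> 
>   static void filter_mappings(MappingList *m)
>   {
>       (void)m; puts("(d) apply the QMP caller's filter");
>   }
> 
>   static void write_elf_headers(const PhysMap *p, const MappingList *m)
>   {
>       (void)p; (void)m; puts("(e) ELF headers from (b) and (d)");
>   }
> 
>   static void write_ram(const PhysMap *p, const MappingList *m)
>   {
>       (void)p; (void)m; puts("(f) RAM contents from (b) and (d)");
>   }
> 
>   int main(void)
>   {
>       bool paging = false;                          /* QMP "paging" arg */
> 
>       pause_guest();                                /* (a) */
>       PhysMap physmap = snapshot_physmap();         /* (b) */
>       MappingList maps = collect_mappings(paging);  /* (c) */
>       filter_mappings(&maps);                       /* (d) */
>       write_elf_headers(&physmap, &maps);           /* (e) */
>       write_ram(&physmap, &maps);                   /* (f) */
>       resume_guest();                               /* (g) */
>       return 0;
>   }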
> 
> Patch #1 affects step (e); specifically, how (d) is matched against (b)
> when "paging" is "true" and the guest kernel maps more guest-physical
> RAM than the guest actually has.
> 
> This can be done by non-malicious, clean-state guests (e.g. a pristine
> RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
> (coming directly from the guest page tables) exceeding the vmcore file's
> size.
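> 
> One way to avoid that is to truncate such over-long mappings to the
> guest-physical RAM that is actually present before emitting the PT_LOAD
> entries. A minimal sketch of the arithmetic (the struct, the function
> and the numbers below are all made up for the example):
> 
>   #include <inttypes.h>
>   #include <stdint.h>
>   #include <stdio.h>
> 
>   typedef struct {
>       uint64_t phys_addr;  /* mapping start, from the guest page tables */
>       uint64_t length;     /* length the guest claims to map */
>   } GuestMapping;
> 
>   /* Clamp a guest mapping to the RAM that is really backed, so that the
>    * resulting p_filesz can never reach past the end of the vmcore. */
>   static uint64_t clamp_to_ram(const GuestMapping *m, uint64_t ram_end)
>   {
>       if (m->phys_addr >= ram_end) {
>           return 0;                       /* nothing of it is backed */
>       }
>       if (m->length > ram_end - m->phys_addr) {
>           return ram_end - m->phys_addr;  /* cut off the unbacked tail */
>       }
>       return m->length;
>   }
> 
>   int main(void)
>   {
>       /* Made-up example: the guest maps 4 MiB starting 1 MiB below its
>        * 3 GiB RAM end, so only 1 MiB of the mapping is really backed. */
>       GuestMapping m = { 0xbff00000ULL, 4 << 20 };
>       uint64_t ram_end = 0xc0000000ULL;
> 
>       printf("p_filesz: 0x%" PRIx64 " -> 0x%" PRIx64 "\n",
>              m.length, clamp_to_ram(&m, ram_end));
>       return 0;
>   }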
> 
> Patches #2 to #4 are independent of the "paging" option (or, more
> precisely, affect both settings equally); they affect (b). Currently,
> input parameter (b), that is, the guest's physical memory map as provided
> by qemu, is implicitly represented by "ram_list.blocks". As a result, steps
> and outputs dependent on (b) will refer to qemu-internal offsets.
> 
> Unfortunately, this breaks when the guest-visible physical addresses
> diverge from the qemu-internal, RAMBlock based representation. This can
> happen e.g. for guests with more than 3.5 GB of RAM, due to the 32-bit
> PCI hole; see patch #4
> for a diagram.
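> 
> To make the divergence concrete: with a 32-bit PCI hole starting at
> 3.5 GiB and the remaining RAM mapped again from 4 GiB up (these numbers
> are just a typical example; the exact layout depends on the machine), a
> single contiguous RAMBlock offset range translates to guest-physical
> addresses like this:
> 
>   #include <inttypes.h>
>   #include <stdint.h>
>   #include <stdio.h>
> 
>   #define HOLE_BASE  0xe0000000ULL   /* 3.5 GiB: start of the PCI hole */
>   #define HIGH_START 0x100000000ULL  /* 4 GiB: where RAM resumes */
> 
>   /* Translate an offset into the (contiguous) main RAMBlock to the
>    * address the guest actually sees. Illustration only. */
>   static uint64_t ram_offset_to_gpa(uint64_t offset)
>   {
>       if (offset < HOLE_BASE) {
>           return offset;                       /* low RAM: identity */
>       }
>       return offset - HOLE_BASE + HIGH_START;  /* high RAM: above 4 GiB */
>   }
> 
>   int main(void)
>   {
>       uint64_t offsets[] = { 0x1000, HOLE_BASE - 0x1000,
>                              HOLE_BASE, HOLE_BASE + 0x20000000ULL };
>       for (size_t i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
>           printf("RAMBlock offset 0x%09" PRIx64
>                  " -> guest-phys 0x%09" PRIx64 "\n",
>                  offsets[i], ram_offset_to_gpa(offsets[i]));
>       }
>       return 0;
>   }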
> 
> Patch #2 introduces input parameter (b) explicitly, as a reasonably
> minimal map of guest-physical address ranges. (Minimality is not a hard
> requirement here; it just decreases the number of PT_LOAD entries
> written to the vmcore header.) Patch #3 populates this map. Patch #4
> rebases the dump-guest-memory command to it, so that steps (e) and (f)
> work with guest-phys addresses.
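> 
> The shape of that map is simple: an ordered list of guest-physical
> ranges, merging neighbours so the list (and with it the PT_LOAD count)
> stays small. A toy version, with names invented for this sketch rather
> than taken from the patch:
> 
>   #include <inttypes.h>
>   #include <stdint.h>
>   #include <stdio.h>
> 
>   typedef struct {
>       uint64_t target_start;  /* guest-physical start of the range */
>       uint64_t target_end;    /* guest-physical end (exclusive) */
>   } PhysRange;
> 
>   /* Append a range, merging it with the previous one when they touch. */
>   static size_t add_range(PhysRange *map, size_t n,
>                           uint64_t start, uint64_t end)
>   {
>       if (n > 0 && map[n - 1].target_end == start) {
>           map[n - 1].target_end = end;  /* contiguous: extend, not append */
>           return n;
>       }
>       map[n] = (PhysRange){ start, end };
>       return n + 1;
>   }
> 
>   int main(void)
>   {
>       PhysRange map[8];
>       size_t n = 0;
> 
>       /* Low RAM in two touching chunks, then high RAM above 4 GiB. */
>       n = add_range(map, n, 0x000000000ULL, 0x0000a0000ULL);
>       n = add_range(map, n, 0x0000a0000ULL, 0x0e0000000ULL);
>       n = add_range(map, n, 0x100000000ULL, 0x180000000ULL);
> 
>       for (size_t i = 0; i < n; i++) {
>           printf("range %zu: 0x%09" PRIx64 " .. 0x%09" PRIx64 "\n",
>                  i, map[i].target_start, map[i].target_end);
>       }
>       return 0;
>   }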
> 
> As a result, the "crash" utility can parse vmcores dumped for big x86_64
> guests (paging=false).
> 
> Please refer to Red Hat Bugzilla 981582
> <https://bugzilla.redhat.com/show_bug.cgi?id=981582>.
> 
> Disclaimer: as you can tell from my progress in the RHBZ, I'm new to the
> memory API. The way I'm using it might be wrong.

Series looks sane to me, but the important details go beyond my background
in this area, so I'd like an additional Reviewed-by before applying this
to the qmp-for-1.6 tree.


