
From: Qiao Nuohan
Subject: Re: [Qemu-devel] [PATCH v4 0/9] Make 'dump-guest-memory' dump in kdump-compressed format
Date: Thu, 27 Jun 2013 15:11:09 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.5) Gecko/20120607 Thunderbird/10.0.5

Sorry for replying late.

On 06/20/2013 04:57 PM, Stefan Hajnoczi wrote:

Please link to the code that writes DISKDUMP kdump files on a physical
machine.  I only see the crash utility code to read the DISKDUMP format,
but I haven't found anything in the Linux kernel, the crash utility, or
the kexec-utils code to actually write a DISKDUMP file.

You can refer to the following URL; the kdump-compressed format is
described in the file called IMPLEMENTATION.


I understand why you need temporary files, but my questions stand:

Have you looked at using ELF more efficiently instead of duplicating
kdump code into QEMU?  kdump is not a great format for the problem
you're trying to solve - you're not filling in the Linux-specific
metadata and it's a pain to write due to its layout.

Why can't you simply omit zero pages from the ELF?

Why can't you compress the entire ELF file and add straightforward
decompression to the crash utility?

As I have said, the main purpose of this work is *reducing* the *size* of the
dump file, to make delivering dump files more convenient.

Compared with migration, the "memory only" dump has a feature regression: it
supports neither compression nor excluding zero pages. This regression is what
makes these patches necessary.

You asked about using ELF more efficiently. To implement *excluding zero
pages*, *PT_LOAD* can be made use of: the p_memsz and p_filesz fields of a
PT_LOAD entry describe the memory size and the size of the corresponding data
in the dump file, respectively. A block that contains only zero pages will have
*p_filesz* set to 0.
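The idea can be sketched as follows (illustrative Python, not QEMU code; the
dict keys simply mirror the ELF program header fields):

```python
# Sketch: excluding zero pages by giving zero-only blocks p_filesz = 0.
# A zero-only block keeps its p_memsz, but no data for it is written
# to the dump file, so the file shrinks.

PAGE_SIZE = 4096

def build_phdrs(blocks):
    """blocks: list of (paddr, data) tuples; returns PT_LOAD-like dicts."""
    phdrs = []
    file_off = 0  # running offset of the data region in the dump file
    for paddr, data in blocks:
        all_zero = all(b == 0 for b in data)
        filesz = 0 if all_zero else len(data)
        phdrs.append({
            "p_paddr": paddr,
            "p_memsz": len(data),   # memory size is always recorded
            "p_filesz": filesz,     # zero-only blocks store no file data
            "p_offset": file_off,
        })
        file_off += filesz          # zero-only blocks consume no file space
    return phdrs

blocks = [
    (0x0000, bytes(PAGE_SIZE)),     # all-zero page: p_filesz becomes 0
    (0x1000, b"\x01" * PAGE_SIZE),  # non-zero page: stored in full
]
for ph in build_phdrs(blocks):
    print(hex(ph["p_paddr"]), ph["p_memsz"], ph["p_filesz"])
```

A reader reconstructs the zero-only ranges by noticing p_filesz < p_memsz and
filling the difference with zeros.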

However, this implementation faces a problem: the number of PT_LOAD entries
may *exceed* the range of e_phnum. Since zero pages occur *arbitrarily*, the
number of PT_LOAD entries may, in the worst case, reach the total number of
physical page frames. For example, 256MB of memory has 2^16 page frames, which
already exceeds the 16-bit range of e_phnum. Although sh_info can be extended
to hold the PT_LOAD count when e_phnum is not enough, sh_info's 2^32 range may
also be exceeded if the guest machine has more than 16TB of physical memory
(that won't occur soon, but it will happen one day).
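A quick arithmetic check of the limits mentioned above (the field widths come
from the ELF format; the rest is simple division):

```python
# Back-of-the-envelope check of the PT_LOAD count limits.
PAGE_SIZE = 4096
E_PHNUM_MAX = 0xFFFF          # e_phnum is a 16-bit field
SH_INFO_MAX = 0xFFFFFFFF      # sh_info (the PN_XNUM extension) is 32-bit

# Worst case: one PT_LOAD per page frame.
frames_256mb = (256 * 1024 * 1024) // PAGE_SIZE
print(frames_256mb)                 # 65536 entries for a 256MB guest
print(frames_256mb > E_PHNUM_MAX)   # already past what e_phnum can hold

# Memory size at which even the 32-bit sh_info extension runs out:
limit = (SH_INFO_MAX + 1) * PAGE_SIZE
print(limit // 2**40)               # in TB
```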

OTOH, the reason why I chose the kdump-compressed format is that ELF doesn't
support compression and filtering yet. To implement compression and filtering
on ELF, we would need to define a specific ABI among qemu, the crash utility
and makedumpfile, and after that further work would be needed to port those
tools to it.

Compared with ELF, the kdump-compressed format already supports compression
and filtering, and we don't need to modify tools like crash and makedumpfile.
For these reasons, the kdump-compressed format is the better choice.

From your comments, I think your objection comes first from the *temporary
files*. What if temporary files are not used? The flattened kdump-compressed
format, which is supported by crash and makedumpfile, offers a mechanism to
avoid seeking when sending data through a *pipe*; with it, I don't need to
cache page data in temporary files.
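The flattened idea can be sketched like this (a simplified illustration; the
real makedumpfile record layout differs in details such as its signature
header):

```python
import io
import struct

# Sketch of the "flattened" mechanism: instead of seeking in the output,
# the writer emits (offset, size) headers followed by the data, in whatever
# order the data is produced. A rearranging tool later replays the records
# with seeks into a regular, seekable file.

END_MARKER = (-1, -1)

def write_flat(pipe, pieces):
    """pieces: iterable of (target_offset, data), emitted in any order."""
    for off, data in pieces:
        pipe.write(struct.pack(">qq", off, len(data)))  # record header
        pipe.write(data)                                # record payload
    pipe.write(struct.pack(">qq", *END_MARKER))         # stream terminator

def rearrange(pipe, out):
    """Rebuild the seekable dump file from the flat stream."""
    while True:
        off, size = struct.unpack(">qq", pipe.read(16))
        if (off, size) == END_MARKER:
            break
        out.seek(off)               # seeking happens here, not in the writer
        out.write(pipe.read(size))

pipe = io.BytesIO()
write_flat(pipe, [(8, b"tail"), (0, b"head....")])  # out-of-order pieces
pipe.seek(0)
out = io.BytesIO()
rearrange(pipe, out)
print(out.getvalue())  # the pieces land at their target offsets
```

This is why the writer side (qemu, in this case) never needs a seekable
output or a temporary cache file.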

As for the metadata you pointed out (do you mean VMCOREINFO here?), it
contains debugging information related to kernel memory. That debugging
information is useful when present, because it allows filtering more kinds of
memory. But for now we only need to exclude zero pages, and there is no formal
mechanism between qemu and the linux kernel for obtaining it, so going without
the metadata still satisfies our needs.

Thinking over all of the above, I still choose the kdump-compressed format. What's your opinion about this?

Qiao Nuohan
